Assessment

Ed-Tech Pilot Tests in Districts Informal, Short on Student Feedback, Study Finds

By Sean Cavanagh — November 17, 2015 5 min read

Cross-posted from the Marketplace K-12 blog

School districts vary enormously in how they judge ed-tech products through pilot tests, following a largely informal process that often lacks a clear approach for weighing the opinions of teachers and students.

Those are some of the main findings of a study of the pilot-testing process, released today by the organization Digital Promise, in a review that focuses on the experiences of six districts with very different student populations.

Pilot tests can serve as a critical vehicle for districts considering whether to adopt potentially costly educational-technology tools and platforms for use in their classrooms.

Districts typically agree to allow companies to test products, in the hope of meeting a specific need for teachers and students in their K-12 systems. That process allows districts to experiment with ed-tech that potentially can help them improve teaching and learning. Ed-tech companies, in turn, get the opportunity to showcase their goods to potential buyers and lure them into making full-blown purchases.

Yet the study lays bare the factors that can stymie both parties in trying to get what they want.

Most of the districts in the study wanted to test the impact of ed-tech products by looking at gains in student test scores—not a surprise, given the pressure schools face to raise achievement. Those districts, like others around the country, were looking for a way to judge ed-tech products independent of companies’ claims, and independent of vendor-sponsored research.

But the participating districts also said that in many cases state assessment results came in too late for them to decide whether to carve out money in their budgets for that technology. (A few districts in the study got around this by using locally crafted tests, which could be administered on their own timetables.) K-12 officials also had misgivings about trying to link test-score gains to the influence of a single product, as opposed to other factors.

For ed-tech developers, the study found, staging a successful pilot in a district does not guarantee landing a contract to do work there. By the time a pilot is complete—say, at the end of an academic year—the window for the district to set aside money and set in motion a procurement process to buy the product for the following academic year has typically already closed, the study found.

“There’s an asynchronous calendar” between pilots and district purchases, said Valerie Adams-Bass, the lead author of the study, in an interview.

In addition to the time barriers, “many districts felt like, ‘we need to spend more time to try this product before we invest in it,’ ” said Adams-Bass, a postdoctoral fellow at the University of California, Davis, which led the study.

‘Mature’ Student Feedback

The research on pilots builds on an earlier study commissioned last year by Digital Promise, a congressionally authorized organization that focuses on improving education through technology and research, and the Education Industry Association, and conducted by researchers at Johns Hopkins University. That study found a broad disconnect between ed-tech vendors and K-12 systems: Companies have only a vague sense of districts’ buying needs, and how to interest them in their products; K-12 leaders, meanwhile, said they’re overwhelmed by the products pitched to them, and lack the time and ability to evaluate them thoroughly.

That study also showed that districts rely heavily on pilots to test ed-tech products, and that the definition of pilots varied enormously.

To probe the topic in more depth, Digital Promise recruited six districts of varying sizes to participate in its new study: the District of Columbia; Fulton County, Ga.; Piedmont City, Ala.; South Fayette Township, Pa.; Vista Unified, Calif.; and West Ada, Idaho. In some cases, district plans to pilot were already underway before the study began. Different vendors piloted ed-tech in the various K-12 systems. Fulton was the biggest district, with more than 95,000 students; Piedmont was the smallest, with about 1,200.

District officials who took part in the study said they valued teacher and student feedback about products but rarely collected it in a formal way, Adams-Bass noted. Information about pilot tests tends to trickle up from teachers to principals, and from students to teachers, the study found.

But the lack of a more systematic way of capturing what students think of ed-tech products is a lost opportunity for developers, the study suggests.

Students’ comments about ed-tech products were “surprisingly mature, and they had particularly insightful comments about advice for education-technology developers,” the study found. “The student voice is vital to consider throughout a pilot process, as they are the true end-users.”

The insights offered by students during a pilot helped shape the thinking of both district officials and developers working in the South Fayette Township district, recalled Aileen Owens, the system’s director. The district partnered with Carnegie Mellon University and the University of Pittsburgh, and used a product called Vex IQ for a project focused on computational thinking and robotics programming. The district and its partners are making adjustments this year, based on what students liked, and what they didn’t, Owens said.

“That was eye-opening, to learn about students’ discoveries and hear their voice,” Owens said.

The study makes a series of recommendations to districts (and ed-tech developers) for staging effective pilots. Among them:

  • Teachers should “proactively” give feedback to administrators about pilots, and tell administrators about students’ experiences with ed-tech products. That process should be formalized;
  • The ideas put forward by teachers and students should be incorporated throughout the process to create buy-in for the product;
  • If districts are using student outcomes to judge the product, they should develop an evaluation plan and research design to measure the impact;
  • The length of a pilot should be sufficient so that districts have the option of making a buying decision;
  • Plans to evaluate products, and plans for what will occur after the pilot, should be conveyed to all parties;
  • A point-person should be assigned to deal with tech issues and other problems that emerge;
  • Teachers and district officials should take note of things that didn’t work during the pilot, and things that were unexpected; and
  • K-12 systems should provide support for teachers using the new ed-tech, and encourage students to help educators implement the product.


A version of this news article first appeared in the Digital Education blog.