
What Sort of Research Should We Be Doing on Blended Learning?

By Tom Vander Ark — April 05, 2013

A foundation advisor asked, “Is there a smart approach donors might take with blended learning research?” Following are seven areas that could use some
attention.

1. Better growth measures.
In October, the International Association for K-12 Online Learning (iNACOL) released “Measuring Quality From Inputs to Outcomes,” which lays out a framework for
improving the quality of online learning. The report calls for growth models based on the growth of individual students over time, not on cohorts. (See my
review of the report, Better Online Schools & Learning Options.)

In February, the K12 Academic Report called for better growth measures.

K12 sees big limitations in the tests currently used to estimate growth and believes adaptive tests (like Scantron or MAP from NWEA) provide more
accurate measures. Using adaptive tests, K12 observed 2011-12 student growth of about one year of progress in math and two years in reading. That’s good
progress, and probably more rapid than in previous settings, but because most students are several years behind, they show low levels of proficiency on
end-of-year grade-level tests.

The Department of Education announced a data partnership with SRI. Their first paper, Evidence Framework for Innovation and Excellence in Education, was finalized this
week. In my review of the draft I noted that we “need to
invent new definitions of comparability.” We need to develop new methods to compare thousands of observations in super-gradebooks of standards-aligned
data. We should be able to compare data sets for children progressing through third-grade learning expectations and, using multiple measures, make
inferences about their rate of progress.
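
To make that concrete, here’s a rough sketch of what inferring a rate of progress from multiple observations could look like. It’s a minimal illustration in Python with invented scores on a hypothetical vertical scale, not a real psychometric model:

```python
# A minimal sketch (invented scores, hypothetical vertical scale) of
# inferring one student's rate of progress from multiple observations.
from datetime import date

# Each observation: (date, score on a common vertical scale).
observations = [
    (date(2012, 9, 15), 480),
    (date(2012, 11, 20), 495),
    (date(2013, 1, 10), 510),
    (date(2013, 3, 5), 530),
]

# Convert dates to fractional years since the first observation.
t0 = observations[0][0]
xs = [(d - t0).days / 365.0 for d, _ in observations]
ys = [score for _, score in observations]

# Ordinary least-squares slope: scale points gained per year.
n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sum(
    (x - x_bar) ** 2 for x in xs
)

print(f"Estimated growth: {slope:.0f} scale points per year")
```

The catch, of course, is the “common vertical scale” assumption--getting thousands of different measures onto comparable scales is the comparability problem in miniature.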

Other topics mentioned in the SRI paper but given short shrift: micro-tagging (key to comparability), motivational profiles (key to persistence), and iterative
development (key to quality and cost-effectiveness).

To develop common growth measures, I’d recommend starting with MetaMetrics’ Lexile and Quantile scales. We just need a system where any assessment
developer (including thousands of teachers on MasteryConnect and Naiku) can link to those scales and improve the correlation with use.
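
As a simple illustration of what “linking” means here, the sketch below maps a local assessment’s raw scores onto a shared scale using students who took both measures. The numbers and the mean-sigma method are illustrative only; real linking to Lexile or Quantile scales uses far more rigorous psychometrics:

```python
# A sketch of linear (mean-sigma) equating: mapping a local assessment's
# raw scores onto a shared scale using students who took both measures.
# All numbers are invented for illustration.
from statistics import mean, stdev

# Paired scores for the same students: (local raw score, shared-scale score).
pairs = [(12, 410), (18, 470), (22, 505), (27, 560), (31, 600)]
raw = [r for r, _ in pairs]
shared = [s for _, s in pairs]

# Mean-sigma equating matches the means and standard deviations.
a = stdev(shared) / stdev(raw)
b = mean(shared) - a * mean(raw)

def to_shared_scale(raw_score: float) -> float:
    """Map a raw score from the local assessment onto the shared scale."""
    return a * raw_score + b

print(round(to_shared_scale(20)))  # estimated shared-scale score
```

The more assessments that link this way, and the more use data that accumulates, the better the correlations should get.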

After consultation with the Department of Education (especially Rich Culatta), PARCC, and Smarter Balanced, SRI and/or Digital Learning Now! (DLN) could publish a paper on growth measures and comparability that provides more specific direction.

2. Better gradebooks and profiles.
In October, DLN released Data Backpacks: Portable Records & Learner Profiles and made the case for:



  1. Portable records. An expanded gradebook of data should follow a student grade to grade and school to school.

  2. Learner profiles. Parents and teachers should manage a comprehensive learner profile that will drive recommendations, identify problems, and share successes.

Utah passed a data backpack bill--a useful early example of the kind of state leadership that foundations could encourage and support.

A foundation could fund improvement of existing gradebooks (accepting input from multiple sources, competency tracking, achievement recognition, reporting)
for use on widely adopted platforms and with widely adopted data standards (e.g., Ed-Fi, inBloom). Super-gradebooks will be key to managing competency-based
environments.
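
To suggest the shape of such a record, here’s a hypothetical sketch of a portable, standards-aligned learner record. The field names are invented for illustration and aren’t drawn from Ed-Fi or any real standard:

```python
# A hypothetical sketch of a portable, standards-aligned learner record.
# Field names are illustrative, not drawn from Ed-Fi or any real standard.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str     # e.g., "adaptive test", "teacher-scored task"
    standard: str   # e.g., a Common Core standard identifier
    score: float    # normalized 0-1 mastery estimate
    date: str       # ISO date of the observation

@dataclass
class LearnerRecord:
    student_id: str
    evidence: list[Evidence] = field(default_factory=list)

    def mastery(self, standard: str) -> float | None:
        """Latest mastery estimate for one standard, from any source."""
        matches = [e for e in self.evidence if e.standard == standard]
        if not matches:
            return None
        return max(matches, key=lambda e: e.date).score  # ISO dates sort correctly

# A record that can follow the student grade to grade and school to school.
record = LearnerRecord("S-001")
record.evidence.append(Evidence("adaptive test", "3.NF.A.1", 0.6, "2012-11-01"))
record.evidence.append(Evidence("teacher-scored task", "3.NF.A.1", 0.8, "2013-02-15"))
print(record.mastery("3.NF.A.1"))  # 0.8 -- the most recent estimate
```

The point of the sketch is portability: evidence from multiple sources accumulates in one record that any new school or platform could read.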

Foundations could sponsor a ‘learner profile design project’ with a district or network (of schools on the same platform). That could include helping Michigan’s EAA improve the Buzz recommendation engine or sponsoring projects with Education Elements or Edmodo schools.

On the need for robust data systems and personal, secure student records, John Bailey said, “We need something similar to the research RAND and others did
that eventually led to the $20 billion electronic medical record program under the stimulus. That is a great
model for us in education given the similar regulatory frameworks, privacy issues, and benefits--personalized medicine and personalized education.”

KIPP and other networks track their grads into college. Florida was an early leader in making this sort of tracking through postsecondary education and into the
workforce possible. Long-term support for tracking graduates of blended schools, networks, and districts into the workforce could prove useful.

3. Profiling models.
We’re so early in the evolution of blended learning that it continues to be important to catalog new school models (e.g., Innosight’s Blended Learning Universe, NGLC profiles, and the Dell Family
Foundation Blended Learning Profiles) and to draw
lessons from implementation (e.g., the DLN Blended Learning Implementation Guide).
Supporting efforts to review and catalog learning platforms and apps could also be useful.

4. Research.
It’s too early, and the field too dynamic, to conduct long-term controlled trials comparing school models, learning platforms, and/or learning progressions. John
Bailey, Executive Director of DLN, said, “The technology and models are new and quickly evolving, which makes long-term, experimental design studies
difficult.”

Bailey suggests studies of local capacity and implementation: “Why do some implementations work and others don’t? What leads to success?” Scaled solutions
like Read 180 and ST Math have plenty of controlled-trial data and show
big gains in most places, smaller ones in others. We need to understand the elements of successful implementations and need the ability to exclude poor implementations
from studies.

5. Classroom trials.
After hosting the Automated Student Assessment Prize, Common Pool and Getting Smart are planning Classroom Trials of secondary writing assessment products during the
2013-14 school year.

Similarly, short-cycle trials and prizes would be a potentially useful way to accelerate improvement and adoption of adaptive reading and math instruction
systems in the elementary grades. A network of lab schools could run continual short-cycle app trials.

I’m fascinated by Personal Experiments in health, a site that helps individuals plug into trials or start
their own. I’ve suggested that helping teachers self-experiment could boost learning (but the differences in implementation mentioned above make this challenging).

6. R&D.
Rather than traditional evaluation, what is needed now is smart design capacity. It could prove useful to embed university faculty in school networks
to support iterative development and short-cycle trials.

iNACOL members identified these design priorities:



  • Assessments aligned with program goals and course objectives--mastery of standards and attainment of broader competencies needed for career, college,
    and citizenship success.

  • Teacher and leader preparation and career-long development models.

  • Instructional and course designs for the range of models and student types, including examinations of various pacing, forms of interaction, groupings,
    etc.

  • Data and learner analytics to support continuous school and program improvement.

7. Policy research & design.
It’s tempting to just focus on school models but, as John Bailey points out, “unless a state’s policy environment supports that model, it isn’t possible to
scale.”

DLN and iNACOL are both engaged in policy design projects. The DLN Smart Series addresses
critical issues at the intersection of digital learning and implementation of the Common Core Standards. The last scheduled paper, to be published in June,
will address the quality of online learning and will make recommendations to providers, policymakers, and authorizers. The eight-part series will be updated
and published as an ebook in August. Foundations could support regional blended learning conferences and a second Smart Series.

iNACOL has emerged as the leading voice for competency-based learning. Competency Works is an online community and
research resource. iNACOL is also developing detailed funding recommendations for online and blended schools.


Disclosures: K12, MIND Research Institute, and Digital Learning Now! are Getting Smart Advocacy Partners. Tom is a director of iNACOL. MasteryConnect
and Edmodo are Learn Capital portfolio companies, where Tom is a partner.


The opinions expressed in Vander Ark on Innovation are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.