Privacy & Security

U.S. House Hearing on Algorithms & Big Data: 5 Takeaways for Schools

By Benjamin Herold — November 29, 2017

The Energy and Commerce Committee of the U.S. House of Representatives held a hearing Wednesday on the role algorithms play in how companies collect and analyze consumer data, then use that information to target consumers.

Education was less of an explicit focus than headline-grabbing data breaches (Equifax, Uber), election interference and social-media manipulation, and the contentious debate over net neutrality. But there was still lots for the K-12 community to chew on.

Here are five big takeaways.


1. Loss of privacy is a major concern.

More than 9 of 10 adults believe that “consumers have lost control of how personal information is collected and used by companies, and 68 percent believe current laws are not good enough in protecting people’s privacy online,” argued Laura Moy, the director of the Center on Privacy and Technology at Georgetown Law, in her written testimony.

Here, she argued, K-12 students actually have it better than consumers at large, because there are at least some federal privacy laws (most notably the Family Educational Rights and Privacy Act, or FERPA, and the Children’s Online Privacy Protection Act, or COPPA) in place.

But many in the K-12 community argue that FERPA, in particular, is outdated. Some critics also contend the law was significantly weakened by regulatory changes approved by President Obama’s education department. As a result, the argument goes, the legal safety net protecting kids and families is full of gaping holes, and it generally fails to account for many of the modern technologies, data-gathering techniques, and uses of algorithms (basically, a series of steps that tells a computer how to solve a problem) that are common among ed-tech products today.

So far, such concerns have generated lots of state-level legislative activity. In the nation’s capital, it’s been mostly talk and little action, although there are again rumblings of potential efforts to update FERPA and issue new guidance on COPPA. This Friday, for example, the Federal Trade Commission and the U.S. Department of Education will be jointly hosting a public workshop on ed-tech and privacy, with a focus on how the two laws might be bolstered to better protect students.


2. Companies are gathering LOTS of data on consumers. But what they actually collect is only the start of the potential problem.

Credit Michael Kearns, the chair of the computer and information science department at the University of Pennsylvania, with the catchphrase of the day: “Data intimacy.”

In his written testimony, Kearns defined the term as “the notion that machine learning enables companies to routinely draw predictions and inferences about consumers that go far deeper than the face-value of data collected as part of consumers’ online activities.” Examples include ascertaining your romantic partners, religious beliefs, and political affiliations based on where you shop and what you search for on the internet.
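To make the idea concrete, here is a minimal, invented sketch of how an innocuous signal like shopping categories can be turned into a guess about a sensitive attribute. Nothing here comes from the hearing; the data, labels, and crude scoring rule are all hypothetical stand-ins for the far richer data and models real companies use.

```python
from collections import defaultdict

# Invented training data: (shopping categories, self-reported political leaning).
# Real systems would use far richer behavioral data and actual machine-learning models.
training = [
    (["outdoor_gear", "pickup_accessories", "church_supplies"], "conservative"),
    (["organic_groceries", "transit_pass", "yoga_gear"], "progressive"),
    (["hunting_supplies", "country_music", "church_supplies"], "conservative"),
    (["farmers_market", "bike_parts", "vinyl_records"], "progressive"),
]

# Count how often each shopping category co-occurs with each label.
counts = defaultdict(lambda: defaultdict(int))
for categories, label in training:
    for cat in categories:
        counts[cat][label] += 1

def infer_leaning(categories):
    """Score each label by summing co-occurrence counts -- a crude stand-in
    for the inference models Kearns describes."""
    scores = defaultdict(int)
    for cat in categories:
        for label, n in counts[cat].items():
            scores[label] += n
    return max(scores, key=scores.get) if scores else "unknown"

# A new shopper who never disclosed any political affiliation.
print(infer_leaning(["church_supplies", "pickup_accessories"]))  # -> conservative
```

Even this toy version never needs the shopper to disclose anything; the inference rides entirely on behavior that looks harmless on its face.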

Particularly worrisome, Kearns said, is that machine-learning algorithms are increasingly being used to determine users’ emotions, moods, and mental states.

Such information can be used for “vulnerability-based marketing,” in which people are targeted with particular ads or services based on an algorithm’s determination of their emotional state. One potential example cited by Kearns: predatory loans aimed at low-income people considering for-profit higher education institutions.

Facebook also recently announced it would use its algorithms to help identify potential suicide risks.

All that likely resonates in the K-12 world, where some school districts are already paying companies to monitor the social-media feeds of their students for potential warning signs, and where the ed-tech industry has grown increasingly focused on “social-emotional learning.” Parents and activists across the political spectrum have voiced alarm at the prospect of companies using surveys, webcams, clickstream data, and other techniques to create psychological and behavioral profiles of students in the name of better understanding their mindsets and preferences, and targeting content accordingly.

Kearns’ advice to congressional lawmakers is also relevant for K-12 policymakers and officials: When considering how to legislate, regulate, and contract with technology providers, don’t focus solely on the data they are directly collecting. Take the time to also consider all the inferences and predictions that companies might be able to draw from those data, especially if they start combining them with other information.
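As a hypothetical illustration of that last point, the sketch below joins a small set of directly collected records with an outside dataset on a shared quasi-identifier. The field names and records are invented; the point is simply that the combined profile says more about a student than either dataset does on its own.

```python
# Records an ed-tech product might collect directly (invented for illustration).
app_records = [
    {"student_id": "s1", "zip": "19104", "birth_year": 2008, "minutes_on_math": 34},
    {"student_id": "s2", "zip": "60614", "birth_year": 2007, "minutes_on_math": 12},
]

# A separate, outside dataset keyed by the same quasi-identifiers (also invented).
outside_data = {
    ("19104", 2008): {"household_income_band": "low", "recent_address_change": True},
    ("60614", 2007): {"household_income_band": "high", "recent_address_change": False},
}

# Joining on (zip, birth_year) enriches each profile with attributes the product
# never collected -- the kind of combination Kearns urges policymakers to consider.
combined = [
    {**rec, **outside_data.get((rec["zip"], rec["birth_year"]), {})}
    for rec in app_records
]

for profile in combined:
    print(profile)
```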


3. Algorithmic bias is more complicated than you might think.

The possibility of algorithmic bias has started to receive more attention in K-12. What if software programs recommend a less-rigorous curriculum or lesser career-field opportunities to certain groups of students?

Such concerns have already been documented in the consumer space, said Catherine Tucker, a professor of management science and marketing at MIT’s Sloan School of Management.

She described a study in which researchers placed an advertisement on Facebook, Google, and Twitter featuring job opportunities in science and technology. They found that the companies’ targeting algorithms ended up showing the ads to 40 percent more men than women.

But further research showed that the algorithm itself wasn’t skewed in favor of men, Tucker said. Nor were there cultural factors at work that led women to be less responsive to the ad.

Instead, she said, the problem derived from hidden economic forces. The companies’ algorithms were essentially running auctions that let a variety of advertisers bid on the opportunity to reach particular audiences. Other advertisers were willing to pay extra to reach women, and the algorithms, working to keep costs down for the researchers’ campaign, ended up steering their job ad away from the more-expensive female audience.
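A toy simulation (with invented bid values, not figures from Tucker’s study) shows how that dynamic can skew delivery even though nothing in the mechanism prefers men: other advertisers simply bid more, on average, to reach women, so a cost-conscious job ad wins fewer of those impressions.

```python
import random

random.seed(0)

JOB_AD_BID = 1.00  # the flat amount the hypothetical job ad offers per impression

def competing_bid(viewer):
    # Invented numbers: other advertisers, on average, pay a premium to reach women.
    return random.uniform(0.7, 1.4) if viewer == "women" else random.uniform(0.4, 1.1)

impressions = {"men": 0, "women": 0}
for _ in range(10_000):
    viewer = random.choice(["men", "women"])
    # The auction simply awards the slot to the highest bidder.
    if JOB_AD_BID > competing_bid(viewer):
        impressions[viewer] += 1

print(impressions)  # the job ad wins roughly twice as many male impressions
```

The skew emerges purely from price competition, which is Tucker’s point: the auction is neutral, but the outcome is not.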

The lesson for policymakers, in K-12 and beyond?

There’s no easy solution, Tucker said.

Just making algorithms open or transparent (as New York City is currently considering forcing public agencies to do) won’t necessarily prevent bias, she said. And neither will auditing the data that are used to train them.


4. Sunlight may not be an adequate disinfectant.

In the big-data-and-algorithms field, transparency isn’t just about opening up the algorithms.

On a far more basic level, policymakers and regulators have pushed to make sure companies (including those in ed tech) are publicly posting their privacy policies, so consumers (and students, parents, and teachers) can ostensibly know what data are being collected and how they are being used.

But there’s a big problem with that approach, said law professor Omri Ben-Shahar of the University of Chicago.

It doesn’t work.

Mandated disclosure rules are “entirely ineffective” and an “empty ritual,” Ben-Shahar said in his written testimony. Most people don’t read them. If they do read them, they don’t understand them. And if they do understand them, they don’t care.

That’s not the consensus view, of course. Maryland law professor Frank Pasquale, who was also on hand to testify, is a big proponent of more transparency and greater disclosure.

One of Pasquale’s points that the ed-tech vendor community might take to heart: There’s a lot of talk about data privacy, security, and transparency impeding innovation, he said. But developing and employing good data practices can also promote innovation and be a source of competitive advantage.


5. Despite all the worry about algorithms, data collection, and privacy, most people don’t seem willing to change their behavior.

When it comes to data privacy, one of the big questions for schools, companies, parents, and activists alike is whether people care enough about the potential risks of massive data collection and algorithmic targeting to forgo the conveniences and benefits that come with them.

So far, said Tucker of MIT, there’s not a lot of evidence that’s the case.

She cited another study that highlighted the so-called “privacy paradox.”

In it, researchers found that even those undergraduate students who said they cared most about their online privacy were willing to share very personal information in exchange for a slice of cheese pizza.


A version of this news article first appeared in the Digital Education blog.