Kate Crawford, a visiting professor at MIT's Center for Civic Media, gave a recent talk describing a scenario where using Big Data could magnify certain kinds of inequality:
Crawford used a recent project conducted by the city of Boston as an example. The initiative, dubbed "Street Bump," is meant to leverage data from drivers' smartphones to help detect where potholes are on the city's roads. Drivers place their smartphones somewhere in their car's interior -- on the dashboard, a seat, or in a cup holder -- and when a pothole-induced "bump" is detected, that data is sent to the city, so the pothole can be identified and fixed.
While great in theory, the project has one major flaw, Crawford said: it only captures data from parts of the city with a high concentration of smartphone owners. And because smartphone ownership is markedly higher in the wealthier parts of the city, Boston's lower-income areas are somewhat left in the dust. What's more, areas of Boston with large elderly populations -- whose residents are also less likely to own smartphones -- are left to fend off those pesky potholes on their own.
"So if you think about how this might be used to fix roads, we might see a future where the wealthy areas with young people get more attention and resources, unlike the areas with older citizens, who might get fewer resources," Crawford said. "So if you're off the map, this could have some really material consequences for social inequity."
(This image from the Street Bump site will, for people who know about residential patterns in Boston, support Crawford's concerns.)
Replace "Street Bump" with an online educational resource or service, and you can imagine very similar scenarios.
In one scenario, developers, racing to produce the coolest tools for learning, build tools that can only be used by those with full access to broadband-connected, full-screen devices.
A more nuanced scenario emerges when you don't even know about the inequalities in your user base. Imagine you run a program that distributes educational resources and generates random practice problems, and, for privacy reasons, you know nothing about your users. Suppose it turns out that those users are disproportionately affluent. As you gather data about how your platform is used, you change your features and platform to support the needs of the majority of your users, through strategies like A/B testing. One could imagine that platforms with a disproportionately affluent user base (which, for privacy reasons, don't actually know this) could end up producing an educational web that uses iterative, big-data-driven improvements to incrementally tilt the playing field in favor of the advantaged (longer version of this argument here).
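To make the mechanism concrete, here is a toy model of that scenario. Every number and name in it is an illustrative assumption, not data from any real platform: 80% of learners are imagined to have fast, full-screen devices, and a hypothetical "variant B" redesign helps them while hurting low-bandwidth users. The platform can only see the aggregate metric, so the A/B test quietly favors the majority.

```python
# Hidden composition of the user base -- invisible to the platform,
# which (for privacy reasons) records no group membership.
mix = {"broadband": 0.8, "low_bandwidth": 0.2}

# Hypothetical lesson-completion rates for each (group, variant) pair.
# Variant B is a media-rich redesign: better for broadband users,
# much worse for low-bandwidth users.
completion_rate = {
    ("broadband", "A"): 0.60, ("broadband", "B"): 0.70,
    ("low_bandwidth", "A"): 0.55, ("low_bandwidth", "B"): 0.30,
}

# The aggregate expected completion rate per variant is the only
# metric the platform can actually observe.
overall = {
    v: sum(mix[g] * completion_rate[(g, v)] for g in mix)
    for v in ("A", "B")
}

winner = max(overall, key=overall.get)
print(overall)             # roughly {'A': 0.59, 'B': 0.62}
print("shipped:", winner)  # B wins on the aggregate metric...

# ...even though B cuts low-bandwidth completion from 0.55 to 0.30.
# Each iteration of this loop tilts the platform a little further
# toward the majority, and no one ever sees the harm.
```

The point of the sketch is not the specific numbers but the structure: as long as group membership is invisible, "improve the metric for everyone" degrades into "improve the metric for the majority."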
To serve its citizens, Boston needs to use the data from "Street Bump" in ways that acknowledge the risk of disproportionately serving the most advantaged.
To serve our learners, education technologists need to do the same.