Curriculum Q&A

Artificial Intelligence in the Classroom: Q&A With Michelle Zimmerman

By Benjamin Herold — January 11, 2019

From adaptive software to recommendation engines to voice-activated speakers, artificial intelligence is making its way into K-12 classrooms.

At the same time, schools are under growing pressure to prepare students to be workers in a labor market where AI is likely to play an ever-larger role—and to be citizens in a society where AI reshapes the decisions we face, the meaning we make, and the challenges we must confront.

How can busy educators make sense of this rapidly changing world?

The International Society for Technology in Education hopes to help, in part via a new book funded through a grant from General Motors. In ‘Teaching AI: Exploring New Frontiers for Learning,’ educator Michelle Zimmerman, who has a Ph.D. in learning sciences and human development from the University of Washington, brings a teacher-centric lens to big questions around the various definitions of artificial intelligence, how AI is upending the workforce, and how to teach about—and with—artificial intelligence.

“Today’s students will live and work with AI,” Zimmerman wrote. “To succeed in their future jobs—regardless of the field they choose—they need to learn how to both maximize AI’s capabilities and transcend its limits.”

Here’s a transcript of Education Week’s conversation with Zimmerman, edited for length and clarity.


You write that artificial intelligence is relevant to everyone in education, not just computer science teachers or students interested in technology. Why?

There are so many applications all around us, from home heating systems to language translation. If we’re not aware that these things are run by AI, we’re not maximizing their capability or looking at their potential downsides in terms of things like privacy. Educators need to be aware themselves so they can help students be inspired for future careers.

For schools, what do you see as the appropriate balance between preparing kids to develop AI and helping them to understand what artificial intelligence is and what it means to live alongside it?

I don’t know that there’s one clear, right answer.

There’s a big push to teach kids to code. But at a certain point, machines will be able to create basic code more efficiently than kids. So if coding becomes something students do step-by-step by memorizing a process, without practicing the mindset that allows them to do computational thinking, there’s a chance machines will surpass them.

At a basic level, we need to teach kids how to get around problems, not just a step-by-step process for solving them.

What does that mean for how schools think about what it means for students to be prepared when they leave K-12?

I heard varied perspectives across different industries. But one thing I kept hearing was that employers want students to know how to collaborate with other people. AI requires highly collaborative teams. Take the medical field. For any kind of device that uses AI, companies want to know how it works for caregivers, patients, the people designing the interface, medical professionals, researchers. You have to have the ability to find commonality between those people. If someone comes in who knows how to code but not how to apply that skill to useful situations, it loses effectiveness.

To what extent are AI tools and technologies already being used in schools?

There’s a range. There are adaptive technologies in mathematics, tools like ALEKS. There are other tools where AI runs in the background. A Microsoft tool called Sway allows kids to focus on making connections across domains and disciplines while machine learning runs the design behind it. So far, we see tools like chatbots more in daily life than in education. Groups like Sesame Workshop are in the early stages of using AI to help with vocabulary development.

How do you see schools approaching use of these tools?

It can go one of two ways. There can be a mindset of “OK, here’s an adaptive program for school, we don’t need a teacher.” Or there can be another perspective, where schools want to use the technology as a supplement, with educators there to motivate students and help them apply it.

How do teachers approach teaching about, and with, AI?

I really heard a dichotomy in what teachers wanted. Some say, “Teach me how to teach AI in the classroom, how to teach students to program a chatbot, so I can feel like I’ve prepared them.”

When you say it will take so much more than that for students to be prepared for AI, it can be unsettling. There can be a tendency to feel safer if there’s a step-by-step process outlined for you.

Being given freedom can feel scary, because it adds so much more responsibility to the educator. That can be a messy process. Being able to get together a team of teachers that interacts and draws ideas across domains can be a solid foundation. But that takes planning time and changes in the structure of the school day. It’s a bigger system challenge than just dropping in a piece of technology.

What are some resources and strategies teachers can use to bring AI into the classroom?

In an ideal world, I would love to see more tools that allow kids to take the same AI functionalities being used in industry and learn and practice those concepts at a basic level. The best example of this is Pixar in a Box, which takes the exact tools and concepts that Pixar animators use daily, and says to kids, “Here, try this.”

There are also some really interesting resources to help students use tools like TensorFlow, if they want to jump right into trying AI. Hacking STEM sits at the intersection of robotics and data analysis, so there’s a chance to see the interplay between sensors, data, and math, and bring it into direct application in fields like marine biology.

And without fail, everyone in the tech world recommends teaching statistics. If you can get kids a solid understanding of stats, it will give them an edge.

Many prominent voices have warned about the possible dangers of AI, including super-intelligence, autonomous weapons, manipulated videos, and mass surveillance. Should teachers and schools be broaching these concerns with their students?

Absolutely.

How?

Start getting students thinking about ethical dilemmas and considering the complexities of real-life situations. If we’re creating tools that have to make life-or-death decisions, like autonomous cars, how do we keep society safe? What’s the right balance between freedom and control? These have to be discussions; there’s not one simple answer.

Other thoughts for teachers?

As educators considering new technologies, we really have to ask, ‘What is it doing, and how does it change my role?’ For educational programs using AI, I’d like to hear more of what we’re starting to hear around autonomous vehicles: “Yes, the technology can help you. But we need you alert as a teacher at all times.”

Photo of Michelle Zimmerman courtesy of the International Society for Technology in Education


A version of this news article first appeared in the Digital Education blog.