How Taylor Swift’s Experience With ‘Deepfakes’ Can Help Students Examine AI Ethics
By Alyson Klein & Lauraine Langreo
Tuesday, February 6, 2024, 07:48 AM
High school senior Alex Bordeaux was scrolling through social media to check out Taylor Swift’s latest outfits when she stumbled on disturbing pornographic images of the pop music icon.
Bordeaux, 17, has studied artificial intelligence in her journalism and yearbook class at Dawson County High School in Dawsonville, Ga., so she knew immediately that she was looking at “deepfakes”—AI-manipulated video, audio, or photos created using someone’s voice or likeness without their permission.
But she was furious—and baffled—that so many people on social media treated the images as the real deal, or at least pretended to. “It’s just so crazy how easily people can see something on the internet and immediately believe it,” she said. “A lot of kids my own age do not double check anything.”
At the same time, it hurt to see someone she’s admired so deeply, for so long, digitally degraded, added Bordeaux, who credits Swift’s 2014 hit “Welcome to New York” with inspiring her dream of living in the Big Apple.
“Taylor Swift seems so untouchable because she’s so rich. She’s so famous. And she’s so sweet. And she’s basically been used” by online trolls, Bordeaux said.
Not all students have the background that Bordeaux does to understand the role AI—a relatively nascent technology—plays in creating deepfakes like the ones targeting Swift, as well as other fake images and video designed to spread misinformation, influence public opinion, or con people out of money, experts say.
Schools need to make teaching about this type of technology a priority.
Deepfakes are “a big concern” because they “pollute our information environment to a pretty astonishing degree,” said Kate Ruane, the director of the Free Expression Project at the Center for Democracy and Technology, a nonprofit group that promotes digital rights.
If educators aren’t already thinking about teaching students about deepfakes, “they really should be … because this is the water that their students are swimming in every day,” she added.
‘It’s gonna continue to happen because AI is growing so massively’
Many of the deepfake images of Swift were taken down, but not before they’d attracted plenty of eyeballs. One posted on X (formerly Twitter) racked up more than 45 million views, 24,000 reposts, and hundreds of thousands of likes before the user who shared the images had their account suspended for violating X’s policy, reported The Verge, a technology news website.
By that point, the post had been accessible on the platform for about 17 hours, according to The Verge. The content became so problematic that X temporarily blocked searches for Swift’s name, The Wall Street Journal reported.
Bordeaux knows Swift isn’t the first celebrity to be the victim of a viral deepfake. Former President Barack Obama, Tesla CEO Elon Musk, and Pope Francis have all been recent targets. Male students at a high school in New Jersey made and shared deepfake pornographic images of their female classmates. And a deepfake robocall audio recording of President Biden was circulated during the New Hampshire presidential primary.
“It’s gonna continue to happen because AI is growing so massively,” Bordeaux said. “More people will learn how to use it. And the more people use it, the more people will abuse it. That’s just the way it works. … I think the ethical implications of AI are so important.”
While the negatives of deepfakes may seem obvious, teachers need to steer their students toward critical questions about the technology, discussing how policymakers and developers can work to mitigate the downsides, said Leigh Ann DeLyser, the executive director and co-founder of CSforALL, a nonprofit organization that seeks to help expand computer science education.
Teachers could ask students: “What are the benefits of deepfakes? What are the challenges of deepfakes? And if there are challenges, how can or how should we, as a society, create rules around them, like labeling a deepfake” or getting permission before using someone’s image? she said.
Bordeaux’s journalism teacher, Pam Amendola, who received training on how to teach AI from the International Society for Technology in Education, said many of her students, especially those who consider themselves “Swifties”—a nickname for Swift’s fans—were incensed on the pop star’s behalf.
But they also considered what might have happened if the subject of the images wasn’t “somebody who had her [fame]. How would they ever be able to combat it?” Amendola said.
That question can provide an opening for teachers to remind students of their digital footprint, explaining how information they’ve already put online can be twisted and used for nefarious purposes, Amendola added.
That lesson hit home with Bordeaux, who understands that AI is getting more sophisticated all the time.
“I’m definitely worried that my face is out there,” she said. “There’s so many ways you can manipulate with deepfakes to make someone look horrible.”
‘Photoshop on steroids’
There’s an opening to discuss the technical aspects of deepfakes, too, in the context of media literacy.
Many people were able to figure out right away that the images of Swift online weren’t authentic, said Ruane from the Center for Democracy and Technology.
Teachers could ask: “How did they do that? What are the things about the image, the person in the image, that led you to know those things? What are the instincts that you felt within yourself that led you to that conclusion?” Ruane suggested.
That’s something that students have discussed in Elizabeth Thomas-Capello’s computer science class at Newburgh Free Academy, a public school in Newburgh, N.Y.
Taylor Swift came up when the class was “talking about how there are things AI can’t do very well yet,” when it comes to creating images of people, Thomas-Capello said. “It can’t really form ear lobes, or the face is a little too perfect or the backgrounds are a little bit muddled. And [it] can’t quite get teeth yet.”
Keeping those flaws in mind, the class tried to identify which among a series of faces were AI-generated and which were real, Thomas-Capello said. “And we all still failed.”
Safinah Arshad Ali, a research assistant at the Massachusetts Institute of Technology who works on teaching AI to middle school students, describes deepfakes as “Photoshop on steroids” and is quick to point out the technical weaknesses in the images the technology creates.
But she also asks students to “think critically about the source, think about who’s posting it, why would they be posting it?”
Amendola, too, reminds her students that technology is getting to a place where it is difficult to believe what you see with your own eyes. That means they must consider the context behind everything they see online—whether it’s pictures of a pop star or a message from a presidential candidate.
“I tell them, ‘question everything because we’re at a point in history where you need to be a bit of a skeptic because you’re going to be taken advantage of otherwise,’” Amendola said.
The Taylor Swift deepfakes—and the many similar incidents that are sure to follow them—provide an opportunity for would-be computer scientists to delve into the ethics behind the technologies they are learning to create, Thomas-Capello said.
“I really try to emphasize that this is what is occurring in our society. These are the implications for our society,” Thomas-Capello said. “You as students are the ones who are going to write this. You are the ones who can create technology for good [and] make sure that these types of things are harder and harder to [produce].”
She added, “We don’t really know where artificial intelligence is going. We’re just at the very, very beginning. But if we can train our students to use technology for good, then I think [we’ll get to a] really good place.”