Susan Gentz, COO, K20Connect
We are all trying to harness the good of AI to spur learning and engage students in ways we have never been able to before. But AI can also be used to create audio and visual deceptions. The dictionary defines a deepfake as “a video of a person in which their face or body has been digitally altered so that they appear to be someone else, typically used maliciously or to spread false information.” As with every good invention or innovation, there can be serious risks, and we’re already seeing examples.
In April 2024, a Maryland high school teacher was arrested for allegedly using artificial intelligence to plant racist and antisemitic words in the voice of his boss, Principal Eric Eiswert. Thankfully, the technology is still new enough that authorities were able to determine the recording was generated with AI, but as AI gets smarter, will that always be possible?
In this instance, the suspected motive was retaliation. Students will soon (if they haven’t already) figure out how to do this to their teachers at the drop of a hat. If they get a grade they don’t like, if they don’t like that their teacher called them out in class, if their teacher makes a call to their parents, the reasons are endless. All educators need to be prepared for this.
How Do You Identify a Deepfake?
According to the MIT Media Lab, there are several things to be on the lookout for:
“Pay attention to the face. High-end Deepfake manipulations are almost always facial transformations.
Pay attention to the cheeks and forehead. Does the skin appear too smooth or too wrinkly? Is the agedness of the skin similar to the agedness of the hair and eyes? Deepfakes may be incongruent on some dimensions.
Pay attention to the eyes and eyebrows. Do shadows appear in places that you would expect? Deepfakes may fail to fully represent the natural physics of a scene.
Pay attention to the glasses. Is there any glare? Is there too much glare? Does the angle of the glare change when the person moves? Once again, Deepfakes may fail to fully represent the natural physics of lighting.
Pay attention to the facial hair or lack thereof. Does this facial hair look real? Deepfakes might add or remove a mustache, sideburns, or beard. But Deepfakes may fail to make facial hair transformations fully natural.
Pay attention to facial moles. Does the mole look real?
Pay attention to blinking. Does the person blink enough or too much?
Pay attention to the lip movements. Some deepfakes are based on lip-syncing. Do the lip movements look natural?”
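One of these cues, blinking, can even be screened for programmatically. The sketch below is a hypothetical illustration (not part of the MIT guidance) of the widely used eye-aspect-ratio (EAR) heuristic: given six landmark points around an eye, the ratio drops sharply when the eye closes, so counting dips below a threshold gives a rough blink count. The landmark coordinates and the 0.2 threshold here are illustrative assumptions, not calibrated values.

```python
from math import dist

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)
    # p1..p6 are (x, y) landmarks around one eye: the corners (p1, p4),
    # upper lid (p2, p3), and lower lid (p6, p5). A closed eye pushes the
    # vertical distances toward zero, so the ratio drops sharply.
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_series, threshold=0.2):
    # Count blinks as runs of frames where the EAR falls below the
    # threshold; each continuous dip counts as one blink.
    blinks, below = 0, False
    for ear in ear_series:
        if ear < threshold and not below:
            blinks += 1
            below = True
        elif ear >= threshold:
            below = False
    return blinks
```

In a real pipeline the landmark points would come from a face-tracking library applied to each video frame; a clip whose blink count is far below (or above) a normal human rate of roughly 15–20 blinks per minute would warrant closer inspection.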
States Are Working on This, but Districts Need to Discuss and Implement Policy Now
The National Conference of State Legislatures (NCSL) just released a roundup of state policies working to address “deceptive audio or visual media” in 2024. According to NCSL, “lawmakers in at least 17 states enacted laws that specifically refer to online impersonation done with an intent to intimidate, bully, threaten or harass a person through social media sites, email or other electronic or online communications. These states are California, Connecticut, Florida, Hawaii, Illinois, Louisiana, Massachusetts, Mississippi, New Jersey, New York, North Carolina, Oklahoma, Rhode Island, Texas, Utah, Washington and Wyoming.” Additionally, they report that 40 states have pending legislation.
A few examples of what states are doing include requiring disclaimers that media has been AI-generated, specifying that a person is guilty of possessing child pornography if the person knowingly possessed any computer-generated child pornography, and updating personal rights laws to provide that every individual has a property right in the use of their name, photograph, voice, or likeness in any medium. Legislators are generally in the definition phase right now; very few have made it to consequences.
Additionally, legislators are not as focused on what this means in schools, or on the minors who participate. Districts must have a strong policy on how to address these circumstances. Does audio or visual evidence against an educator automatically mean administrative leave? Who oversees identifying a deepfake? What is the punishment? How do you restore the reputation of the victim? All these questions must be addressed, ideally before the scenario happens in the school.
K20Connect would love to help you navigate these challenges. Reach out to Susan@k20connect.net to talk through some ways to stay ahead of this increasingly prolific challenge.