Past research shows that large language models are capable of generating text harmful to some groups of people, including those who identify as Black, women, people with disabilities, and Muslims. Since 90 percent of students who attend schools that work with Charter School Growth Fund identify as people of color, Connell says, “having a human in the loop is even more important, because it can pretty quickly generate content that is not OK to put in front of kids.”
April Goble, executive director of charter school group KIPP Chicago, which has many students who are people of color, says understanding the risk tied to integrating AI into schools and classrooms is an important issue for those trying to ensure AI helps rather than harms students. AI has “a history of bias against the communities we serve,” she says.
Last week, the American Federation of Teachers, a labor union for educators, created a committee to develop best practices for teachers using AI, with guidelines due out in December. Its president, Randi Weingarten, says that although educators can learn to harness the strength of AI and teach kids how to benefit too, the technology shouldn’t replace teachers and should be subject to regulation to ensure accuracy, equity, and accessibility. “Generative AI is the ‘next big thing’ in our classrooms, but developers need a set of checks and balances so it doesn’t become our next big problem.”
It’s too early to know much about how teachers’ use of generative text affects students and what they can achieve. Vincent Aleven, co-editor of an AI in education research journal and a professor at Carnegie Mellon University, worries about teachers handing nuanced tasks to language models, such as grading or deciding how to address student behavior problems, where knowledge of a particular student can be important. “Teachers know their students. A language model does not,” he says. He also worries about teachers growing overly reliant on language models and passing information on to students without questioning the output.
Shana White, a former teacher who leads a tech justice and ethics project at the Kapor Center, a nonprofit focused on closing equity gaps in technology, says teachers must learn not to take what AI gives them at face value. During a training session with Oakland Unified School District educators this summer, teachers using ChatGPT to make lesson plans discovered errors in its output, including text unfit for a sixth grade classroom and inaccurate translations of teaching material from English to Spanish or Vietnamese.
Due to a lack of resources and relevant teaching material, some Black and Latino teachers may favor generative AI use in the classroom, says Antavis Spells, a principal in residence at a KIPP Chicago school who started using MagicSchool AI six weeks ago. He isn’t worried about teachers growing overly reliant on language models. He’s happy with how the tool saves him time and lets him feel more present and less preoccupied at his daughter’s sporting events, but also with how he can quickly generate content that gives students a sense of belonging.
In one instance three weeks ago, Spells got a text message from a parent who was making a collage for her son’s birthday and asked him to share a few words. Given a handful of adjectives to describe the student, Spells responded with a custom version of the boy’s favorite song, “Put On,” by Young Jeezy and Kanye West.
“I sent that to the parent and she sent me back crying emojis,” Spells says. “Just to see the joy that it brought to a family … and it probably took me less than 60 seconds to do that.” KIPP Chicago plans to begin getting feedback from parents and rolling out use of MagicSchool to more teachers in October.