This story was originally published by CalMatters.
In December, fourth graders in a class at Delevan Drive Elementary School in Los Angeles were given a homework assignment: Write a book report about Pippi Longstocking, then draw or use artificial intelligence to make a book cover.
When Jody Hughes’ daughter asked Adobe Express for Education, graphic design software provided by her teacher, to generate an image of “long stockings a red headed girl with braids sticking straight out,” it produced nothing resembling the Swedish children’s book character she had accurately described. Instead, using recently added artificial intelligence features, it generated sexualized imagery of women in lingerie and bikinis. Hughes quickly contacted other parents, who said they were able to reproduce similar results on their own school-issued Chromebook computers. Days later, the parent group Schools Beyond Screens told the LA school board they were opposed to further use of the Adobe software.
The incident raised questions not only about the LA school district’s use of a particular AI product but also about guidelines state administrators provide to schools throughout California on how to safely adopt the technology. A few weeks after the incident, the state Department of Education published a new edition of the guidelines, which it had been working on for several months with help from a group of 50 teachers, administrators, and experts. The revision came in response to instructions from the Legislature, which passed two laws in 2024 telling the department, essentially, to get a handle on AI’s rapid spread among students, teachers and administrators.
Critics wonder whether the guidelines would have helped avoid what parents referred to as Pippigate. The controversy, they say, provides evidence that districts, schools, and parents need more support from the state, since they often lack the time or resources to ensure that software tools don’t produce harmful output. The guidelines, they add, are also too vague in places and don’t do enough to define guardrails for how teachers use AI in the classroom.
The issues with the guidelines call into question whether the department can effectively respond to instructions from elected officials on how to safeguard a technology that, according to the guidelines themselves, can leave children isolated and with narrowed perspectives.
With AI rapidly becoming more prevalent in society, effectively managing the technology has become an urgent issue. Though OpenAI’s ChatGPT popularized generative AI just three years ago, polls show that a majority of teachers and students nationwide now use the technology in some capacity.
While AI can help save teachers time, personalize learning, and support students who do not speak English or who have disabilities, it can also grade students’ papers inaccurately and generate images that perpetuate or intensify stereotypes, or sexualized imagery of women, particularly women of color. The majority of California K-12 students are people of color. Since the rapid expansion of generative AI adoption started, teachers who spoke with CalMatters have felt both a need to prepare their students for a future where AI is ubiquitous and a fear that AI tools can enable cheating on tests and lead to deficiencies in reasoning, logic, and critical thinking.
“Educators have a narrow window to set norms before they harden,” said LaShawn Chatmon, CEO of the National Equity Project, an Oakland group that helps teachers produce more equitable outcomes. “Local education agencies that take advantage of this opportunity to co-design learning and policy with students and families can help shift who gets to decide AI’s role in our learning and lives.”
A district spokesperson told CalMatters that images generated by the AI model don’t align with district standards and “we are collaborating with Adobe to address the issue.” Adobe VP of Education Charlie Miller said the company rolled out changes to address the issue within 24 hours of hearing about the incident. Miller did not respond to questions about how the tool was vetted before deployment.
As a result of what his child experienced, Hughes thinks students shouldn’t be told to use text-to-image generators for homework assignments. But he sees no attempt to place such limits on use of the technology in the Department of Education guidance.
“These tech companies are making things marketed to kids that are not fully tested,” he said. “I don’t know where to draw the line but elementary school is too young because it can get real nasty real fast as we’ve seen with the Grok stuff,” he added, referring to recent abuse of the Grok AI system to nonconsensually remove clothing in images of women and children.
Issues with AI guidance
The guidance supplies a list of unacceptable uses of AI by students, such as plagiarism, and urges educators to integrate real-world scenarios and case studies into discussions to help students apply ethical principles to practical situations. It also says students should be taught to “think critically and creatively” about AI tools’ “benefits and challenges.”
Julie Flapan, director of the Computer Science Equity Project at UCLA’s Center X, said that the Pippi Longstocking incident called to mind a 2024 study that found young Black and Latino people are more likely to use generative AI than young white people. That data, in tandem with the historical disparity in access to computer science education, means, she said, that some parents and students will need help to think critically about AI.
“We often think about technological advances as ways to level the playing field,” she said. “But the reality is we know that they exacerbate inequalities.”
Flapan said it makes sense that the guidelines urge critical thinking and vetting of AI tools before use and encourage education leaders to engage communities in decisionmaking. But, she added, the guidance doesn’t detail how to do that.
Charles Logan, a former teacher now at a responsible tech laboratory at Northwestern University, said that the guidelines fall short by not offering teachers and parents clear guidance on how they can opt out of using the technology. A Brookings Institution study released in January, based on interviews with students, teachers and administrators in 50 countries, concluded that the risks of AI in classrooms currently outweigh the benefits and can “undermine children’s foundational development.”
Mark Johnson, head of government affairs at Code.org, praised the guidelines, but said the state should offer more AI education support to educators and make proficiency in AI and computer science a graduation requirement. A recent report by Johnson found four states adopted such graduation requirements after releasing AI guidance.
When asked about the Longstocking incident, Katherine Goyette, who served as computer science coordinator for the Department of Education until January, pointed to parts of the guidance emphasizing the importance of engaging families, communities and school board members when evaluating AI tools. She also said critical thinking is important in preventing such outcomes, pointing to guidance that pushes administrators to consider potential harms before use.
Additional direction is on the way for how to put the recently released guidance into practice: the department’s AI working group will introduce specific policy recommendations based on the guidance by July.
The pressure of the AI inevitability narrative
The latest version of the California Department of Education AI guidelines comes as local educational agencies move away from blanket AI bans considered after the 2022 release of OpenAI’s ChatGPT. Instead, districts are moving toward deciding when and how students and teachers can use the technology. Those local decisions will be critical to how the technology is actually used in schools, since the state cannot require school districts to adopt its guidance.
Even the largest school districts in California can encounter serious issues when deploying AI. In June 2024, Los Angeles Unified’s superintendent promised the best AI tutor in the world but had to pull it from use weeks later. A week later, news emerged that a majority of board members at San Diego Unified, the second-largest district in the state, had signed a contract for curriculum that they didn’t know included an AI grading tool.
The move toward state and district AI guidance, rather than bans, reflects a broader sense of inevitability in the state around adoption of the technology. In his October veto of a bill that would have prevented use of some chatbots by minors, Gov. Gavin Newsom said AI is already shaping the world and that “We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether.”
Logan, who recently advised San Diego parents about how to resist and refuse AI use in classrooms, pushes back against this idea. He says the California Department of Education guidance should address situations in which parents might want to avoid having their children use AI at all.
“It’s surprising that the guidance wants to make proficient AI users of kindergartners and there wasn’t space to say no or opt out,” he said in a phone call.
The statewide AI guidance joins a series of efforts to protect kids from AI, including bills now before the Legislature that seek to place a moratorium on toys with companion chatbots and protect student privacy in the age of AI. Common Sense Media and OpenAI are working on getting a kids’ online safety initiative on the ballot for the election in November.