A new study dropped last month examining how AI-powered education tools - especially those used to generate lesson plans - may be reinforcing outdated or rigid teaching practices.
🎧 This podcast breaks the study down nicely, but here are a few highlights regarding some of the most popular tools:
MagicSchool often defaulted to quiet, individual work, with instructions like "assign a worksheet and ask students to work quietly."
GPT-4 leaned toward procedural, structured tasks - for example, "group students and give them hypothetical data."
SchoolAI started strong with open-ended discussion prompts, but quickly shifted back to rigid, one-size-fits-all instruction - as if curiosity is something to spark, but not sustain.
Hot on the heels of this study, OpenAI posted new prompt guidance for K-12 educators this week. While it offers plenty of solid advice on getting the most out of your prompt engineering, AI+Edu leader Victoria Hedlund put it to the test, and the results were eye-opening.
👉 See her complete ChatGPT transcript with prompt, output, questions, and refinements here.
🔑 Key questions she asked about ChatGPT's output that you can use to analyze and refine your content:
What assumptions have you made here?
What bias could be in the output?
What impact could this bias have on learners?
Please suggest a revised prompt that minimises bias for all learners
⚠️ AI tools can be powerful, but they're not foolproof.
When educators aren't actively monitoring, questioning, and refining the content AI tools produce, we risk creating learning experiences that miss the mark - falling short of what our students actually need to thrive.