The problem is not AI; the problem is Education.
I am not a Luddite. Let’s make that clear. Years ago, as a Digital Learning Coach, I advocated for social media to be used in classrooms as early as kindergarten. On a Twitter how-to website, I wrote:
The ubiquity of social media is a new phenomenon, and there is no actual template for moving forward in this landscape. When constructing other school policies, educators draw on a rich history of nearly a century of child psychology and create policies informed by such theory. With the rapid shift in social media, schools look to various resources to define what is ‘right’ for their community. We suggest creating clear, intelligent, simple, positive, open statements on a school’s philosophy for expected behaviours to light the way forward for educators embracing teaching and learning in the ever-changing potential of the social, mobile web. However, as they move forward in this endeavour, schools no longer need to be the microcosm in which children learn how to navigate civilization, interact with others, build relationships and forge connections. Their learning landscape is now global. They should be able to create their connections safely, effectively, and ethically. It follows that we have the responsibility as educators to create documents, policies, and philosophies that underpin our practice with the desire to leverage technology in the best way for our learners (Killoran 2017).
In 2023, the above statement may still ring true, but I now hold a different view of children on social media platforms. My views changed because technology changed. Social media sites now serve as fronts for personal data mining, and technologies such as deepfakes make it challenging to determine what is real (tobrown05 2020). Kindergarten to Grade 12 educational institutions still need to discuss the impact of these changing technologies: deepfakes, the overt filter bubbles created by advertising algorithms, the aggressive influence of willful disinformation, the potential of AI, and the fallout of a possible Singularity. We have not taken the time to host difficult conversations about the ethical ramifications of our rapid advancement and, more importantly, what it means for our students.
I mean, I still have conversations about the viability of makerspaces in schools.
Let’s admit we are well beyond the glib Facebook status updates of 2005 or the overtly positive community engagement of Twitter in 2006. By interacting globally, we have evolved at a rate that has fundamentally changed us as a species. There have been profound consequences, good and bad: the Arab Spring, #BlackLivesMatter, and #MeToo, but also bullying, racism, harassment, and an age of targeted influence on governments and peoples.
Recently, a new technology has arisen that screams potential just as loudly as fresh-faced Twitter did in 2006. ChatGPT has caused quite a stir in the educational world since news broke that AI has reached a level at which it can surpass some humans at writing and thinking tasks.
ChatGPT launched on November 30th, 2022, but it is only the newest iteration in a decade of ongoing AI research. In 2016, my edtech geek friends and I watched in real time as Microsoft’s bot, Tay, was taught misogynistic and racist rhetoric by Twitter users; within 16 hours, Microsoft shut it down. We laughed around the table, but it was a disquieted, uncomfortable laugh. Tay’s successor, Zo, failed again. A 2018 Quartz article stated, “Zo is politically correct to the worst possible extreme; mention any of her triggers, and she transforms into a judgmental little brat” (Stuart-Ulin). More recently, Meta tried its hand at chatbots when it released BlenderBot 3 in August 2022. However, like Tay, the bot came under fire for spreading racist, anti-social and false information (Mashable 2022).
How do these AI models end up with such lacklustre performance? Do they merely memorize data and recite it back for our amusement? Are they picking up the rules of human grammar and language syntax, or are they building something like an internal world model? Are they merely spitting out what they learn, or are they close to learning the same way we do?
Humans develop a mental model of the world based on what we can perceive with our five senses. The decisions we make and the actions we take flow from this internal model. Jay Wright Forrester, the father of system dynamics, explains:
“The image of the world around us, which we carry in our head, is just a model. Nobody in his head imagines all the world, government or country. He has only selected concepts, and relationships between them, and uses those to represent the real system.”
Our brain processes a vast amount of information each day. To do so, it creates an abstract representation of that information. Simply put, we construct a world model from the inputs we receive.
In 2017, I attended a TED Summit at which neuroscientist and philosopher Sam Harris predicted:
“So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician IJ Good called an “intelligence explosion,” that the process could get away from us” (Harris, 2017).
Harris explained that intelligence is just a matter of information processing in physical systems, and that human cognition is thinking flexibly across multiple domains to create a world model. So, now in 2023, how close are we to AI creating an internal model of its own, one that learns at a rate impossible for a single human brain to fathom? Surely, these questions should be left to the scientists and not to educators. Or should they?
The question in Education is not whether we should use AI such as ChatGPT or block it; of course we will use it. The deeper, more poignant question is: what is learning when machines can learn?
However, that question can be as scary as Tay going full Nazi in less than 16 hours.
When I was writing this article, a Google search for ‘ChatGPT in Education’ produced about 7,820,000 results in 0.27 seconds. We have already been inundated with learning activities. Disappointingly, a similar search for ‘Ethics of ChatGPT in Education’ returned overwhelmingly discussions of cheating rather than of AI’s fundamentally disruptive implications for the very premise of student learning.
We will probably band-aid ChatGPT onto our learning models for the next while and use it for vocabulary boosts, writing prompts, or ‘storytelling magic’. We will use it for gamification, as a teaching assistant, and as a translator. We will worry about how students use it, and as a result, some schools will ban it. But overall, we will integrate it as much as we can.
After all, “Disruption” = budget + systemic willingness to embrace change.
Unfortunately, as educators, we tend not to be able to surf the tech tide. We get swamped in the waves and just try to keep our heads above water. It is not because educators are Luddites or fearful; it is because systemic change is difficult. We are stuck in a system of budgetary limitations, age-based grade levels, marks for assignments, report cards, and university acceptances. Content, assignments, assessments, reflection, rinse and repeat. Most things have stayed the same for over 100 years of Western education. Nevertheless, change is coming, and our hands may be forced.
Until we have deeper, difficult conversations about what learning is in a rapidly changing world, until we have parents that forgo the need for the status quo, until we have schools that embrace the discomfort of systemic change, we will merely play with AI, until it plays with us.
Further reading and citations not in text:
Fast Company: How to trick OpenAI ChatGPT
Unleashing the power of ChatGPT in education