In the swirling discourse around Generative AI, many continue to sound the alarm to abandon this technology, both because of the very real dangers it poses right now and because of the unshaped, uncertain benefits it may bring in some as-yet-unseen future. These criticisms are not wrong. Like any system built by human minds, bodies, spirits and hearts, these generative AI platforms will inherently do some kind of damage to other living creatures, the environment and ourselves. We saw this play enacted in living color with the rise of the internet and, from it, the proliferation of social media: a system designed with good intentions that caused great harm and still does. “AI Needs You” by Verity Harding is a wonderful historical accounting of several key moments in humankind’s epochal growth, one that points to how we might learn from the crucial stepping stones on our timeline of technological and scientific innovation to, perhaps, circumvent some of the perilous mistakes of the past and make better choices as we grow into this new technology of Generative AI.
However, in so many of these discussions about what AI is and why it is so harmful, many seem to conflate an idea that I think needs un-conflating. My dear friend and colleague Damien P. Williams’ work on Generative AI is essential reading and the first step in understanding how to think about these tools, especially in educational settings, which is where I would like to focus my thoughts. I highly suggest you read, watch and listen to all of his material before continuing on here. Perhaps start with this piece: Disabling AI: Biases and Values Embedded in Artificial Intelligence.
Generative AI systems are not search engines; they do not deal in fact. They are not product makers; they are simulations of production. When you ask ChatGPT or Gemini to ‘write you an essay,’ they are simulating the writing of that essay. They are guessing; they are making things up, just as a human brain would make it up. Human brains do this all the time. Some do it better than others because they have trained longer at how to do it, or have better ideas about which words come next because they have read more books, made more synaptic connections here or there, or watched more movies. True experts have spent years training their brains in colleges and universities, so their brains are really, really well equipped to spit out that paper and make it sound really, really good. This is how AI works too; it is a neural network, not a search engine. It works just like that university-trained brain, with all the same bias, fallibility and propensity to just plain make things up. The issue is that we have, for some reason, learned to think about AI, GPTs and platforms like ChatGPT as product tools, ‘things that make products,’ instead of what they really are: tools that help us with externalized metacognition.
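For readers who want the mechanics made concrete: the “guessing” described above can be sketched in a few lines of code. This is a toy illustration, not how ChatGPT or Gemini are actually built; the word list and probabilities here are invented for demonstration. Real models learn distributions over tens of thousands of tokens with a neural network, but the core move is the same: sample a plausible next word, rather than look up a fact.

```python
import random

# Invented, toy next-word probabilities -- NOT a real language model.
# The point: generation is weighted guessing, not retrieval of truth.
next_word_probs = {
    "the": {"essay": 0.5, "argument": 0.3, "reader": 0.2},
    "essay": {"argues": 0.6, "explores": 0.4},
}

def guess_next(word):
    """Sample a likely next word -- a guess, never a lookup of fact."""
    options = next_word_probs.get(word, {"...": 1.0})
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

print(guess_next("the"))  # a plausible continuation, chosen by chance
```

Run it twice and you may get different continuations, which is exactly the point: the output is a simulation of writing, shaped by whatever the system was trained on.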
John Borkowski tapped into this notion over thirty years ago, providing us with foundational frameworks around metacognition and offering ideas for using guided-discovery and scaffolding coupled with unscripted teaching. Borkowski writes,
“Strategy instruction, including the kind of scaffolding provided to particular students, is unique because the components of teacher-student interactions are not scripted but, rather, develop as instruction unfolds. The nature and content of each interaction is determined by the teacher's perception of a student's progress in acquiring an individual strategy. Thus, the ultimate goal of strategy-oriented scaffolding is to develop student independence through the gradual internalization of the processes that are encouraged during instruction.”
Borkowski, J. G. (1992). Metacognitive Theory: A Framework for Teaching Literacy, Writing, and Math Skills. Journal of Learning Disabilities, 25(4), 253–257. https://doi.org/10.1177/002221949202500406
Imagine if Borkowski and other proponents of this type of instructional methodology could have envisioned a technology capable of helping guide students, in real time, through their own thinking: a sort of ‘second mind,’ external to their own, visualized on a screen, that they could naturally speak and write to; that could provide endless metacognitive feedback on their learning to support, guide and model the work they were doing with their instructional experts; developing new interactions as instruction unfolds, in concert with their own minds and with their teachers, in a symbiotic loop of metacognitive process?
Metacognition is an awareness of one’s own thoughts and the patterns that drive them: a sense of how and why you are thinking, and a consciousness of what you are doing and why. In my field, writing studies, this concept translates to critical reflection, process and rhetorical awareness. We teach students to think deeply about the choices they make as writers; to reflect on the exigence of any given situation they are writing in and for; to consider the audience they are trying to reach and why; and to attend to the methods, strategies, genres and communities their writing is working in and around. So much of this work is metacognitive. It works in concert with other, more constructivist ideas, like forms of texts, but there is so much metacognitive scaffolding to be done here.
Generative AI is not a tool for ‘write me an essay.’ Far too many have conflated two ideas about why these tools are bad: 1) they are filled with bias because they have been created by and for big companies, using information that does not accurately represent the communities the data was taken from (this is true, and extensive research exists on it which you should read), and 2) when used to create things like essays, writing or speeches, the output of a system like this should not be taken as fact (also true; it should not). When these two ideas get conflated, of course Generative AI will let you down. It should. Because that isn’t what it’s for. It should not be for making products, because it doesn’t make anything. It simulates making things. So what should it be used for?
External Metacognition.
Never before have we been able to get outside our own heads like this. Sure, we could take up pen and paper, write a journal entry and reflect on something. We do this all the time in writing studies. But with Generative AI, we can literally have a conversation with our own writing, interact with our thoughts, process our thinking and see it enlivened on a screen, in real time, unscripted. Who knows where that thinking might go? This is the power of generative AI: to think outside the brain. Not to produce some simulation of an essay. That essay you got from ChatGPT is just a thought made form, a simulation of something you were thinking about, a metacognitive thought process that you can now use to keep thinking.
So keep thinking. Maybe shift how you are thinking about AI a little: shift that frame slightly and think about AI not as your pocket essay writer, your go-to email maker or the thing you use when you need to figure out how something works, but as you: your own ‘second mind,’ your own personal metacognitive thinking partner.