Wednesday, December 21, 2022

How AI Will Save Education

A decade ago, I was a high school English teacher blissfully naive about the radical changes edtech would bring.  How innocent the days of 2012 seem now, when a teacher only had to worry about the ways Google Search enabled cheating on multiple choice assessments (which began my questioning of the value of assessing regurgitated facts in the first place), or when I had to participate in the ever-escalating "arms race" against students stealing online essays and quoting other people's words without citation by deploying the latest anti-plagiarism programs. 

Recently I wrote "What If Your Co-Teacher is a Computer?"  While I mainly addressed taking classic co-teaching strategies and applying them to scenarios that integrated in-person teaching with an online course or adaptive learning platform, I intentionally titled the blog entry with my tongue firmly in cheek, implying a day when a human would seriously have to wrangle with how best to work with an artificial intelligence (AI) educational partner.   I imagined such a day was far, far into the future.  

That was back in October.  But like a Christmas miracle a few months later, ChatGPT exploded into our zeitgeist seemingly out of nowhere, and it feels like something foundational has rapidly and irreversibly shifted in education...indeed, the world.

It should be noted, however, that ChatGPT did not arrive on the scene in one fell swoop; it is the product of the steady evolution of AI over the last several years.  Consider the following developments in artificial intelligence:

  • In 2016, the Associated Press began outsourcing its minor-league baseball dispatches to a company called Automated Insights.  Using nothing more than box score data, the company's AI can write simple stories about the key facts of a game, which are then published in newspapers and on websites nationally.  AP has since expanded its use of Automated Insights into other sports dispatches, quarterly business financial reports, and more.*
  • In February 2019, an IBM AI supercomputer named "Project Debater" was pitted in a competition against human national debate champion Harish Natarajan.  A journalist was impressed not just at Debater's ability to speak, but to listen: "It sort of took [the audience's] breath away to see that the computer, in its rebuttal, had actually listened to some of what Hari had said, digested it, and was formatting a response in real time to it...I think there was a sense from the audience that there was almost really a personality there."  While officials ultimately judged Harish the winner, the audience voted that Debater was more informative.**
  • While rough "deepfake" videos have been around for several years, the proliferation of manipulated videos that appear real enough to fool viewers shifted into high gear in 2019 with the launch of multiple open-source AI software packages and mobile apps.  (On a side note: I highly recommend sharing with students The Washington Post's website "The Fact-Checker's Guide to Manipulated Video"; this may require a free account to access.)
  • In 2021, as part of a project called "The Lost Tapes of the 27 Club," AI was used to create new music and lyrics in the stylings of several artists who died tragically young, such as Kurt Cobain, Jimi Hendrix, and Amy Winehouse.  Human soundalike vocalists then recorded the compositions.  The organization behind these "new" songs, Over the Bridge, created the project to raise awareness of mental illness among musicians. 
  • In the last year, AI programs such as Midjourney and DALL-E, which create art based on user prompts, have soared in popularity while also raising ethical questions; for example, it has been discovered that some AI will find art already on the Internet and potentially blend it into a new piece without proper attribution or compensation for the artists.  There is also the more basic fear that artificial intelligence apps will do for free what artists were once paid and commissioned to create.  

In this context, ChatGPT -- a free online program that, in essence, takes your prompts and questions and produces original writing -- can be seen as the natural next step for AI.  In some ways it's like a sophisticated search engine that gives you a highly specialized, curated text (without, it should be noted, any citation of the primary sources or facts that led to such an answer).  It relies on the huge database of the Internet and, like much of AI, is learning how to be more effective every time we use it.  What fascinates and disturbs many people is not only the human-like style of ChatGPT's writing, but its ability to generate original and even artistic pieces. 

I thought I could easily stump ChatGPT.  But I was wrong.

My wife wrote an essay back in high school in which she compared Bob Dylan's "All Along the Watchtower" to Shakespeare's Hamlet.  A quick Google search turned up no immediate hits of anyone else having written such an essay.  Yet when I gave the prompt to ChatGPT, I got the following within a few seconds:

An excerpt (about two-thirds) of ChatGPT's original essay.

Next, I asked it to write a 12-line poem about dachshunds and Jedi Knights.  While it ignored my directions and wrote four extra lines, I was shocked to get a poem that dutifully blended both subjects in non-rhyming, narratively consistent verse:


Not Shakespearean, and certainly missing some compelling imagery, but I would have considered it okay if one of my freshmen had submitted it.  Still, I'm nitpicking about something game-changing here: ChatGPT wrote an original poem in seconds.  Even more amazing, ChatGPT made some independent artistic choices.  I didn't tell it to make the doxie a Jedi Knight, but by deciding to do so, it created a sense of whimsy.  Can a robot be whimsical?

The last example I will share goes to the heart of instruction.   I had heard of teachers using ChatGPT to write a lesson plan.  This will be kludgy and awful, I thought.  Especially if I load up my prompt with detailed specifics.  The generation of the text took longer than the other two examples -- maybe an entire ten seconds -- but once again, I could not stump the bot.


Again, far from perfect, and ignoring some of what I asked for in my prompt, but I was pretty much astounded that AI could earn its keep as my instructional collaborator.  (For the entire text of this example lesson plan, click here to view it as a Google Doc.) 

The implications of all of this seem clear, if not outright ominous: if an AI tool today can write essays and poems, how must that change how we teach and assess student writing?  If a bot today can write a lesson plan, how does that change or even threaten the teacher's job in the future (in both the metaphysical and literal senses of the word "job")?

It should be no surprise that much of the reaction to ChatGPT is somewhere between existential dread and anxiety.  This has already resulted in some traditional "arms race" approaches, such as fighting potential student (mis)use of ChatGPT with a tool that estimates the likelihood that a submitted text was created by AI.  But is this sidestepping a bigger question?  Recently in The Atlantic, Isabel Fattal and Derek Thompson discussed our potentially "wild future."  If an AI can write a college essay, maybe we should re-evaluate the purpose of a formulaic college essay in the first place.  As Thompson posits, "Some people argue that ChatGPT could replace the college essay, and Ian [Bogost, another staff writer] is saying: That’s only because the college essay is dumb to begin with. It’s possible that lots of things in the economy are dumb in the way that the college essay is dumb. If that’s true, then GPT can still be revolutionary. What it does might be dumb, but it’s also incredibly useful."  An AI that can write a lesson plan might be better seen not as a threat to what an educator can create, but instead as an "inspiration stimulant" (Thompson's phrase) that serves to improve what a final lesson plan might be.  

Earlier this fall, Corey Mohn, President and Executive Director of CAPS Network, recommended to our Deeper Learning Team a book just published in 2021 -- Running with Robots: The American High School's Third Century by Greg Toppo and Jim Tracy.  (Corey had me at "Greg Toppo," as I've been a fan of his since 2018, when I first read one of his earlier books.)  When ChatGPT hit the headlines, I binge-read the book, as it seemed eerily prescient to the current hot topic question of AI's impact on education.  At this highly charged inflection moment, when the need for transforming school is critical, I could not more highly recommend this book.  It's a fascinating and engaging blend of four narrative strands: a history of American public education from the 1800s to the present; the impact and ramifications of recent AI developments, with predictions of what is to come; a spotlight on four high schools operating today that serve as models of how school can and should be; and last but not least, a novelistic Rip Van Winkle utopian story in which a principal falls asleep in 2020 and wakes up twenty years later to visit his radically changed school. 


For Toppo and Tracy, future AI should not be seen as the bleak robotic replacement of teachers and humanity itself, but instead as a challenge we must acknowledge and address: "As a society, we need a 'pre-AI moment' -- that is, a systemic anticipatory response on a scale commensurate with a previous generation's 'post-Sputnik moment.' And, like the response after the fact of Sputnik in the 1950s, this should transpire as much in the realm of educational as in policy initiatives" (190).  But our "systemic anticipatory response" must recognize why prior educational initiatives have failed:
[D]ecades of failed revolutions, of the "next big thing," of teaching machines and expensive technical marvels, have failed to transform a hidebound, nearly intractable system.  Revolutionizing education is neither wise nor feasible.  Rather, we must evolutionize education, seeking to forge the optimal synthesis of the best and most germane from the received tradition with the most promising of emergent pedagogical practice.  After all, the received traditions have been honed over millennia and have much that is fine and still relevant, while new educational paradigms are often faddish and ill-considered.  The careful sifting and selection is essential. (37, authors' italics)

We don't have to throw the traditional pedagogical baby out completely with the "hidebound" bathwater, but we certainly have to evolve our educational practices in light of our near-future technology and needs.

Toppo and Tracy point out an illustrative example involving math instruction.  By the 1970s, pocket calculators had become cheap and commonplace enough that an average student could easily own one.  Technology offered us a chance at partnership.  This should have been a watershed moment for instruction, when math teachers began focusing at least as much (if not more!) time on mathematical practices -- on authentic practical application, on conceptual knowledge, on experiencing the aesthetic beauty of mathematical thinking -- rather than having students spend their time mindlessly plugging and chugging through memorized but ill-understood formulas.  However, even a half century later, with the latest smartphones and laptops and AI apps, we still spend a disproportionate amount of time asking students to compute like, and compete with, a computer instead of thinking mathematically like a human and collaborating on solutions with other humans.  As the authors' fictional future principal puts it, "We humans are still pretty unique in our ability to make creative connections across apparently disparate knowledge areas....That's the value of omnidisciplinary literacy. The time that is freed up by not bringing every student to content fluency can be devoted instead to imbuing them with process fluency" (16, my italics).  In other words, have students spend less time regurgitating facts they can Google and practicing skills rapidly being augmented or replaced by AI, and instead spend more time learning the critical human applications that truly matter and can't be easily supplanted by a robot.***

The idea that AI can actually be the savior of educators, education, learners, and learning -- not their destroyer -- really goes to the heart of our current Deeper Learning work.  We pose questions such as "How might we reimagine the student experience to become more meaningful, relevant, and inspiring so that every learner is equipped for a successful future?"  We know the answer is not more regurgitation of facts, more formulaic writing, more solving of inauthentic scenarios of two trains running toward each other on the same track.  We cannot have students spend years learning the "hidebound" traditions of the so-called fundamentals before they get a chance to "play the whole game," to use Jal Mehta's excellent Little League analogy.  We need more lessons that emphasize Profile of a Graduate competencies leading to application of durable skills, which will likely be the last bastion of uniquely human cognition left in the decades to come.  If we hesitate to transform teaching now, AI may truly become humanity's rival instead of our partner in the future.

To take us back to our present ChatGPT dilemma for student writing, I offer three possibilities for teachers.

  • Lean into ChatGPT as an opportunity for a student to generate a first draft, provided the understanding is that the student revises that draft to further enrich and enlarge their thinking with additional detail and complexity.  The student should also do "reverse engineering research" to find, validate, and cite any facts or quotes given in ChatGPT's text.  As the student receives feedback and shares subsequent drafts, they are responsible for being transparent about their writing process and are asked to reflect on how the paper evolved from ChatGPT's initial text to their own final product.
  • Create "mini-dissertation defense" opportunities where students must orally explain their writing to the teacher and/or peers.  A student who can metacognitively talk through their thinking shows that they, and not a computer, have achieved mastery.  Pose questions and inquiries like, "Where are you personally connected in the work?" or "Explain your use of metaphor here."  Of course, such oral opportunities also provide practice and proof of those Profile of a Graduate competencies such as effective communication. 
  • "Where's the epiphany?"  If the expectation of the writing is simply to synthesize known facts into a narrative, tools like ChatGPT will continue to put the human 3.5 essay out of business.  Let's instead make the expectation of the writing the creation of a new insight or invention, not a regurgitation of bullet points.  Consider ways to refine what your writing prompt is asking for, such as the performance assessment criteria tool in my blog entry "From Good to Great, Initial to Ideal: A Way to Improve Exhibitions and Other Performance Assessments."

I'll end with a rallying cry from Toppo and Tracy:
So, if we want to help our young people thrive in a world of miraculous technology, we must forget the nouns -- the AI and robots and intelligence systems -- for these will always be changing.  We must focus instead on the verbs: What it is we trust our young people to do that we don't trust technology to do?  And how can we prepare them for this future?   
For starters, let's not give them robots' work.  Let's trust them to do better, harder, more rigorous, more interdisciplinary, high-stakes work -- and not just for the 10 percent, but for everyone.  We must expect more of young people, not only because the world will expect it of them but because they are begging us to do so. (70)
Let's teach our students to run with the robots, and not against them.



Some footnotes:
  
* Toppo and Tracy, pages 82-83.

** Toppo and Tracy, pages 201-203.

*** "For so many students, school is simply not demanding enough - and the dilemma, of course, doesn't begin in high school. When the Education Trust, a civil rights group that advocates for low-income and minority students, examined more than 1,800 math assignments given to middle school students in six urban, suburban, and rural school districts in 2018, it found that most of the assignments featured 'low cognitive demand,' overemphasized procedural skills and fluency, and provided little opportunity for students to communicate their mathematical thinking.  The problem was often worse in high-poverty schools" (Toppo and Tracy, page 70).  
