Learning and Teaching in the Age of AI
@LisaBerghoff/@MrKimDHS
*The following blog was written by a human being
Last night a few colleagues and I went to the Learning and Teaching in the Age of AI conference held by Lake Forest Country Day School, where we listened in on a variety of breakout sessions and guest panels run by some pretty serious and interesting folks. Professors of computer science, technology, and literature spoke on topics all relating to what they are seeing as the impacts of AI in their respective fields and classrooms. Besides being the only PD I've been to that had booze at the end, it was one of the first times as an educator that I truly felt my mind was being blown by what was being discussed. It was intriguing and thought-provoking, but it also gave me the heebie-jeebies.
For the most part, I've got a decent understanding of current forms of AI and how they integrate into our professional and personal lives. My Roomba, Frank, has been terrorizing my kids and dog for months now, and my daughter is so afraid of Frank that she is all but frozen with fright at the sight of it, which leaves me no choice but to keep it constantly covered with a towel. When she's at daycare, Frank expertly navigates a fairly perilous floor of toys, towels, and other traps using a combination of LIDAR, cameras, and an AI system that FAR outpaces its predecessor, Bob. But what does it mean that my robot vacuum Frank uses AI to clean my floor? Alan Turing OBE, the famed British mathematician, philosopher, and computer scientist, still holds the best description of what AI truly is: an intelligent computer system that can imitate a human being. Can your self-driving car or robot vacuum or autofill email successfully predict the nuanced actions of what a human would have done in its place? A human, ideally, would see a pedestrian and slow down and stop the vehicle. A human would see dog poop on the floor and vacuum around the mess instead of dragging it around the entire first floor. Frank, in turn, navigates my filthy and battered floor the same way a rational human would: avoiding messes and traps while also creating an efficient route that is responsive to my floor plan and its battery life.
So why do ChatGPT and Midjourney and other AI tools feel so much more impactful and impressive, and rather more terrifying, than Frank? It may be because they have the capacity not just to imitate us, but to surpass us and replace us with a superior version of anything we could do ourselves. What I cannot wrap my head around is the absolutely breakneck pace at which AI is improving...and what AI is going to look like in the next few years. A popular Reddit post on ChatGPT summarized the newest and best ChatGPT API tools that were created in JUST THAT WEEK, and it was dizzying how many advancements and resources the user was able to compile.
The freely available iteration of ChatGPT currently runs on GPT-3.5. OpenAI released GPT-4 behind a paywall last month, and the productive difference between 3.5 and 4 is staggering. Take a look at what GPT-4 can do that its predecessors cannot. And the rumors are that GPT-5, which has not yet been officially announced by OpenAI, will be the closest thing to general AI that mankind has ever created. If true, the impacts would be exponential and staggering. A general AI can reason and plan and make unilateral judgments and learn, instead of merely guessing the next logical word, which is what ChatGPT 3.5/4 is doing (it's essentially the world's most complicated autofill program). What the heck does this all mean for us as teachers and parents and citizens?
Paradigm Shift
In one of last night's breakout sessions, I had the unexpected pleasure of sitting next to former HP principal Tom Koulentes, who asked a poignant question: how do we adjust the ways we challenge our students in the face of something like ChatGPT? When Google and Wikipedia became more readily available, educators were pushed to avoid simple fact/recall questions and offer more analytically heavy formatives that students couldn't just "look up." But what do we do now?
Dr. Janel White-Taylor, an associate professor of educational technology at Arizona State University, offered some assurances that once you are familiar with the academic "voice" of your students, you can easily discern the difference between legitimate work and artificially produced work. While that may be the case now, I wasn't so sure of my own ability to identify the surreptitious use of ChatGPT, especially when newer, more powerful versions are made available.
Dr. White-Taylor also offered the idea of assigning fewer essays and substituting different ways for students to demonstrate authentic learning. The social studies teacher in me cringed a little bit, but for the most part, I was just confused. What do we replace The Essay with? Isn't it still mission-critical for students to work and think long and hard on an analytical piece of writing? Don't we have an almost fiduciary duty to our kids to give them the skills to take the important and complex arguments and theories in their minds and manifest them as something concrete? Do we just do more in-class writing assignments? Didn't we already abandon penmanship and cursive for QWERTY? Video responses and speech assignments and project-based creations? Dr. White-Taylor didn't really have a good answer, and I'm beginning to think that there really isn't a good answer yet. And our careful answers to AIs like ChatGPT may become obsolete and redundant in a matter of MONTHS, not years, as the pace of innovation sprints ahead of what we can properly respond to. Luckily, Turnitin has released its AI writing detector in Australia and New Zealand, and it reportedly boasts a 98% confidence rating. Can firms keep up with the exponential growth of AI with their own AI?
Something Safe
I could go on and on and on about everything I saw and heard and thought about during last night's conference. Firms are offering six-figure salaries to "prompt writers," professionals who can craft the most efficient series of commands to AI tools to get the desired outcome. Apparently, administrators at Vanderbilt University sent a ChatGPT-generated email to the student body in response to the horrific Michigan State University shooting in February and are now on administrative leave. One of the breakout sessions was titled, "Ethics and AI - Can a Computer Pray?" One panelist mentioned how simple coding scripts can instantaneously be written for you by ChatGPT, and he doesn't know how much longer he'll be teaching those scripts in his beginner coding class. I then realized that ChatGPT can give you detailed instructions on the QUERY function in Google Sheets (if you know what to ask it)!
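For example, asking ChatGPT something like "Write a Google Sheets QUERY formula that averages the scores in column D for each student in column A, but only for the rows where column C says Period 2" will get you back a working formula along these lines (the gradebook layout here is just made up for illustration):
=QUERY(A1:D50, "select A, avg(D) where C = 'Period 2' group by A", 1)
It will also walk you through what each piece of the formula does, which is half the battle when you're learning QUERY in the first place.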
Again, that all seemed somewhat overwhelming, so I've decided to compile some simple, concrete ways to incorporate AI into the production side of teaching. I'm not offering much insight on how to bring AI into direct instruction. I'm not quite there yet. But on the teacher productivity side, there's much to see. Here are a few examples of how I've used ChatGPT and AI in the classroom, as well as examples given last night:
Review Materials:
Last week I had a number of students miss an all-important Monetary Policy lecture on how the Federal Reserve impacts interest rates. In addition to reminding my students via email that the presentation and lecture notes and assignments were on Schoology, I decided to ask ChatGPT for assistance. The prompt went something like this: "You are now an AP Macroeconomics teacher and you had a number of students miss your lecture on monetary policy. Write a brief explanation of the three Fed monetary tools used to change the money supply." Then I asked it to rewrite the explanation as if it were talking to a 10-year-old. Then to a 5-year-old. I read and reread each excerpt for accuracy, created three separate docs, and sent them to my students. Students were immediately intrigued, especially considering ChatGPT is certainly more interesting and clever than I am. It took us (ChatGPT and me, that is) a mere 45 seconds of prompts and re-prompts to generate the three review excerpts, and it took me 5 minutes to slap them into a Google Doc and digitally ship them off to my students.
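If you want to recreate the workflow, the whole exchange boiled down to three messages (paraphrased from memory, so your exact wording will vary):
- "You are now an AP Macroeconomics teacher and a number of students missed your lecture on monetary policy. Write a brief explanation of the three Fed monetary tools used to change the money supply."
- "Now rewrite that explanation as if you were talking to a 10-year-old."
- "Now rewrite it for a 5-year-old."
Each follow-up keeps the context of the previous answer, so you don't have to restate the assignment every time.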
Warning: ALWAYS review what ChatGPT generates for accuracy
Generating Templates and Documents
Need to write an email and.....don't want to? Enter a specific prompt into ChatGPT, then use and edit what it gives you as a template. Prompt writing may become the next skill we want to pay attention to: the ChatGPT-generated product is only as effective as the prompt. If you don't like the output, refine your prompt conversationally (see the example after the list below). Here are a few email prompts to get you started:
Could you create an email template for…
- Reminding students about missing homework.
- Parent communication regarding absences.
- Information going out to a club, sport, or activity.
Could you write a direct email to colleagues about...
- Anything
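As for refining conversationally, here's what that might look like in practice (a made-up exchange, not a script from the conference): if the first draft of your missing-homework reminder comes back stiff and three paragraphs long, just reply with something like "Make it warmer and shorter, two sentences max, and mention that the late window closes Friday." ChatGPT will rewrite the same email with those constraints, and you can keep nudging until it sounds like you.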
I even asked it to write a song to the tune of Everlong by The Foo Fighters explaining Monetary Policy. I mean....it produced it, but it's certainly not winning any Grammys anytime soon.
Conclusions?
Here's where we need to go next: as an educational community we have to gain some insights into AI and envision its role in our pedagogy. Do we outright ban its use and strike it from our network, or do we embrace it and utilize its efficiencies? To what extent do we add AI to our digital literacy curriculum? Do we teach our elementary and middle school students about ChatGPT and why it's useful or harmful to their learning? What about the ethics of AI-produced work? Who "owns" the value of what it generates? Should staff automatically disclose when they use it? What are the costs of using ChatGPT to write a letter of recommendation for students or colleagues if they are then accepted/admitted to their program of choice?
This blog could go on and on and on and it still wouldn't cover the interesting things in AI that happened just this week. So I'll just leave you with something one of the panelists said that stayed with me on the drive home. Dr. Sugata Banerji, an associate professor of computer science at Lake Forest College, said that "no professional runner competes without shoes, and within 1 year I don't imagine a world where any educator doesn't use AI in their classrooms."
This was an awesome read. I personally believe that banning AI outright is a mistake, and dare I say, a regressive approach to education. As you've made abundantly clear, the technology is here and advancing at a remarkable pace. It's not going to go anywhere. Our role as educators is to prepare our students to thrive in the world. Pretending AI doesn't exist and asking them to perform their coursework as if it's not a thing that will be penetrating nearly every aspect of their lives in 20 years isn't doing anyone any favors, except for those who simply aren't open to considering new technologies on the basis of "we've always done it this way..." or "when I was a student..." Life is change; technology is change. To provide a meaningful education we need to adapt and use these new tools to enhance the learning process rather than obscure it.