AI Policy?
@LisaBerghoff/@DanKim
Last week the Biden Administration issued an executive order for "Safe, Secure, and Trustworthy Artificial Intelligence." The order looks to maximize the benefits of AI by sustaining American leadership in research and development while managing potential security risks and other opportunities for harm and discrimination. How should our nation keep pace with innovation and competition while safeguarding our privacy and equity standards? These are country-sized questions being worked on collaboratively by various nations, organizations, and even the UN. Much like the onset of the computer and the internet, recent advancements in AI could be the next big thing to alter the paradigm. Bigger questions like these also tend to hide smaller questions beneath them. Questions like: Which of my assignments should I allow my students to use AI for? How formally should students communicate their use of AI to their teachers?
Will we have our very own "executive order" on the educational use of AI in our district, building, or classroom? If our students are urged not to use AI to fabricate their assignments wholesale, should staff be barred from (or penalized for) using AI to write an evaluation reflection in TalentEd? What about letters of recommendation? These questions are currently being asked in staff lounges and administrative meetings all around the world. So what are other teachers and educational institutions thinking?
Ethan Mollick is an associate professor at the Wharton School of the University of Pennsylvania who actively studies the impacts of artificial intelligence on work and education. He recently created and released his course policy on AI, and it starts with, "I expect you to use AI in this class...In fact, some assignments will require it..."
He then goes on to highlight the limitations of large language model AI tools like ChatGPT. Mollick's policy reflects the thinking that AI usage is now ubiquitous among our students and that we are well past the point of banning it outright. Instead, he argues that appropriate, guided use of AI tools can be productive and beneficial to student learning. We should keep in mind, though, that he is teaching undergraduate and graduate students.
Another voice, coincidentally also from the University of Pennsylvania, is Professor Jonathan Zimmerman, who teaches education and history. His relatively recent op-ed in The Washington Post, titled "Here's my AI policy for students: I don't have one," attempts to convince readers that over-relying on AI tools to think for you robs you of the capacity to know what you really believe. Zimmerman argues that the banal processes AI generators promise to eliminate are actually crucial to your development as a critical consumer (and producer) of information. He writes:
I want you to be intelligent. I want you to stare at a blank page or screen for hours, trying to decide how to start. I want you to write draft after draft and develop a stronger version of your own ideas. I want you to be proud of what you accomplished, not ashamed that you cut corners.
...So here’s my question: Do you want to live your life this way? If so, AI bots are definitely for you. Let them write your essays, do your problem sets, draw your artwork, compose your poetry. As they get better, outpacing the systems designed to detect them, you’re less and less likely to get caught. And you might even ace your classes.
But you will never know what you really believe. You will become the kind of person who is adept at spouting memes and clichés. Like ChatGPT, you will sound as if you know what you’re talking about even when you don’t.
I will readily (and unhappily) admit that many college classes don’t help you figure out what you really believe in. They reward students who spit back what the book or the professor says. You might as well be a robot. So I don’t blame you if you draw on an actual robot to do the work for you.
But some courses really do ask you to think. And if you ask an AI bot to do it instead, you are cheating yourself. You are missing out on the chance to decide what kind of life is worth living and how you are going to live it...
I don't necessarily think these two schools of thought are mutually exclusive paradigms. Both have their merits and sound justifications. The difficulty is that so many of the answers are contingent on conditions and standards that bleed through myriad shades of gray. Where do we begin to wrap our heads around this? If there was ever a time to be extra aware, attentive, and thoughtful about one's own curriculum and pedagogy, now may be the time to change the paradigm. Let us know what you think below!
While I do not support students using AI to create entire essays, another point worth paying attention to is that fact-checking skills become even more important for students to learn than they already are, since bots like ChatGPT and Bard rely on datasets that may or may not be accurate.
In a lot of ways, these bots are like that one guy everyone seems to know who will give you a completely false answer to a question a dozen times rather than admit he doesn't know. I can't remember the case (I could find it if necessary), but there was a lawyer who was penalized (disbarred?) for using these tools to find cases to establish precedent in a trial. When the judge attempted to look up the cases that were referenced, he discovered that they did not exist.
Depending on the prompt, these tools can become the epitome of confirmation bias if sources are not properly checked.