
Artificial Intelligence, UAT News, Ethics, ChatGPT

8 Min Read

ChatGPT and AI: Moral Quandaries of Emerging Technologies


Professor Belanger is the Program Chair for General Education/Core and teaches across several areas in the Humanities, including literature, creative writing and composition, and cultural studies. During his career as a writer and editor, he has served as chief editor for The Journal of Advancing Technology and as coordinating editor for a reference series on the 2000s, and he has written numerous articles on history and culture for print and online journals.


In the Ethics, Technology & Society course, I use the concepts of conceptual and policy vacuums to explain the social and ethical complications that can arise when a new technology is introduced to the world. This idea comes from the philosopher James Moor, who in 1985 described how a new technology (and its associated trends, user behaviors, modifications, etc.) often poses interesting conceptual questions that have to be answered before we can address the formal and informal rules that govern the design, creation, use, modification, and sale of the technology in question. That initial uncertainty about what the technology even is constitutes the conceptual vacuum; the absence of rules, laws, and norms governing it is the policy vacuum.

OpenAI’s ChatGPT and other large language models illustrate Moor’s idea. At the conceptual level, a tool like ChatGPT presents a number of pathways to understand what it is and what it can do. It is, simultaneously, a chatbot that simulates conversation, a search engine, a text generator, and broadly, a fun tool for the AI-curious who simply want to play around. I’m certainly leaving out many other uses, but that is the nature, conceptually speaking, of many new technologies—we will discover what they “are” the more we use them.

While the conceptual aspect of ChatGPT is mostly benign because of its perceived benefits (more efficient knowledge management, smarter tools for workers, etc.), we are witnessing a wide-scale moral panic over AI's policy vacuum. What are the appropriate limits we should put on AI tools? Who should be able to use them, and when? Should we limit their development or curtail it altogether? If it's wrong for a college student to use one of these tools to generate an essay from scratch, is it also wrong to collaborate with it to compose a well-developed and articulate essay?



This phase of attempting to fill the absence of rules, both formal and informal, around AI is fascinating to watch, but also a little confusing, because serious questions remain about AI's malignant potential. At the risk of reducing the potential evils of superintelligence to a fictional character: picture HAL 9000 from 2001: A Space Odyssey, and that is essentially what people are afraid of. HAL 9000 is a murderer. HAL 9000 brute-forces a mission change because it (he?) feels threatened. Putting down HAL 9000 is necessary because leaving it running poses an existential threat to the humans aboard.

While AI proponents bristle whenever this issue is raised, numerous tech-savvy organizations are issuing statements like this one, released just a few weeks ago by the Center for AI Safety: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” (https://www.cnn.com/2023/05/30/tech/ai-industry-statement-extinction-risk-warning/index.html). Although they appear to have fewer worries than the CAIS, the creators of ChatGPT recognize that the risk potential of AI is high enough that they recommend mandating limits on AI development. (https://openai.com/blog/governance-of-superintelligence#SamAltman)

I always warn my students not to rely on science fiction for their ethical ideas about technology, because the majority of stories about technology tend to be paranoid fantasies about all the ways things can go wrong. Something similar is happening today with AI, because we simply don’t know enough about its potential, and what we do know rings alarm bells. The rules of use regarding ChatGPT and similar models are not yet settled. If we don’t know exactly what this technology is capable of—what it is and what it can do—it’s difficult to trust it. This wouldn’t be a problem if all of the implications of AI tools were positive, but we don’t yet know that they are. Hopefully, we will figure it out.

Katy Toerner