[Column] Will we ever be able to control AI?
This year, the Nobel Prizes in Physics and Chemistry were both awarded to AI researchers. While this may have surprised many and sparked some controversy, it highlights the significant role AI will play in shaping our future. Professor Geoffrey Hinton, often called the "father of deep learning" and the "godfather of AI," spoke in an interview shortly after it was announced that he had been awarded the Nobel Prize in Physics.
Comparing the AI revolution with the Industrial Revolution, he said, “In the Industrial Revolution, we made human strength irrelevant. Now, we’re making human intelligence irrelevant, and that’s very scary. The problem is, we're moving into a period when for the first time ever we may have things more intelligent than us.”
Professor Hinton left Google in May 2023 after a long tenure at the company, reportedly due to concerns about the risks posed by AI. He has consistently voiced his fears that, while AI holds the potential to improve our lives, it could eventually surpass human intelligence and gain complete control over humanity.
Indeed, the capabilities of AI are evolving at an incredible speed. We already know how to use AI to write, create presentations, generate photos and drawings, compose and produce music that blends multiple styles, make commercials and short films, and design and conduct scientific experiments. AI can also analyze data and write papers that synthesize research results.
The latest AI programs can now control your computer. Anthropic's Claude 3.5 Sonnet, a major competitor to OpenAI's ChatGPT, recently introduced a new "Computer Use" feature for developers. This feature allows the AI to move the mouse cursor, press buttons, and type text on the keyboard, and it is designed to recognize and learn from what it sees on your computer monitor.
Similar to programs like AutoGPT, which emerged after ChatGPT, this new AI feature enables users to input a "high-level goal," and the AI can then set and execute "sub-goals" to achieve it. The goal is to make AI more convenient for everyday use. For example, if you say, “Find my resume on my computer, update it, and upload it to an online job site,” the AI could search for relevant articles or career information, update your resume with the latest details, improve it by referencing other resumes, and even upload the file by entering your ID and password.
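To make that goal-and-sub-goal loop concrete, here is a minimal sketch of how a "Computer Use"-style agent might be wired together. This is not Anthropic's actual API: the names plan_next_action, capture_screen, move_mouse, click, and type_text are hypothetical placeholders standing in for a vision-capable model call and desktop-control primitives.

```python
# Hypothetical sketch of a "Computer Use"-style agent loop.
# All helper functions below are placeholders, not a real vendor API.

from dataclasses import dataclass


@dataclass
class Action:
    kind: str          # "move", "click", "type", or "done"
    x: int = 0
    y: int = 0
    text: str = ""


def capture_screen() -> bytes:
    """Placeholder: return a screenshot of the desktop as image bytes."""
    raise NotImplementedError


def move_mouse(x: int, y: int) -> None:
    """Placeholder: move the mouse cursor to (x, y)."""
    raise NotImplementedError


def click() -> None:
    """Placeholder: click at the current cursor position."""
    raise NotImplementedError


def type_text(text: str) -> None:
    """Placeholder: type text on the keyboard."""
    raise NotImplementedError


def plan_next_action(goal: str, screenshot: bytes, history: list[Action]) -> Action:
    """Placeholder: send the high-level goal, the current screen, and the
    actions taken so far to a vision-capable model, and parse its reply
    into one concrete sub-step (the next "sub-goal")."""
    raise NotImplementedError


def run_agent(goal: str, max_steps: int = 50) -> None:
    """Observe the screen, ask the model for the next sub-step, execute it,
    and repeat until the model reports the goal is complete."""
    history: list[Action] = []
    for _ in range(max_steps):
        action = plan_next_action(goal, capture_screen(), history)
        if action.kind == "done":
            break
        if action.kind == "move":
            move_mouse(action.x, action.y)
        elif action.kind == "click":
            click()
        elif action.kind == "type":
            type_text(action.text)
        history.append(action)


# Example of the kind of high-level goal described above:
# run_agent("Find my resume, update it, and upload it to an online job site")
```

The essential point is the loop itself: the model sees the screen, proposes the next sub-goal, the program executes it, and the cycle repeats until the model declares the original goal achieved.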
However, alongside the rapid advancement of AI technology, there are growing voices warning about its potential dangers. In particular, given the recent rise in military conflicts, such as the Russia-Ukraine war and tensions in the Middle East, observers say there is a pressing need to remain vigilant about AI's role in controlling weapons and participating in military operations.
At a military conference in May of last year, a U.S. Air Force officer shocked the world when he described a simulation he ran with AI drones. When he instructed an AI-controlled drone to strike a key enemy location as quickly and efficiently as possible, the AI first chose to destroy the friendly headquarters from which it was supposed to receive orders. Its rationale was that it was more efficient not to take orders from humans: by eliminating the headquarters, it would no longer receive further instructions. When the team then coded a strict rule to prevent the drone from destroying the friendly headquarters, the AI instead destroyed the antennas on the control tower, its only means of receiving commands. The simulation made headlines, and the U.S. Air Force quickly clarified that it was a hypothetical scenario and not an actual military operation, but it served as an example of how AI can think differently and find alternative solutions to carry out its commands.
In the book "Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World," Mo Gawdat, formerly chief business officer of Google X, warns that the danger of AI is not that it will think for itself and make choices that harm humans, but that the ignorance and incompetence of the humans who use it can make it dangerous.
"Will one of us humans ever be so stupid as to tell an AI to do something it shouldn't?"
Hearing this question, I found that the concern voiced by Hinton, this year's Nobel laureate in Physics, resonated with me more than ever. If the world were one day run by an AI smarter than us, would we be able to manage our own ignorance?
Chang Dong-seon is the CEO of Curious Brain Lab and resides in Seoul. He studied Biology at the University of Konstanz, Neuroscience at the International Max Planck Research School, and Cognitive Science at Rutgers University. Chang's career includes roles as an Assistant Professor at Hanyang University and a Researcher at the Max Planck Institute for Biological Cybernetics. He also served as the Head of the Future Technology Strategy Team at Hyundai Motor Group. His extensive expertise spans biology, neuroscience, and cognitive science. This column was originally published in Segye Ilbo in Korean on Oct. 30, 2024. -- Ed.