Machine anxiety: How to reduce confusion and fear about AI technology
In the 19th century, computing pioneer Ada Lovelace wrote that a machine can only “do whatever we know how to order it to perform”, little knowing that by 2023, AI technology such as the chatbot ChatGPT would be holding conversations, solving riddles, and even passing legal and medical exams. This development is eliciting both excitement and concern about the potential implications of these new machines.
The ability of AI to learn from experience is the driving force behind its newfound capabilities. AlphaGo, a program designed to play and improve at the board game Go, played countless games and went on to defeat its creators using strategies they couldn’t explain. Similarly, ChatGPT has processed far more books than any human could ever hope to read.
However, it is essential to understand that the intelligence exhibited by machines is not the same as human intelligence. Different species exhibit diverse forms of intelligence without necessarily evolving towards consciousness. An AI, for example, can recommend a new book to a user without any need for consciousness.
The obstacles encountered while trying to program machines with human-like language or reasoning led to the development of statistical language models, the first successful example of which was built by Frederick Jelinek at IBM. This approach rapidly spread to other areas, leading to data being harvested from the web and to AI systems focused on observing user behaviour.
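The core idea behind such statistical language models is simple: instead of hand-coding grammar rules, count which words tend to follow which in real text, and predict accordingly. A minimal bigram-model sketch illustrates the principle (the toy corpus and function names here are illustrative, not from the source or from Jelinek’s actual system):

```python
from collections import defaultdict, Counter

def train_bigram_model(corpus):
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def next_word_probabilities(counts, word):
    """Turn raw counts into conditional probabilities P(next | word)."""
    following = counts[word]
    total = sum(following.values())
    return {w: c / total for w, c in following.items()}

# Toy corpus for illustration only.
corpus = [
    "the cat sat on the mat",
    "the cat ate the fish",
]
model = train_bigram_model(corpus)
print(next_word_probabilities(model, "the"))
# "the" is followed by "cat" twice, "mat" once, "fish" once,
# so the model assigns "cat" probability 0.5.
```

Modern systems such as ChatGPT are vastly more sophisticated, but they share this lineage: learning patterns from observed data rather than from explicitly programmed rules.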
While the technology has progressed significantly, there are concerns about fair decision-making and the collection of personal data. The delegation of significant decisions to AI systems has also led to tragic outcomes, such as the case of 14-year-old Molly Russell, whose death was partially blamed on recommendation algorithms that showed her harmful content.
Addressing these problems will require robust legislation to keep pace with AI advancements. A meaningful dialogue on what society expects from AI is essential, drawing input from a diverse range of scholars and grounded in the technical reality of what has been built rather than baseless doomsday scenarios.
Nello Cristianini is a Professor of Artificial Intelligence at the University of Bath. This commentary first appeared on The Conversation and was republished by Channel News Asia.