AI: Challenges and opportunities

by Malinga
November 5, 2023

While this editorial was certainly not written by or with the help of Artificial Intelligence (AI), it could easily have been, using freely available AI programs and web software such as ChatGPT. Such is the power of AI today. AI, along with Machine Learning (ML), Robotics, Natural Language Processing (NLP), Computer Vision (CV) and the Internet of Things (IoT), has become a somewhat controversial presence in our tech-driven world.

Elon Musk of Tesla, SpaceX and X (formerly Twitter) fame, himself no stranger to AI through his Neuralink venture, last week declared AI “one of the most disruptive forces in global history” in a sit-down conversation with British Prime Minister Rishi Sunak. The duo dove into the dangers and opportunities of AI during the UK’s inaugural AI Safety Summit.

“AI will be a force for good most likely,” Musk said. “But the probability of it going bad is not zero percent.” Musk also outlined the tantalising but grim possibility that AI could take over almost all jobs that currently require human or even animal intelligence (such as that of sniffer dogs).

“I’m glad to see at this point that people are taking AI seriously,” Musk told Sunak. Musk also had some positive thoughts on AI, predicting that AI companionship could become one of the highest forms of friendship. This idea has already been explored in movies such as A.I. and Bicentennial Man, but this is the first time a global tech leader has made such a bold prediction.

But the bigger fear is that AI could go wrong or even develop a mind of its own, a concept that formed the basis of Arthur C. Clarke’s 2001: A Space Odyssey and Stanley Kubrick’s seminal film of the same name. In that story, the computer HAL (Heuristically programmed ALgorithmic computer) goes haywire, overriding human commands and posing a threat to the very survival of the spacecraft’s crew.

Even now, the destructive side of AI is on display in the two main theatres of war – Israel/Gaza and Russia/Ukraine, where some advanced weapons have AI capabilities. Could AI be developed to the extent that it threatens the very existence of humanity itself? This question is increasingly being asked, even though James Cameron foresaw it decades ago in his hit movie The Terminator, in which an AI program called Skynet takes over the world and wages war against humanity.

This fear, and myriad other concerns, have driven Governments around the world to regulate the development of AI and ML. The United States, where some of the leading tech companies are headquartered, recently adopted an Executive Order (EO) on AI Regulation. US President Joe Biden signed a sweeping AI EO, wielding the force of agencies across the Federal Government and invoking broad emergency powers to harness the potential of AI and tackle the risks of what he called the “most consequential technology of our time.”

“One thing is clear: To realise the promise of AI and avoid the risks, we need to govern this technology,” President Biden said during a White House address ahead of the signing, calling the EO the “most significant action any Government anywhere in the world has ever taken on AI safety, security and trust.”

The EO directs the Government to develop standards for companies to label AI-generated content, often referred to as watermarking, and calls on various Government agencies to grapple with how the technology could disrupt sectors including education, health and defence. Indeed, some experts fear that robotic, AI-equipped teachers, medical personnel and soldiers could be extensively used in these sectors in just 20-30 years. Already, many diagnostic tests for cancer and some other diseases are assisted by AI. One cannot even imagine how these fields will develop over the next 20 years.

Other regions and countries have taken note of what the US is doing. The European Union is expected to reach a deal by the end of this year on its AI Act, a wide-ranging package that aims to protect its residents from potentially dangerous applications of AI. China is introducing new laws for generative AI systems. G7 countries have announced voluntary guidance for companies, called the International Code of Conduct for Organisations Developing Advanced AI Systems. Many other countries are developing laws and regulations governing the use and development of AI and ML. Sri Lanka too should bring in laws that govern the development and deployment of AI.

There is no doubt that AI can be a force for good in a world driven apart by conflict and inequality. For example, AI software can be used to identify potential agricultural sites which may be able to feed thousands, if not millions, in a world where 800 million people go to bed hungry every night. Likewise, the early detection of cancer and other life-threatening diseases by AI could save thousands of precious lives. But deciding where to draw the line before AI overtakes human intelligence will perhaps be the greatest challenge of our time.
