
AI Doom Not Imminent, Say Officials at UK Summit

AI Systems Don't Yet Pose Risk of Loss of Control, Say Attendees
International leaders at the UK AI Safety Summit on Nov. 1, 2023 (Image: U.K. government)

As day one of the U.K. AI Safety Summit drew to an end, attendees said fears of losing control of AI systems are a worry for the future, although they appeared to agree that securing AI is a pressing topic today.


Secretary of State for Science, Innovation and Technology Michelle Donelan opened the session, warning of the dangers of AI if it "concentrates unaccountable power in the hands of few." She said the event sought to hold "honest and candid" conversations about such risks posed by AI and address these concerns.

The Conservative government of Prime Minister Rishi Sunak convened the summit as part of its bid to convert Great Britain into a global hub for AI development - a goal that has been met with skepticism by academics (see: UK's AI Leadership Goal 'Unrealistic,' Experts Warn). The summit is set to conclude on Thursday.

Commenting on the behind-closed-doors discussion of the possible scenario of AI systems escaping human control, Singapore Communications and Information Minister Josephine Teo said that while participants held "divergent" views, the general consensus was that AI does not pose an imminent risk of loss of control.

"AI systems today do not yet pose a real risk of loss of control," Teo said. "They require human prompting and generally fail when asked to plan over time toward a goal. And current models also have limited ability to take action in the real world."

Poppy Gustafsson, the CEO of British cybersecurity firm Darktrace, said the discussions largely focused on the "daily reality" of artificial intelligence rather than on "hypothetical risks of the future."

"I was a little worried that we were all going to be chatting about hypothetical risks of the future and robots are going to kill us all and not talking enough about AI like we are using it now," she told the Independent.

Chinese Vice Minister of Science and Technology Wu Zhaohui, while calling for a global effort in developing international AI governance, stressed nations' right to develop their own AI systems.

"We uphold the principles of mutual respect, equality and mutual benefits. Countries, regardless of their size and scale, have equal rights to develop and use AI," Zhaohui said.

China is among 28 countries that are signatories to the Bletchley Declaration, which calls for an urgent global consensus on managing various AI risks.

Tesla CEO Elon Musk told ITV news that "a little bit of fear" of AI is "probably wise."

"The very worst could be extremely bad, but I think the probability of extremely bad is low," Musk said on the social media network formerly known as Twitter, which he has owned for a year now.

On Wednesday, U.S. Vice President Kamala Harris, who will be attending the AI Safety Summit on Thursday, announced additional measures taken by the U.S. government to promote secure AI. They include the launch of a new AI Safety Institute - a multigovernment proposal to promote the responsible use of AI in the military (see: Ensuring Privacy in AI Systems Is Critical, VP Harris Says).


About the Author

Akshaya Asokan

Senior Correspondent, ISMG

Asokan is a U.K.-based senior correspondent for Information Security Media Group's global news desk. She previously worked with IDG and other publications, reporting on developments in technology, minority rights and education.
