ISMG Editors: Latest Updates on AI Tech, Regulations
Also: Key Takeaways From UK AI Summit; Security Insights From India
In the latest weekly update, editors at Information Security Media Group discuss the shaping of responsible artificial intelligence governance, major takeaways from the U.K. AI Summit, and an overview of the main themes and insights from ISMG's recent Mumbai Summit.
The panelists - Anna Delaney, director, productions; Tony Morbin, executive news editor, EU; Rashmi Ramesh, assistant editor, global news desk; and Akshaya Asokan, senior correspondent - discussed:
- Recent developments on AI governance and potential regulations;
- Highlights from the U.K. AI Summit, which included participation from U.S. Vice President Kamala Harris and OpenAI CEO Sam Altman;
- Highlights from the ISMG Mumbai Summit, which included sessions on supply chain management, zero trust and API security.
The ISMG Editors' Panel runs weekly. Don't miss our previous installments, including the Oct. 27 edition on business and cyber resilience amid the Israel-Hamas war and the Nov. 3 edition on the record surge in ransomware.
This transcript has been edited and refined for clarity.
Anna Delaney: Hello and welcome back to the ISMG Editors' Panel. I'm Anna Delaney, and this is a weekly spot where we examine the most important news and events in cyber and InfoSec right now. I'm thrilled to be joined by my colleagues Akshaya Asokan, senior correspondent; Rashmi Ramesh, assistant editor, global news desk; and Tony Morbin, executive news editor for the EU. Tony, you have been working hard on a generative AI survey. Tell us about it; what have we learned?
Tony Morbin: I will get to that. We recently celebrated Bonfire Night with bonfires and fireworks here in the U.K., commemorating how Guy Fawkes tried but failed to blow up the Houses of Parliament in the Gunpowder Plot back in 1605. As a kid, I really enjoyed Bonfire Night, and thanks to an old children's encyclopedia - which used the recipe for gunpowder to explain percentages - plus a chemistry set and a local chemist for extra supplies of potassium nitrate, I made my own gunpowder and fireworks. Some people might say, "No, surely not!" but I have the book here. I don't know if you'll be able to see the page, but there's the recipe for making gunpowder. Many will consider that kind of knowledge dangerous, and as Jen Easterly of CISA noted in her Vanderbilt Summit keynote, the internet has been used to spread all sorts of dangerous information. She cited a treatise on how to make a bomb in the kitchen of your mom, which was used by the Boston Marathon bombers to kill people. The fact is, we launched the internet with no security controls, and we added social media without considering any negative impacts. We now see AI exceeding Moore's Law, doubling in capability every six months with no agreed controls. AI is going to make knowledge more widely available, and without controls, that will include how to make cyberweapons, bioweapons and worse - including, potentially, the end of humanity. The time for action is now. At the recent U.K. AI Safety Summit, which Akshaya is going to discuss, U.S. Vice President Kamala Harris noted how AI could enable cyberattacks at a scale beyond anything we've seen before. In the resulting Bletchley Declaration, governments agreed to tackle AI safety risks. In the U.S., the Biden administration's executive order on AI safety includes a requirement for tech firms to submit test results for powerful AI systems to the government before they're released to the public.
AI-powered cyberattacks are now on the radar of governments; however, without compulsion, what are organizations doing to mitigate the risks of AI? In a recent ISMG survey of both business leaders and cybersecurity professionals, it wasn't surprising that business leaders were less cautious about the introduction of AI, keen to harness the potential productivity gains. More than half of the respondents deploying AI said they've achieved gains of more than 10%, and often considerably more. The use cases were wide-ranging: automation, back-office functions, marketing and content creation, patching and vulnerability management, risk and incident management, and research and diagnosis of medical conditions. Unfortunately, AI regulations are changing too fast for users to keep up with, and they lack uniformity on a global scale. It wasn't a surprise to find that only 38% of business leaders and 52% of cybersecurity leaders say they understand which AI regulations apply to their vertical sector and geography. It was concerning to see that only 30% of respondents reported having playbooks for AI deployment, even though 15% of respondents have already deployed AI and a further 60% have plans to do so. Respondents' concerns about AI risks were wide-ranging, but the top one was leakage of sensitive data by staff using AI, cited by 80%; followed by ingress of inaccurate data, cited by 70%; with third place going to AI bias and ethical concerns, cited by about 60% of respondents. A multitude of mitigation strategies were mentioned, from encryption and blocking software to blacklists banning access for certain AI tools, user groups or individuals, or whitelists of those that would be allowed.
Contradicting the plans of 70% of respondents to use AI, it was significant that 38% of business leaders and 48% of cybersecurity leaders said they intend to continue banning the use of generative AI in the workplace, and more than 70% intend to take a walled-garden approach to AI going forward. Both of these suggest a desire to return to the wall and moat of the past, as businesses strive to regain control of the AI genie that's been let out of the bottle. The reality is that generative AI is useful, and it's becoming as necessary as search engines. Bans on employees using generative AI are likely to result in users circumventing them with shadow AI - lesser-known brands whose security levels and origins are unclear, and which are potentially more vulnerable to poisoning, if not outright supplied by adversaries. Thus, rather than banning AI, we need to rapidly mature the guardrails and regulations now under discussion, implement them and enforce them - and do it now. There's no going back, and we need to embrace this new opportunity with zeal, but temper it with clear-sighted realism rather than blind enthusiasm.
Delaney: That's a great overview! Looking ahead, Tony, what will you have your journalistic eye on when it comes to the key topics and developments in AI governance and regulation in the near future?
Morbin: There are two totally different things. One is the big long-term unknown - the old story of the grain of rice doubled on each square of the chessboard until it bankrupts the kingdom. AI is doubling in capability every six months now, and goodness knows where that will take us. Then there's the practical fact that people are implementing AI right now. We need security measures we can introduce today: blocking the ingress of harmful data, blocking the egress of private and confidential data, greater use of encryption where it's not being used, and setting out exactly what is and isn't allowed. Right now, we're still working out what our rules are, and we need to sort out the practical ones quickly, because people are using it - myself included! We're jumping in and giving it a go with no restrictions. So there's the practical question of now, and then the long-term thinking of where this is all going.
Delaney: That leads very smoothly into Akshaya's segment. Akshaya, you were very busy reporting on the U.K. AI Safety Summit, which took place at Bletchley Park last week. What were the key takeaways?
Akshaya Asokan: Last week the U.K. government concluded the world's first-ever summit on artificial intelligence risk and safety - the U.K. AI Safety Summit. We saw heads of state from 28 countries, as well as founders and CEOs of leading AI companies, come under one roof to discuss risks tied to frontier artificial intelligence systems. Among the attendees were U.S. Vice President Kamala Harris; Chinese Vice Minister of Science and Technology Wu Zhaohui; OpenAI CEO Sam Altman; and DeepMind co-founders Demis Hassabis and Mustafa Suleyman, among other notable names. The event was historic, firstly because the venue, Bletchley Park, was the principal center for Allied codebreaking during the Second World War. Secondly, this was the first time that big names from governments, the technology sector and civil society came together to acknowledge the risks posed by AI. The timing of the event was also important for the British government: It came as the European Union is set to finalize what will be the first-ever comprehensive AI regulation - the EU AI Act - and just after the Biden administration announced its new executive order on artificial intelligence in the U.S. The event, in a way, helped the U.K. government strategically position itself in the current conversations about AI regulation and governance. Two notable developments stood out: how heads of state used the platform to shed light on their national AI strategies, and how the event exposed a split in the AI community, with backers of open-source AI criticizing the developers of closed-source AI - proprietary applications trained on private data, such as those from OpenAI and DeepMind - accusing them of fearmongering in their calls to restrict the open-sourcing of AI models. The other big takeaway was how nations, including China, came together to stress the need for a common global understanding of AI risks and for unified efforts to control those risks.
As much as these attendees acknowledged that AI does pose risks, such as job loss and bias, the common understanding was that the existential risks posed by the technology are not imminent. Another interesting thing was that none of the big companies used the platform for product pitches or big announcements. Rather, they stressed regulating the technology without stymieing innovation in the field. Overall, the response to the event was positive, with attendees lauding the U.K.'s success in bringing important AI stakeholders under one roof. The event was successful enough that similar events will soon be held in South Korea and later in France.
Delaney: That was a perfect overview. A couple of questions for you, and maybe Tony will want to chime in as well. Rishi Sunak referred to AI as a co-pilot. From your journalistic perspective, what do you think this means? How does this metaphor reflect the government's approach to AI?
Asokan: It's a very apt way to put it, because there's so much concern and fear around the technology. The first fear - for journalists, or for anybody in the field writing and creating content, be it video or art - is whether AI will take over your job. As the head of state, his responsibility is to assuage that fear and assure people: "We're here; we are going to help you." That is what he meant by co-pilot. Instead of fearing this technology, adopt it in your daily life so that it can improve your job. He put it rightly, and it reflects the sentiment of the event: We know there is risk, but there are also ways to overcome those risks.
Morbin: Akshaya, I am quite interested in the situation; you've done a lot of reporting on AI regulation. The EU was taking a more cautious approach, with more things being banned, such as real-time facial recognition, compared to the U.S., which was a little more gung ho in terms of "let's get this AI out there." The U.K. was taking a middle position. However, the critics of the U.K. I'd heard were saying that because our biggest trading partner is the EU, we're going to have to do whatever they do. Is that what came across from the meeting? Is the U.S. taking the lead? Is the EU? Or does the U.K. genuinely have an alternative?
Asokan: The U.K. is positioning itself in the middle in terms of governance: We are looking into risk while also promoting innovation and development, because all governments realize how this technology can bring money into their economies. If you look at the media coverage, some reports say the U.S. government's announcements overshadowed the U.K. government's AI initiatives. Rightly so, because Kamala Harris, along with U.S. Secretary of Commerce Gina Raimondo, announced a number of initiatives, in addition to the U.S. executive order on artificial intelligence. They did manage to get some of the limelight, but by the end of the second day, the overwhelming response from attendees was that the U.K. managed to pull off a successful event that saw participation from tech, government and civil society. There was some criticism that the number of attendees was limited and the venue was small, but for a first-time event, this was successful, and the model may be replicated on a larger scale, maybe in France, and we'll learn as we go.
Delaney: What was the response to China's participation in the summit? How did their minister's call for global cooperation in AI governance influence the discussions?
Asokan: Ahead of the U.K. AI Safety Summit, there were concerns, especially from Western quarters - the U.K. leadership and the U.S. - about Chinese participation. The fear was: Why are we having the Chinese at these events, when we know how China is using artificial intelligence to spy on its citizens and even surveil the Uyghur communities, which is a very controversial application of AI? However, the Chinese vice minister for science and technology delivered a speech stressing nations' right to deploy the technology, without any other nation saying you cannot deploy it. It was a mild message to the West: We can also deploy this technology without intervention by other nations. They stressed the right of nations to deploy the technology while also acknowledging the existential risk posed by AI.
Delaney: That's not the end of the conversation, I'm sure, because Rashmi, we have more AI to talk about at the Mumbai Summit. Akshaya, that was great - thank you so much for that insight. Rashmi, you were in Mumbai this week, and from what I hear, the event was a great success. Tell us about it. What were the key takeaways for you?
Rashmi Ramesh: Inevitably, AI is woven into every conversation we have right now, although our summit was not as focused on AI as the U.K. AI Safety Summit was. We had about 700 executives attend the event in person. The first session of the day set the tone for the rest of it. The speaker was Sameer Ratolikar, CISO of HDFC Bank, India's largest private-sector bank. He spoke about how CISOs must go beyond what is expected of them from a technical standpoint if they want to be taken seriously; how they can shape a better future for themselves and the industry; and how they need to focus on communication and inclusive leadership skills and become role models, all while doing their primary job of minimizing attack surfaces while responding to market opportunities. Then we had the keynote from Dr. Yoginder Talwar, CISO of National Informatics Centre Services, which provides IT services to the government. He spoke about the Indian government's Digital India Act, which aims to secure the internet and develop a future-ready cybersecurity framework. He gave the security experts in the audience insights on cybersecurity challenges, opportunities and defense strategies in the country, and on how the government is leveraging AI and ML tools to strengthen security. We had a total of 26 sessions running in two parallel tracks. I want to highlight a couple that I got great feedback on from the audience. One was from Dr. Bhimaraya Metri, director of the Indian Institute of Management Nagpur. He spoke about issues that have existed for years about communicating with the board, but with a twist for the age of AI: How do your tactics change? What can you do differently? Why might the strategies you've deployed so far not work as well now?
There was also a fireside chat on incident reporting requirements and cyberthreat information sharing - a debate between a CISO and a CFO on how they view the same situation so differently, where the friction arises and how they work around it.
Delaney: I love those debates; there's always a lot to learn from them. On the topic of AI, what exactly were the security practitioners asking about? Where do they need more knowledge or insight?
Ramesh: As I mentioned, AI was interwoven into every conversation we had, and I overheard a lot of CISOs discussing it among themselves. When I spoke to attendees, every session they liked or took something away from had an AI element in it. One was about how traditional rule-based SIEMs struggle to keep up with the evolving threat landscape because they rely primarily on predefined rules and signatures that may not capture emerging attacks. The focus of that session was on how AI is likely to make this problem worse, but also on how AI can support behavior-based threat detection, which can help address the issue. Another session covered where traditional threat monitoring tools are falling short, how AI can help automate some of SecOps, and how weaving what they call AIOps into IT operations can practically help mitigate some of the age-old cybersecurity issues we have. That's not to say everything was about AI. While it was a prominent conversation topic, there was also quite a bit of interest in zero trust, supply chain security, cyberinsurance and ransomware. I'll close the loop with my favorite session, on AI in the age of banking, led by Professor Janakiram, director of the Institute for Development and Research in Banking Technology. Very few people have the decades of experience in financial services that he does, are able to weave emerging tech solutions into traditional problems the way he does, and can communicate all of it as clearly as he does.
Delaney: It sounds like a great event, and you conveyed it excellently. Thank you so much, Rashmi. Finally, just for fun: In the age of IoT and smart devices, what's the weirdest or most unexpected item you've come across that can be hacked?
Ramesh: Baby monitors. I get the spying part, but I'd just give up if the information gathering involved voluntarily listening to babies cry for hours on end - babies that are not even yours. That would be mine.
Delaney: That's pretty creepy. Akshaya?
Asokan: I was surprised that you could hack 3D medical scans. I was thinking: Why would a hacker target a medical scan? What do they gain from it? The report was based on research: The researchers hacked into the scans and used artificial intelligence - a GAN, or generative adversarial network - to create fake tumors. Then they fooled an AI diagnostic system with the AI-generated tumors. That was interesting to me.
Delaney: Yeah, scary stuff, and collecting data on the patients as well. Tony?
Morbin: Any connected device can potentially be hacked, however private. I'd hark back to 2017, when Darktrace reported how an internet-connected fish tank in an unnamed casino was hacked. The attackers moved laterally through the network and were able to steal data and send it off to Finland. Attacking a casino via its fish tank, I thought, was incredible.
Delaney: Impressive! Mine is a story from 2013 that resurfaced in 2023. You may remember reading about how smart toilets were found to have security vulnerabilities: The settings could be tampered with, and hackers could collect data on user habits. Via the built-in Bluetooth radio, hackers were able to remotely flush the toilet, open and close the lid and, more concerningly, activate the built-in bidet function. It gives "backdoor vulnerability" a whole new meaning!
Morbin: Oh dear! I think apart from Japan, I've not seen those.
Delaney: I've not used one either. However, all connected devices can be hacked in some way. Well, Rashmi, Tony, Akshaya, this was absolutely brilliant! Thank you so much!