Proof of Concept: Assessing the US Executive Order on AI
Also: Improving Security Review Processes, AI Talent Acquisition Challenges

Anna Delaney • November 17, 2023

In the latest "Proof of Concept," Sam Curry, vice president and CISO at Zscaler and a CyberEdBoard member, and Heather West, senior director of cybersecurity and privacy services at Venable LLP, join editors at Information Security Media Group to discuss the implications of President Joe Biden's executive order on AI, how AI enhances security review processes, the potential for AI to spot software flaws, and the challenges of AI talent acquisition.
Curry and West, along with Anna Delaney, director of productions at ISMG, and Tom Field, senior vice president of editorial, share:
- Impressions of the recent executive order on the safe, secure, and trustworthy development and deployment of artificial intelligence;
- Issues related to the order that may have been overlooked or not adequately addressed;
- How government organizations and vendors supplying to government can enhance their security review processes, especially in the context of red-team testing for AI systems.
Curry, a member of the CyberEdBoard, previously served as chief security officer at Cybereason and as chief technology and security officer at Arbor Networks. Prior to those roles, he spent more than seven years at RSA - the security division of EMC - in a variety of senior management positions, including chief strategy officer, chief technologist, and senior vice president of product management and product marketing. Curry has also held senior roles at MicroStrategy, Computer Associates and McAfee.
West focuses on data governance, data security, digital identity and privacy in the digital age at Venable LLP. She has been a policy and tech translator, product consultant and long-term internet strategist, guiding clients through the intersection of emerging technologies, culture, governments and policy. She is a member of the CyberEdBoard.
Don't miss our previous installments of "Proof of Concept", including the Aug. 31 edition on securing digital government services and the Oct. 26 edition on overcoming open-source code security risks.
Transcript
This transcript has been edited and refined for clarity.
Anna Delaney: This is Proof of Concept, a talk show where we invite security leaders to discuss the cybersecurity and privacy challenges of today and tomorrow, and how we could potentially solve them. We are your hosts. I'm Anna Delaney, director of productions here at ISMG.
Tom Field: I'm Tom Field, senior vice president, editorial, at ISMG.
Delaney: Today we are talking about an important document unveiled by U.S. President Biden a few weeks ago: the Executive Order to ensure the responsible development and deployment of AI, with a particular focus on cybersecurity.
Field: I was calling it the AI EO, but it sounds too much like a nursery rhyme, so, I stopped saying that.
Delaney: One area of great interest raised in the order is how we can harness AI to bolster defenses against cyberthreats and protect sensitive AI-related intellectual property. We will be talking about those areas in depth with our guests today. However, it's a 111-page Executive Order, so it's comprehensive. There seems to be cautious optimism among many lawmakers, civil rights organizations and industry groups, who say it's a positive step but only the beginning, and, like many of these orders, there is a consensus that it has limitations. What are your thoughts, Tom? What have you been hearing from people?
Field: You're right, there are a lot of words there. Is there traction underneath those words? There's good and there's bad with an Executive Order. The good is the attention it brings to a topic, as the Executive Order two years ago did for zero trust, multifactor authentication and software supply chain security. The bad is that Executive Orders often represent unfunded mandates: How are you able to get the resources you need to make meaningful change? However, anytime you take a topic like AI and try to put some structure, governance and regulation around it, starting that conversation can't be a bad thing.
Delaney: In order to get a better understanding of what the EO recommends, I'm pleased to welcome, to the ISMG studios, Heather West, senior director of cybersecurity and privacy services at Venable LLP, and Sam Curry, CISO at Zscaler.
Field: Heather and Sam, what's your overall impression? What do you think of the Executive Order? Do you think it's going to have some impact on the landscape? Is it just more talk?
Heather West: It's definitely going to have an impact, Tom. This is expansive. Anna, you mentioned it's 111 pages long; it touches every part of government, and it has the potential to change how we think about and operationalize cybersecurity using artificial intelligence. It touches a huge number of topics. The administration was trying to get everything in one go, and it did a good job! It covers a lot, but that means it's aspirational. We're going to see what the government does with those resources you mentioned, Tom, over the next year. There are more than 90 actions in the order, and we're tracking a lot of them.
Sam Curry: I have to agree with you, Heather, on all points. It's big and expansive. It uses a lot of different tools across those 90 actions: Some are safety- and security-related - safety is often kept separate from security, and bringing the two together is a really good move from a policy perspective - some are privacy-related, some relate to workers, and some establish a plan for a plan, which is different as well. Big parts of it are about how the government drives its own use: How does it come up with guidance and make sure the right tools are purchased and that hiring happens the right way? There are also national security issues in here, because part of it says you shouldn't use AI to build biological materials that could be used in warfare. That can be done without AI, so it raises the question: How is it going to be policed or enforced? If it can be done without AI, what is the fear? As in anti-fraud, the fear is that AI could be an accelerant or a catalyst for rates of change or innovation we wouldn't expect in those spaces. That might be worth diving into in discussion too, but, as Heather said, it is broad and sweeping. The thing I would emphasize is that people shouldn't confuse this with other forms of governmental guidance from the other branches of government, whether it's being tested by the judiciary or taking the form of legislation. This is leadership, and a big part of it is establishing how we're going to do things domestically, how the government is going to run its own house and how we're going to act on the world stage. How do we both inspire innovation and, at the same time, put boundaries down? That's a delicate game. Additionally, 111 pages may sound like a lot, but it's very little for the breadth of it.
West: You mentioned something important in there, Sam. A huge part of this order is how the government is going to use AI. One of the more interesting things, which I think will be incredibly impactful, is that the order directs agencies not to default to banning or blocking particular kinds of AI, but instead to think about risk and to manage that risk through governance and processes. That will necessarily have an impact on how industry manages risk as well.
Curry: On the private sector side, that default is the one many companies have taken. They did this with cloud and search engines back in the day, and neither was a good strategy. In the private sector, I often say, "Look, your industry may get disrupted by AI, or it may not. In the cyber sector, you're going to have to figure out how you interact with it and how it affects the tools of your practice or trade." It's great that the government is acknowledging that this is something it's going to face.
Field: As broad and sweeping as this is, is there anything missing or overlooked? Any area where either of you wants more clarity?
West: I will answer a slightly different question. One of the things I appreciate about this order is that it knows what we don't know. A huge amount of what it calls for is investigation, information, comments and processes or rulemaking, so that we have a better understanding of AI and how it's being used in government and in industry - everything from asking agencies how they're using AI now and putting together a use-case inventory, to thinking through the best standards around testing and evaluation, to what reporting should look like for particularly advanced AI being developed in the U.S.
Curry: What's missing for me is how far they could go in the section on standards. It does call for standards work to be done, and it addresses some big areas. However, look at something like certified ethical hacking. I was in a group with a number of regulators from around the world - a year ago now - and people were asking what we were going to do about AI offense. The initial response was, "Hey, it should be banned outright." I said, "No, you need certified ethical AI use; otherwise our red teams aren't going to look much like the actual attackers, and our blue teams are not going to be sufficiently good on defense." It's up to the other participants - academia and the private sector - to start filling in the gaps there. What I liked about this is that the boundaries are there.
West: The question is what this turns into, and how much information we need to gather and put in one place to be able to do all of that well. The standards piece is incredibly important there. It's going to set the stage, and it's intended to guide government use and industry best practices. However, I appreciate that they are not pretending to know everything yet. I don't think anyone should.
Curry: I'm happy that some of the more hysterical hype from earlier in 2023 was not a part of this. It's clear that a lot of thought and discussion went into it. This is not a piece of policy that was done quickly or thrown over the wall; it has had many hands and many voices touching it. It's something the industry can get behind, and it's good for the country and probably for everyone.
Delaney: Sam, you touched upon this briefly there, but the EO requires agencies to conduct security reviews and red team testing for AI systems. In this context, Sam, how can organizations enhance their security review processes?
Curry: When you say, "How can organizations-" do you mean government or "how can those who sell to the government"? Because that could be an interesting distinction.
Delaney: That's true. I was thinking government, but maybe you have perspectives on both.
Curry: We should tackle them in order. What can government organizations do? Not all government organizations are created equal. There's a big difference between defense and civilian, and between the big agencies and the little ones. Not all agencies have the same resources to do this. How do they get access to the right resources? How do they pool? Who's going to help them? Organizations like CISA have a big role to play, and how the government responds to make sure all parts of the government - all agencies and departments - have access to quality resources is going to be a big deal. If you're the CIO, CISO or CTO of a large government department or agency with good resources, you're going to respond one way, and I think we'll start to see most of them talking about how they're going to do that. However, if you're in a small agency with very limited IT resources, it's going to be very difficult to figure out how you do that. I want to see how the government responds to that.

If you're an organization, either in the defense industrial base or in the private sector, and you've been struggling with FedRAMP, how do you get ready so that your uses of AI are transparent and visible? Have your policies done, because what this says is: If this is the standard the government is going to be held to, you have to at least be adhering to it. Most of the private sector is wrestling with how to do this. I have yet to speak with a C-level executive who hasn't been in on the discussion: What do we do with AI? What do we do in our core business? How does it impact us? Where do we use it, and where don't we? Do we block it at the door? Because, as we said at the start of this conversation, in the private sector that has been the response.

I've been telling folks that in many cases it's time to consider a panel and, for large companies, a review board approach. The idea is to get together trusted advisors, experts and internal constituents so that you can review when and where you use the technology, and think of it like reduction rather than induction. If you want to collect data, make sure you collect the right kinds, and when you're building your application, make sure it's an approved use in the organization. I was talking to someone the other day and said, "Yes, we can de-identify data, meaning we can anonymize it, and yes, you can re-identify it." They said, "Why should we bother?" I said, "Because it still takes energy to re-identify, and it's still visible when you do. When somebody tries to re-identify data - meaning put the identity back into anonymized data - they leave a bigger forensic trail than they otherwise would, and it's harder to exfiltrate." These are things you can pick up on in security controls and in after-the-fact analysis and monitoring. So come up with a policy! Make sure you've got experts. If you're too small for that, look to your peers, your region, your industry and the ISACs for what you might be able to do.
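To make Curry's re-identification point concrete: Below is a minimal sketch in Python, assuming a keyed-hash pseudonymization scheme and a plain logging-based audit trail. The pepper, lookup table and function names are illustrative assumptions, not drawn from the executive order or any specific product.

```python
import hashlib
import hmac
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("reid-audit")

# Illustrative only: a secret pepper held by the data owner.
# In practice this would live in a key management service.
PEPPER = b"example-secret-pepper"

# Mapping from pseudonym back to identity, kept in a separate,
# access-controlled store so re-identification is an explicit act.
_reid_table: dict[str, str] = {}

def pseudonymize(identity: str) -> str:
    """Replace an identity with a keyed-hash pseudonym."""
    token = hmac.new(PEPPER, identity.encode(), hashlib.sha256).hexdigest()[:16]
    _reid_table[token] = identity
    return token

def re_identify(token: str, requester: str, reason: str) -> str:
    """Re-identification requires a requester and a reason and is logged -
    the 'forensic trail' Curry describes."""
    log.info("re-identify requested: token=%s requester=%s reason=%s",
             token, requester, reason)
    return _reid_table[token]

# Usage: the analytics side only ever sees tokens.
t = pseudonymize("jane.doe@example.gov")
print(t)                                                    # safe to share
print(re_identify(t, "analyst-7", "fraud investigation"))   # audited
```

The design choice the sketch illustrates is that anonymization does not need to be irreversible to be useful: making reversal a logged, access-controlled operation is what creates the monitoring signal.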
West: The resource question, especially for smaller agencies or for agencies that just have smaller IT functions, is a very real one. Some of those folks are going to look to the procurement rules to let their vendors do due diligence for them. Other folks are going to see the opportunity to move a little more slowly and intentionally; there's nothing wrong with that. They do need to look, under the OMB guidance, at the systems they have in place already. There's a deadline - sometime next year - to terminate contracts for folks that aren't in line with the minimum practices elaborated in the OMB guidance, which is a draft at the moment. The really interesting thing here is that putting this process in place has the potential to actually let them go fast. If they do their due diligence up front and sit down and think through how they're going to write an AI impact assessment and how they're going to get the information they need to move forward, that has the potential to streamline some of these processes at smaller and under-resourced agencies. It may have an impact especially for folks who aren't trying to build it all themselves. It's worth saying that a lot of these processes are not perfect; we're going to learn a lot about AI impact assessments, some of the governance questions and what an AI strategy should look like. However, they really could make this all move faster, and they could make the government more efficient. We could learn a lot about where the right places to plug in AI are.
Curry: If this all feels like some other technology wave - if, from an issue and adoption perspective, it feels like what we went through with cloud, search engines, instant messaging, API security, open source and software bills of materials - that's because it is the same set of issues. The difference here is the power of the technology, its breadth and the rate of change. Having been through this, I want to emphasize that this isn't completely novel territory. However, there are also lessons to be learned from where we failed in the past - culturally and societally - in dealing with these things.
Field: Heather, question for you about cybersecurity. What do you think about the EO's aim to use AI to spot software vulnerabilities? I welcome your thoughts on that.
West: If you read the entire EO, it has pieces and actions at all sorts of levels. Some are very high-level - examining the impact on the labor market and putting out a report about whether we're ready to meaningfully pivot some of these job sectors. Then there are sections like this one that say we should evaluate whether we can use particular kinds of advanced AI, specifically LLMs, to look for vulnerabilities in government software. It's laudable that they didn't call for it immediately. It's an idea with incredible promise, and AI is probably going to be an incredible augmentation for cybersecurity professionals - it's going to be more helpful over time - but I'm not sure that the technology is there yet. We're going to get some really interesting information about where it succeeds and where it doesn't as the government looks at how it can leverage LLMs and advanced AI in a defensive cybersecurity mission.
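For readers curious what such an evaluation might look like in practice, here is a minimal sketch, assuming the `openai` Python client. The model name, prompt and the deliberately vulnerable snippet are placeholders for illustration, not a methodology from the EO, and any real evaluation would need curated test suites and human review of the output.

```python
# Minimal sketch: asking an LLM to flag potential vulnerabilities in a
# code snippet. Assumes the `openai` Python package; model name and
# prompt wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = '''
import sqlite3
def get_user(conn, username):
    cur = conn.cursor()
    # String formatting in SQL - a classic injection risk
    cur.execute(f"SELECT * FROM users WHERE name = '{username}'")
    return cur.fetchone()
'''

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable model
    messages=[
        {"role": "system",
         "content": "You are a code reviewer. List likely security "
                    "vulnerabilities with line references and severity."},
        {"role": "user", "content": SNIPPET},
    ],
)
print(response.choices[0].message.content)
```

Measuring where this succeeds and fails - false positives, missed flaws, hallucinated line references - is exactly the kind of information West suggests the government's evaluations will surface.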
Curry: I love that vulnerability mention. What went through my head is what has happened in other areas of conflict where AI has been applied, like playing chess and playing Go. We had the good fortune to speak with Garry Kasparov about what it was like playing against the early chess engines, and he said, "You know what, I won, and then I lost once, and it was really heartbreaking. Then I won more than I lost, then I lost more than I won, and eventually it beat me. For a while, we were in an era of machine-assisted chess playing. However, years later, the machines were way better." What I heard recently is that the AIs that play chess and Go now use strategies that human grandmasters simply can't understand; in postgame analysis, the logic of the moves on the board is completely unfamiliar. What does that mean in vulnerability land? It means that vulnerabilities will pop up in things we have not predicted yet. In other words, weaknesses that can be exploited will turn up in components, or combinations of components, that humans would not have thought to look at or patch. They could turn up in areas that are hard or impossible to patch or configure. Part of this order is getting ahead of that - and I think it was the same with the biological component, incidentally. Where can we be hurt? How do we use AI to get ahead of that in defense? It's worth mentioning that these forms of conflict are an asymmetric game, and it isn't necessarily about discovering zero-days and vulnerabilities.
West: That's a useful reminder. AI is good at a lot of these things, and one thing the order doesn't explicitly state is that cybersecurity is already using AI, including advanced AI, in important ways; it's identifying some new ways to use it. However, because of science fiction, cultural norms and memes, we sometimes forget as we anthropomorphize AI that it thinks differently than a brain does. It's going to identify different vulnerabilities, find different attack vectors and find new threats. AI is also going to be better able to find the weird pieces that are hard for us to wrap our own brains around. It will get more and more skilled at offense and at defense as humans develop these technologies. It'll be an ever-evolving space, and that's going to be one to watch.
Delaney: It's worth touching on AI and talent acquisition. What are the challenges? What are the opportunities that you foresee in attracting, hiring and retaining AI talent with a strong focus on security expertise?
Curry: I care about past track record to some degree, but I care about attitude and aptitude more than the hard skills; I care that someone can hit the ground running, learn and adapt. Because the pace of change is so fast, I need to know that they can not only keep up with it but add to it. I also care about cross-disciplinary expertise. When we're talking about workforce impact, a lot of the routine, repeatable tasks are what the machines are good at, so the power skills become important - things like communication and things we might associate with the liberal arts. I would like to have people who understand business problems, tech and ethical issues combined, because all of those are necessary to innovate and drive forward. I'm interested in people who have a philosophy background, a computer science background or a politics background and can communicate these things - innovators who like tackling problems. That's hard to get, so when it comes down to it, I'm willing to take people who learn fast and hit the ground running over someone with a perfect resume.
West: There's a cybersecurity workforce gap. We cannot find all of the talent we need to fill the roles that we have open, and that's doubly true for the government. There are a bunch of provisions in the EO about how they're going to close that gap and make sure that if they're opening up the use of AI in government - particularly the use of advanced AI - they can staff it appropriately. One of the places it's really interesting is that there are pieces of the EO about resourcing these tools with people and money and modernizing the IT infrastructure of government to make this a more appealing job. If I'm looking for a job, I can go work for someone in private industry who has all the bells and whistles but not the same mission, or I can go to an agency that is underfunded and has a modernization problem but an amazing mission. How do we make the latter a compelling job and make sure we staff it well? I'm glad they put talent acquisition in there, because we can plug AI in any number of places, and we can put reports and due diligence in place to evaluate whether it does what it should do. But if we don't have the people in place to deploy it, oversee it and evaluate it, it's going to be a bit of a disaster. I am hopeful that this AI talent search is successful.