WHITE HOUSE SHOULD LOOK TO BOARDS’ GUIDANCE ON AI AND CYBERSECURITY – PART 2

October 31, 2023

The founder of the organization I am honored to lead was Dave McCurdy, the former Chair of the House Intelligence Committee. Based on his long career in government, Dave liked to say, “Government does two things well: nothing and over-react.” We are clearly, and rightfully, out of the “do-nothing” phase of government’s involvement in AI. Now we need to be mindful of the impulse to over-react.

In yesterday’s post (LINK) we noted that government’s role in developing AI policy is not to dictate how to manage these systems but rather to provide considered oversight of them. In this regard government’s role is similar to that of corporate boards, which likewise are not charged with managing their companies but rather with overseeing the management team.

The fact that advancements in AI are coming so rapidly only magnifies the need for government to strike the proper tone in its actions and to fully integrate the vastly larger, better-resourced, and more experienced private sector into the development of AI policy, perhaps especially as regards the cybersecurity implications of AI development and use.

Most leading private companies have already done considerable work defining the oversight role that boards of directors need to take with respect to AI, and specifically with respect to AI in cybersecurity. Of course, the mantra for cybersecurity policy development over the past 20 years has been that it must be done in a public-private partnership. However, the well-known reality (at least on the private sector side) is that the “partnership” model has too often been a rhetorical fiction rather than a true partnership.

The Biden Administration’s new AI Executive Order requires an extremely wide range of governmental entities to quickly develop guidelines and even regulations with respect to the use of AI in cybersecurity.  The truth is that most of these governmental agencies have very little expertise in AI (the government has 35,000 cybersecurity jobs it can’t fill – the AI cupboard is even more bare). 

Given the speed of recent and ongoing AI development and deployment, we argued it would be wise for government to abandon its traditional “not invented here” bias and integrate work such as that already published.

All federal agencies identified in the EO should start their analysis with the already published and considered guidance on the subject and then adapt it. They have neither the time nor, probably, the expertise to build from the ground up.

Getting the right answers begins with asking the right questions. Below we have printed the questions regarding the use of AI in cybersecurity proposed in the 2023 edition of the Cyber Risk Oversight Handbook, which was created by an industry-government coalition including the National Association of Corporate Directors, CISA, the FBI, the U.S. Secret Service, and the ISA (LINK). Let’s start with these.

QUESTIONS FOR ORGANIZATIONS CONSIDERING AI AND CYBERSECURITY

1. What is the company’s overall road map for implementing AI or ML in cybersecurity?

2. What are the cybersecurity goals the organization is trying to achieve by implementing an AI/ML solution?

3. How will the AI solution toughen the organization’s security stance, and how will that be measured?

4. What is the estimated harm the company will face if it does not deploy the AI system? 

5. What new vulnerabilities will the company face due to having deployed the AI system?

6. What type of cyber-attack is the system designed to detect, predict and respond to? 

7. Is the system prepared to detect and manage a ransomware attack?

8. How would deploying such a system impact the organization’s cybersecurity team? What are the benefits and risks associated with the tool’s use by the team?

9. Should the company expand or update the cybersecurity team?

10. How much would it cost to update the cybersecurity team for AI?

11. Are there positions the company no longer needs due to this new deployment?

12. Should the company create a new sub-team to monitor the outcomes of the AI deployment?

13. Will implementing the AI tool impact the company’s cyber insurance enrollment?

14. Are there potential legal consequences for either deploying or not deploying AI in the system?