Introduction by ISA President Larry Clinton
There is tremendous anticipation regarding the imminent release of a sweeping new Executive Order (EO) on the use of Artificial Intelligence from the Biden White House (LINK). Although the EO holds potentially game-changing reach, it needs to be understood in the context that government is largely playing catch-up in developing policy with respect to AI.
This is not a criticism of the government. Actually, it’s a good thing. In a dynamic market economy such as we have in the United States, it is the appropriate role of the private sector, which is far larger and better resourced than government, to lead with respect to technical innovation and management. Moreover, as we have argued extensively in our recent book Fixing American Cybersecurity, the entrepreneurial, incentive-based, and innovative nature of the US economy is one of our primary advantages over authoritarian states and their government-controlled economies as we face the challenges of the digital age.
As the White House, and government writ large, weigh in on AI (and eventually quantum) issues, it is critical for government to understand that its role is not to “manage” these technologies. Government’s role in the public sector is equivalent to the role of a corporate board in the private sector. That is, the role of the board/government is not management; it is the equally important role of oversight.
The reality is that many, perhaps most (maybe all), leading private sector organizations have been grappling with both the practicalities and the ethical implications of AI for a number of years. Given the stunning dynamism of AI’s growth, it would be especially wise in this case for government to abandon its traditional “not invented here” bias and seek to build on the considered work on oversight of AI that already exists in the private sector rather than start from the ground up.
An obvious place for government to start is with the Principles and Toolkits published earlier this year by the National Association of Corporate Directors, in partnership with the ISA, in the fourth edition of their Cyber Risk Oversight Handbook (LINK). Government itself contributed to the handbook through CISA, the FBI, and the US Secret Service, and the handbook has been continually assessed and found to generate positive cybersecurity impacts by organizations including PwC, MIT, and the World Economic Forum.
The latest edition of the handbook, published this past March, contains two sections specifically devoted to the issues organizations need to focus on as they consider their use of AI. Today’s post includes material from the first of these sections: general questions for oversight of the use of AI. Tomorrow’s post will provide material from the second AI section of the handbook, which deals specifically with cybersecurity.
This material ought to be the starting point for the government’s much-needed efforts to develop a coherent policy for the oversight of artificial intelligence.
NACD-ISA QUESTIONS FOR ORGANIZATIONS’ USE OF AI
- What are the specific goals the organization is seeking to achieve in deploying the AI system?
- What is the plan to build and deploy the AI or ML application responsibly?
- What type of system is the organization using: process automation, cognitive insight, cognitive engagement, or other?
- Does management understand how this system works?
- What are the economic benefits of the chosen system?
- What are the estimated costs of not implementing the system?
- Are there potential alternatives to the AI or ML system in question?
- How easy will it be for an adversary to attack the system based on its technical characteristics?
- What is the organization’s strategy to validate data and collection practices?
- How will the organization prevent inaccuracies that may exist in the data set?
- What would be the damage from an attack on the system, including the likelihood and ramifications of such an attack?
- How frequently will the organization update its data policies?
- What is the organization’s response plan for cyber-attacks on these systems?
- What is the organization’s plan to audit the AI system?
- Should the organization create a new team to audit the AI or ML system?
- Should the organization build an educational program for the staff to learn about the use and risks of AI and ML in general?