AI policy requires US agencies to appoint chief AI officers

A new policy requires all US government departments to appoint a chief AI officer to ensure safer AI use.  

The new rule for government AI deployments is just one of three binding requirements in the new AI policy that Vice President Kamala Harris announced on Thursday.

The Office of Management and Budget (OMB) guidance requires chief AI officers with “the experience, expertise and authority” to oversee “all AI technologies used by that agency”.

It will also require government agencies to establish AI governance boards by the summer of 2024 to navigate AI use and “put the public interest first.”

The announcement comes five months after President Joe Biden signed an executive order directing US government departments to implement safety regulations and expand their AI workforces in light of rapid AI advancements.

Ahead of Thursday’s announcement, Harris told reporters: “I believe that all leaders from government, civil society and the private sector have a moral, ethical, and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm while ensuring everyone is able to enjoy its full benefit.”

Vice President Kamala Harris outlined three binding requirements in the unveiling of a new policy to improve the safety of AI use by the US government. (Photo by Lev Radin via Shutterstock)

An AI policy of “three binding requirements”

Another requirement mandates that AI officers publish an annual inventory of their agency's AI systems, an assessment of the potential risks those systems pose, and an explanation of how those risks will be managed. Any decision by an officer to omit a specific AI system from the inventory will need to be publicly justified.

Agencies must also ensure AI tools "do not endanger the rights and safety of the American people". Should the Veterans Administration want to use AI in its hospitals to help doctors diagnose patients, for example, it must first prove the systems would not produce "racially biased diagnoses", said Harris. The White House has also pledged to hire 100 AI professionals to promote safe AI use, according to OMB director Shalanda Young.

Agencies must have AI safeguards by end of 2024

The new AI policy defines safety-impacting AI systems as those used in "real-world conditions, to control or significantly influence the outcomes" of decisions and activities, such as election and voting infrastructure, emergency services, public infrastructure including water systems, autonomous vehicles, and the use of robots in "a workplace, school, housing, transportation, medical or law enforcement setting".

Agencies will have until 1 December 2024 to establish concrete safeguards ensuring their deployed AI systems do not endanger the safety or rights of US citizens. Agencies that fail to do so will be required to cease using those AI products, unless their removal can be shown to have an "unacceptable" impact on critical operations.
