
OpenAI's Sam Altman and other tech leaders join the federal AI safety board

It's like turkeys being appointed to the Christmas oversight board.


OpenAI CEO Sam Altman, Microsoft chief Satya Nadella and Alphabet CEO Sundar Pichai are joining the government's Artificial Intelligence Safety and Security Board, according to The Wall Street Journal. They're joined by Nvidia's Jensen Huang, Northrop Grumman's Kathy Warden and Delta's Ed Bastian, along with other leaders in the tech and AI industry. The AI board will work with and advise the Department of Homeland Security on how it can safely deploy AI within the country's critical infrastructure. The board is also tasked with drawing up recommendations for power grid operators, transportation service providers and manufacturing plants on how they can protect their systems against potential threats that could be brought about by advances in the technology.

The Biden administration ordered the creation of an AI safety board last year as part of a sweeping executive order focused on regulating AI development. On the Department of Homeland Security's website, the agency says the board "includes AI experts from the private sector and government that advise the Secretary and the critical infrastructure community." Homeland Security secretary Alejandro Mayorkas told the Journal that the use of AI in critical infrastructure can greatly improve services — it can, for instance, speed up illness diagnoses or quickly detect anomalies in power plants — but it also carries significant risks, which the agency is hoping to minimize with the help of this board.

That said, one can't help but question whether these AI tech leaders can provide guidance that isn't meant to primarily serve themselves and their companies. Their work centers around advancing AI technologies and promoting their use, after all, while the board is meant to ensure that critical infrastructure systems are using AI responsibly. Mayorkas seems confident they'll do their jobs properly, though, telling the Journal that the tech leaders "understand the mission of this board," and that it's "not a mission that is about business development."