Zico Kolter has been appointed to the board of directors of OpenAI
Kolter, a professor and director of the machine learning department at Carnegie Mellon, focuses his research on AI safety. In a post on its official blog, OpenAI says this makes him an “invaluable technical director for [OpenAI’s] governance.”
AI safety has been a significant challenge for the company. Kolter’s appointment comes just a few months after the departure of several prominent OpenAI executives and employees who worked on safety, including co-founder Ilya Sutskever.
Sutskever’s former “Superalignment” team, which was responsible for developing ways to govern “superintelligent” AI systems, saw a wave of resignations after the team, according to a source, was denied the computing resources it had initially been promised.
Kolter will also serve on the Safety and Security Committee of the OpenAI board, alongside directors Bret Taylor, Adam D’Angelo, Paul Nakasone, Nicole Seligman, CEO Sam Altman, and OpenAI technical experts. The committee makes safety and security recommendations for all of OpenAI’s projects.
However, as we previously noted in a May article, the committee is composed primarily of insiders, which has led critics to question how effective it can be.
Taylor, chairman of the OpenAI board, stated, “Zico’s significant technical expertise and unique perspective on AI safety and robustness will help us ensure that artificial general intelligence benefits all of humanity.”
Previously the chief data scientist at C3.ai, Kolter earned his PhD in computer science from Stanford University in 2010, then completed a postdoctoral fellowship at MIT from 2010 to 2012. His research includes work showing that existing AI safeguards can be circumvented using automated optimization techniques.
Kolter, who is currently the chief technical advisor at AI startup Gray Swan and the “chief expert” at Bosch, is no stranger to industry collaborations.