The AI Book
    Daily AI News
    Sam Altman’s Second Coming Sparks New Fears of the AI Apocalypse

    22 November 2023


    OpenAI’s new boss is the same as the old boss. But the company—and the artificial intelligence industry—may have been profoundly changed by the past five days of high-stakes soap opera. Sam Altman, OpenAI’s CEO, cofounder, and figurehead, was removed by the board of directors on Friday. By Tuesday night, after a mass protest by the majority of the startup’s staff, Altman was on his way back, and most of the existing board was gone. But that board, largely independent of OpenAI’s operations and bound to a “for the good of humanity” mission statement, was critical to the company’s uniqueness.

    As Altman toured the world in 2023, warning the media and governments about the existential dangers of the technology that he himself was building, he portrayed OpenAI’s unusual for-profit-within-a-nonprofit structure as a firebreak against the irresponsible development of powerful AI. Whatever Altman did with Microsoft’s billions, the board could keep him and other company leaders in check. If he started acting dangerously or against the interests of humanity, in the board’s view, the group could eject him. “The board can fire me, I think that’s important,” Altman told Bloomberg in June.

    “It turns out that they couldn’t fire him, and that was bad,” says Toby Ord, senior research fellow in philosophy at Oxford University, and a prominent voice among people who warn AI could pose an existential risk to humanity.

    The chaotic leadership reset at OpenAI ended with the board reshuffled to consist of establishment figures in tech, plus former US Secretary of the Treasury Larry Summers. Two directors associated with the Effective Altruism movement—the only women—were removed from the board. The episode has crystallized existing divides over how the future of AI should be governed. The outcome is seen very differently by doomers who worry AI is going to destroy humanity; transhumanists who think the tech will hasten a utopian future; those who believe in freewheeling market capitalism; and advocates of tight regulation to contain tech giants that cannot be trusted to balance the potential harms of powerfully disruptive technology against the desire to make money.

    “To some extent, this was a collision course that had been set on for a long time,” says Ord, who is also credited with co-founding the Effective Altruism movement, parts of which have become obsessed with the doomier end of the AI risk spectrum. “If it’s the case that the nonprofit governance board of OpenAI was fundamentally powerless to actually affect its behavior, then I think that exposing that it was powerless was probably a good thing.”

    Governance Gap

    The reason that OpenAI’s board decided to move against Altman remains a mystery. Its announcement that Altman was out of the CEO seat said he “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” An internal OpenAI memo later clarified Altman’s ejection “was not made in response to malfeasance.” Emmett Shear, the second of two interim CEOs to run the company between Friday night and Wednesday morning, wrote after accepting the role that he’d asked why Altman was removed. “The board did not remove Sam over any specific disagreement on safety,” he wrote. “Their reasoning was completely different from that.” He pledged to launch an investigation into the reasons for Altman’s dismissal.

    The vacuum has left space for rumors, including that Altman was devoting too much time to side projects or was too deferential to Microsoft. It has also nurtured conspiracy theories, like the idea that OpenAI had created artificial general intelligence and the board had flipped the kill switch on the advice of chief scientist, cofounder, and board member Ilya Sutskever.

    “What I know with certainty is we don’t have AGI,” says David Shrier, professor of practice, AI and innovation, at Imperial College Business School in London. “I know with certainty there was a colossal failure of governance.”
