The AI Book
Daily AI News
    Former White House advisors and tech researchers co-sign new statement against AI harms

15 June 2023

Two former White House AI policy advisors, along with over 150 AI academics, researchers and policy practitioners, have signed a new “Statement on AI Harms and Policy” published by ACM FAccT (the Conference on Fairness, Accountability, and Transparency), which is currently holding its annual conference in Chicago.

Alondra Nelson, former deputy assistant to President Joe Biden and acting director of the White House Office of Science and Technology Policy, and Suresh Venkatasubramanian, a former White House advisor who co-authored the “Blueprint for an AI Bill of Rights,” both signed the statement. It comes just a few weeks after a widely shared Statement on AI Risk, signed by top AI researchers and CEOs, cited concern about human “extinction” from AI, and three months after an open letter calling for a six-month pause on large-scale AI development beyond OpenAI’s GPT-4.

Unlike the previous petitions, the ACM FAccT statement focuses on the current harmful impacts of AI systems and calls for policy based on existing research and tools. It reads: “We, the undersigned scholars and practitioners of the Conference on Fairness, Accountability, and Transparency welcome the growing calls to develop and deploy AI in a manner that protects public interests and fundamental rights. From the dangers of inaccurate or biased algorithms that deny life-saving healthcare to language models exacerbating manipulation and misinformation, our research has long anticipated harmful impacts of AI systems of all levels of complexity and capability. This body of work also shows how to design, audit, or resist AI systems to protect democracy, social justice, and human rights. This moment calls for sound policy based on the years of research that has focused on this topic. We already have tools to help build a safer technological future, and we call on policymakers to fully deploy them.”

After sharing the statement on Twitter, Nelson cited the opinion of the AI Policy and Governance Working Group at the Institute for Advanced Study, where she has served as a professor since stepping down from the Biden administration in February.

“The AI Policy and Governance Working Group, representing different sectors, disciplines, perspectives, and approaches, agrees that it is necessary and possible to address the multitude of concerns raised by the expanding use of AI systems and tools and their increasing power,” she wrote on Twitter. “We also agree that both present-day harms and risks that have been unattended to and uncertain hazards and risks on the horizon warrant urgent attention and the public’s expectation of safety.”

Other AI researchers who signed the statement include Timnit Gebru, founder of the Distributed AI Research Institute (DAIR), as well as researchers from Google DeepMind, Microsoft, Stanford University, and UC Berkeley.

    VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.
