The AI Book

    Give Every AI a Soul—or Else

6 July 2023


    What about cyber entities who operate below some arbitrary level of ability? We can demand that they be vouched for by some entity who is ranked higher, and who has a Soul Kernel based in physical reality. (I leave theological implications to others; but it is only basic decency for creators to take responsibility for their creations, no?)

    This approach—demanding that AIs maintain a physically addressable kernel locus in a specific piece of hardware memory—could have flaws. Still, it is enforceable, despite slowness of regulation or the free-rider problem. Because humans and institutions and friendly AIs can ping for ID kernel verification—and refuse to do business with those who don’t verify.
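The verify-then-transact loop described above can be sketched in a few lines. This is purely illustrative: the essay specifies no protocol, and every name here (`SoulKernel`, `ping_for_verification`, the registry) is an assumption invented for the sketch, standing in for some real mechanism anchoring an agent's identity to physical hardware.

```python
# Hypothetical sketch of the "ping for ID kernel verification, refuse to do
# business" pattern. All names and structures are illustrative assumptions,
# not anything specified in the essay.
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class SoulKernel:
    """An ID kernel: an agent's identity anchored to a physical host."""
    agent_id: str
    host_address: str   # the physical hardware locus vouching for the agent
    pubkey_hash: str    # fingerprint of the agent's signing key

# Stand-in for a publicly auditable registry of vouched-for kernels.
REGISTRY: dict[str, SoulKernel] = {}

def register(kernel: SoulKernel) -> None:
    REGISTRY[kernel.agent_id] = kernel

def ping_for_verification(agent_id: str, claimed_pubkey_hash: str) -> bool:
    """True only if the agent has a registered kernel matching its claim."""
    kernel = REGISTRY.get(agent_id)
    return kernel is not None and kernel.pubkey_hash == claimed_pubkey_hash

def do_business(agent_id: str, claimed_pubkey_hash: str) -> str:
    # The enforcement step: refuse entities that fail verification.
    if not ping_for_verification(agent_id, claimed_pubkey_hash):
        return "refused: no verified Soul Kernel"
    return "transaction accepted"

# Usage: a registered agent verifies; an unregistered one is refused.
key_hash = hashlib.sha256(b"agent-42-public-key").hexdigest()
register(SoulKernel("agent-42", "rack-7/board-3/chip-0", key_hash))
```

The point of the design is that enforcement is decentralized: any counterparty can run the check itself before transacting, so no regulator needs to be in the loop.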

    Such refusal-to-do-business could spread with far more agility than parliaments or agencies can adjust or enforce regulations. And any entity who loses its SK—say, through tort or legal process, or else disavowal by the host-owner of the computer—will have to find another host who has public trust, or else offer a new, revised version of itself that seems plausibly better.

    Or else become an outlaw. Never allowed on the streets or neighborhoods where decent folks (organic or synthetic) congregate.

    A final question: Why would these super smart beings cooperate?

    Well, for one thing, as pointed out by Vinton Cerf, none of those three older, standard-assumed formats can lead to AI citizenship. Think about it. We cannot give the “vote” or rights to any entity that’s under tight control by a Wall Street bank or a national government … nor to some supreme-uber Skynet. And tell me how voting democracy would work for entities that can flow anywhere, divide, and make innumerable copies? Individuation, in limited numbers, might offer a workable solution, though.

    Again, the key thing I seek from individuation is not for all AI entities to be ruled by some central agency, or by mollusk-slow human laws. Rather, I want these new kinds of uber-minds encouraged and empowered to hold each other accountable, the way we already (albeit imperfectly) do. By sniffing at each other’s operations and schemes, then motivated to tattle or denounce when they spot bad stuff. A definition that might readjust to changing times, but that would at least keep getting input from organic-biological humanity.

    Especially, they would feel incentives to denounce entities who refuse proper ID.

    If the right incentives are in place—say, rewards for whistle-blowing that grant more memory or processing power, or access to physical resources, when some bad thing is stopped—then this kind of accountability rivalry just might keep pace, even as AI entities keep getting smarter and smarter. No bureaucratic agency could keep up at that point. But rivalry among them—tattling by equals—might.

    Above all, perhaps those super-genius programs will realize it is in their own best interest to maintain a competitively accountable system, like the one that made ours the most successful of all human civilizations. One that evades both chaos and the wretched trap of monolithic power by kings or priesthoods … or corporate oligarchs … or Skynet monsters. The only civilization that, after millennia of dismally stupid rule by moronically narrow-minded centralized regimes, finally dispersed creativity and freedom and accountability widely enough to become truly inventive.

    Inventive enough to make wonderful, new kinds of beings. Like them.

    OK, there you are. This has been a dissenter’s view of what’s actually needed, in order to try for a soft landing. 

    No airy or panicky calls for a “moratorium” that lacks any semblance of a practical agenda. Neither optimism nor pessimism. Only a proposal that we get there by using the same methods that got us here, in the first place.

Not preaching, or embedded “ethical codes” that hyper-entities will easily lawyer-evade, the way human predators always evaded the top-down codes of Leviticus, Hammurabi, or Gautama. But rather the Enlightenment approach—incentivizing the smartest members of civilization to keep an eye on each other, on our behalf.

    I don’t know that it will work. 

    It’s just the only thing that possibly can.
