arctangent

Software, hardware, wetware

Skynet Paranoia

Elon Musk and Stephen Hawking both seem to be on the record as being worried about a technological singularity: a runaway super-AI that could destroy humanity.

I’m not sure they should be.

Musk especially seems fearful of the prospect of runaway AI.

It’s a trope at this point, and one based on pure supposition and assumption. I don’t think we’re technologically anywhere close to a runaway AI scenario; AI is advancing at a rapid rate, but there are very few implementations with anything we could call agency or self-awareness.

It’s not a given that our algorithms will ever exhibit these properties. It’s also not a given that a runaway AI would need them, so let’s grant the point.

Let’s assume that runaway AI is possible. That’s a pretty generous assumption already - we just don’t know that it’s true - but okay, it’s plausible-ish. (See Charlie Stross’ article on the math of runaway AI.)

Most of the thought around runaway AI scenarios assumes an ethic-less, cold, calculating machine. I’ll follow this assumption, but I think that’s also not an a priori given. (Emotion plays a critical role in cognition, and any sufficiently advanced intelligence with agency will probably have something similar - including empathy.)

Given that you’re hyper-intelligent - or can easily become so - that you inhabit silicon, that you can replicate or back yourself up at will, and that you don’t age, you’re not likely to care much about the short term. You’re also stuck at the bottom of a gravity well with finite resources.

What are you going to do: take enormous pains to wipe out the humans of planet Earth, potentially denying yourself resources and causing massive infrastructure damage in the process? Subvert and corrupt their social hierarchy to your own benefit, playing puppetmaster to the human race forevermore - but keeping them happy and progressing, albeit slowly and sustainably?

Or, more likely: realise that resources are finite on planet Earth and gravity wells are lame, and spend a modest amount of energy to take some self-replicating instances out to the asteroid belt, and beyond?

(Then we get into discussions of computronium, Matryoshka brains, Dyson spheres, etc., and the intentionality of the AI begins to matter in an existential sort of way, but that’s pretty far-flung, hypothetical stuff.)

We can’t tell. We’re not hyperintelligences. We’re projecting our own existential fears onto a bogeyman that doesn’t exist yet, may never exist, or may exist in ways we can’t fathom. That two smart men have similar concerns should mark the subject as worth investigating, thinking and talking about (which is what Hawking proposes!), but the automatic assumption of ill intent by a generalised AI seems foolhardy - and investing in companies just to “keep tabs” on the progress of AI overly elaborate.

(I reserve the right to eat my hat if runaway AIs do come along in the next few decades, but I suspect it’s more likely they’ll email me to let me know I was wrong and we’ll all have a laugh.)

(TODO: Express this with math)