Gray Goo Paranoia
Recall how, in the early days of nanotechnology development, an irrational fear of gray goo taking over the world gripped the public imagination? That fear centered on tiny, self-replicating inorganic things, nano-sized, multiplying rapidly and consuming everything in their path. We already have something like this in the organic world: viruses. But viruses work only inside organic cells. Gray goo fears built upon the idea of inorganic replicating entities that some imagined capable of rapid, malevolent multiplication and destruction.
But though nanotechnology is ubiquitous these days, it is largely invisible, and rather helpful. Gray goo has not, thus far, taken over the world. Unless, that is, you count the ubiquitous manufactured waste with which we seem to clog and harm our environment.
Pundits, pontificators, and pernicious protectors are at it once again. Now it is machines that think and feel from which they wish to save us. All the while, of course, some vociferous prevaricators surreptitiously continue, and fund, development of the very technology about which they beat their public drums, in anticipation of windfall gains. Protectors of profit is perhaps a more apt label for such champions of capitalism, whipping up the consuming public's fancy with predictions of dire threats and doom, which the public laps up eagerly.
But how real is this fear? Will machines that think, feel, and learn take over humanity and the living world? That seems to be the villainous premise of the movie Chappie, due out in March, in which an endearingly human (obviously) robot becomes a champion of progress, of a next step in evolution as the film calls it, an advancement of human thought and philosophy implemented in technology.
Before making the quantum leap to the conclusion that a combination of thinking, feeling, and learning in entities different from us will result in an inevitable (rather human?) desire in them for world dominion, let's ask this: how realistic is it to build an entity that does all that humans (or even the simplest of tetrapod lifeforms) do? Can some lines of software code do this, as is so casually assumed in some Follywood depictions? Romantic and imaginative though such works of moving art may be, this ready realization of consciousness, self-awareness, or even feeling or imagination is something I argued against in a recent blog post.
But I'll concede this: I'll start to be concerned if we ever develop an inorganic brain that functions at the size, capacity, and speed of a flea's brain while consuming only as much energy as the flea's brain does in similar activities. Rest assured that such efficiency and capability remains many orders of magnitude away at present.
More significant than such advanced development, though, are some fundamental questions. Will any such intelligence that develops self-awareness successfully replicate and multiply, if self-interest is its principal drive? Can there be an entity that thinks, feels, and learns, yet shows only malevolence toward all others? Is the continuation and evolution of life, in any form, only competitive, or is it a balance between competition and cooperation?
Update (2/22/15): Oren Etzioni, of the Allen Institute for Artificial Intelligence, said in an interview with CNBC recently that machines “have no free will, no autonomy. They are no more likely to do damage than your calculator is likely to do its own calculations.”
Update (3/7/2015): More on Chappie and empathy by AI in a CNBC article.