I'm a Singularitarian (without the religious implications) who believes that part of evolution will eventually see technology transcend biology. In fact, there's really no clear-cut difference between them.
In that mindset, I watched IBM's Watson play Jeopardy! tonight against two top former human contestants, and it was amazing how well the machine did. That led to a discussion about the eventual merger of human and machine. Some people are horrified by the idea. I find it cool as hell. We're already cyborgs anyway, what with artificial limbs and organs. But there is the question of what will be lost in the transition from minerals and goo to plastics and silicon, and whether it will be worth the gain.
I've been thinking about the merger of humans and machines since Robby the Robot appeared on screen in "Forbidden Planet"...when I was at the ripe old age of 7. And almost all of the baby boomers remember the iconic warning from the Robot in "Lost in Space": "Danger, Will Robinson, danger!" Robby was like a super cool pet or a playmate you didn't have to feed or clean up after. That would change. The robot's helpfulness would eventually take a sinister turn.
By 1968 there was "2001: A Space Odyssey" with the cyclopean but warmly engaging artificial intelligence of HAL, who by all accounts was not just super fast and smart, but sentient. With that miracle of self-awareness came danger to humans. The '70s saw "Battlestar Galactica" and its wars with rebellious robots. Then came "The Terminator" and the concept of Skynet, the evil intelligence that wanted to eliminate humans because they were an impediment and a disease, a premise "The Matrix" later amplified.
It's notable that all through the various incarnations of Star Trek the computers were never given full rein over the ship or its decision making except in the most dire of situations. Even though the series was set centuries in the future, computers were portrayed more or less as highly sophisticated tools tethered to the service of human needs, the exception being Mr. Data, who at times proved dangerous to the humans around him as well.
Isaac Asimov's famous Three Laws of Robotics from the 1940s (a fourth, the Zeroth Law, was added later), which attempted to mitigate these sorts of imagined dangers, are:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
People have been thinking about this robots-vs-humans question for a long time, and many of the ethical issues remain unresolved. For example, say a business owned a sentient machine and wanted to upgrade, and part of that process would mean turning the sentient machine off. The machine, realizing their intent, fires off emails to top lawyers to take the company to court to prevent its demise. Does the company own the "life" that the machine senses it has, or is the machine simply a slave with no rights? In my opinion, the machine has every right to live. But others might argue otherwise. The time to really put our shoulders to the wheel on these matters is right now. Because as science-fiction as it sounds, the reality of a conscious artificial intelligence is closer than you think.
Check out the Time article below.
Thanx for reading,