Using our masters


"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."
Vernor Vinge, 1993

See? This number keeps appearing every time. Nevertheless, I think it is safe to say that this will not happen by 2023. But let's put aside the exact date of the Singularity's arrival for the time being. I would like to focus on the second part of this saying.

The end of the human era. The end of the human era as we know it, I believe, is what he means. It is true that the Singularity as an event can only be compared to the rise of human life on Earth. Since humanity took over, many other forms of life have been living with us: dogs, cats, horses, cows, Escherichia coli and so on. In most cases we don't exactly live together; we rather take advantage of them. However, if you were to ask other creatures, like Homo neanderthalensis, the Tasmanian tiger or the Pyrenean ibex, the answer you would get would probably not depict humans as fair rulers.

In which category will our fate fall? The answer is obviously tricky, and maybe we have to base it on our imagination as well as on science-fiction scenarios. What if the ultra-intelligent machine is the last invention man will need to make? I mean, what if we invent the perfect rulers, ones that would make all the right decisions for the whole world, but would have no desire to rule over or drive to extinction all other species? Clearly we ourselves were not able to act like this, but are we capable of helping to create such an entity?

This thought might seem ridiculous, but there is an argument that makes me rather optimistic. We humans emerged as a ruling class through the course of natural selection and years of struggle, which are still going on. Maybe this is why we have malevolent thoughts in our minds and resort to insidious actions: everything starts from our instinct to survive. But if a superintelligence is made the most intelligent species de novo, without needing to prove it or fight for it, then maybe, just maybe, it will act as a savior for life in general on planet Earth.

After all, we just have to follow Asimov's Laws.
First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Was it that difficult, Will Smith?
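Just to show how strict the priority ordering of the three Laws is, here is a toy sketch in Python. It is purely illustrative, not anything from Asimov or from real robotics: the boolean flags (`harms_human`, `disobeys_order`, and so on) are invented for this example, and real "harm" is of course not a boolean.

```python
# Toy sketch: Asimov's Three Laws as a strict priority filter over
# candidate actions. All flag names are hypothetical, for illustration.

def permitted(action: dict) -> bool:
    """Check an action against the Three Laws, in priority order."""
    # First Law: a robot may not injure a human being, or through
    # inaction allow a human being to come to harm.
    if action.get("harms_human", False):
        return False
    # Second Law: obey human orders, except where obeying would
    # conflict with the First Law.
    if action.get("disobeys_order", False) and not action.get("order_harms_human", False):
        return False
    # Third Law: protect own existence, unless self-sacrifice is
    # demanded by the First or Second Law.
    if action.get("endangers_self", False) and not (
        action.get("saves_human", False) or action.get("ordered", False)
    ):
        return False
    return True

# The robot may sacrifice itself to save a human (First Law > Third):
print(permitted({"endangers_self": True, "saves_human": True}))  # True
# But it may not harm a human even when ordered to (First Law > Second):
print(permitted({"harms_human": True, "ordered": True}))         # False
```

Notice that each law only gets a say if every law above it is satisfied; the lower laws never override the higher ones.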


The ultra-intelligent machine will be the last invention man will need to make. One way or another.
