Recently, a number of scientists, including Stephen Hawking, warned about the possibility that an artificially created super-intelligence could bring an end to human civilization. Is this just panic, or is Skynet already waiting behind the next door?
Biological and Technological Evolution
First of all, let us take a short look at intelligence. So far, the most advanced intelligence on earth (and, as far as we can tell today, in the known universe) is the human brain. It is also the most complex structure, or machine (biochemical machine, if you insist), we know of.
The evolution of intelligence has come a long way since the origins of life. It literally took billions of years to eventually develop the human brain – and it was the game-changer for life on this planet. We have not yet understood what triggered the evolution from rather smart apes to Homo sapiens, or why no species before us developed such high intelligence, but obviously it happened – and here we are, having thoroughly changed the face of this planet.
Technological progress is quite different, though. The “evolution” of AI is just a few decades old, but despite the short time span humans have been developing artificial intelligence, the results are already quite remarkable. AIs have proven able to beat even the best chess players in the world (and Jeopardy players too), to form swarms of robots that create a basic collective intelligence, and they are slowly developing the ability to learn from their own mistakes.

All in all this might not sound terribly spectacular – we are still in the Stone Age of AI evolution – but the fundamental trends are there: analysis of syntax, abstraction, swarm intelligence, and learning from mistakes. It is just a matter of time until this set of skills becomes more refined and AI behavior becomes much more sophisticated.
And there is demand for AI: the military is developing autonomous drones that will decide whether or not to attack a target according to a defined set of rules, without a human intelligence remote-controlling them. The algorithms Facebook, Google, and others use to analyze us and predict our future behavior are also getting more and more powerful. Stock trading, too, is already highly dependent on automated processes, with AIs making trading decisions in fractions of a second, leaving any possibility of human control far behind.
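The "defined set of rules" style of decision-making mentioned above can be sketched in a few lines. This is purely an illustrative toy under invented assumptions – the rule names and thresholds are made up and have nothing to do with any real system:

```python
# Toy sketch of rule-based decision-making: an action is approved only
# if every rule in a fixed list agrees. All rules and thresholds below
# are invented for illustration.

def decide_engage(target, rules):
    """Return True only if every rule approves the target."""
    return all(rule(target) for rule in rules)

# Hypothetical rules: each is a predicate over a target description.
rules = [
    lambda t: t["confidence"] >= 0.95,     # identification is near-certain
    lambda t: not t["civilians_nearby"],   # no bystanders detected
    lambda t: t["hostile_act_observed"],   # hostile behavior was observed
]

target = {"confidence": 0.97, "civilians_nearby": False,
          "hostile_act_observed": True}
print(decide_engage(target, rules))  # True: all rules approve
```

The point of the sketch is how rigid such a scheme is: the machine never weighs anything outside its rule list, which is exactly why removing the human from the loop is controversial.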
So – in a nutshell – AI is progressing rapidly, and even in its current, still rather rudimentary state, it can already outperform human intelligence in certain (if limited) areas and is already used for decision-making in crucial fields. I think it is only a matter of time until we see AI much more sophisticated than we imagine today.
What is Super-Intelligence?
Science fiction has been using the term for several decades now – usually with skeptical undertones, which is quite understandable. A super-intelligence, by definition, would be vastly superior to human intelligence – perhaps as superior as human intelligence is to the intelligence other vertebrates have developed. The dangers of an encounter with something so superior are obvious. Just look at how nicely we treat all the species on earth inferior to our own.
It is, however, uncertain whether super-intelligences can really exist. So far, the most advanced intelligence is, as I said, the human mind. Whether more intelligence is really possible cannot be answered with certainty. At least human intelligence seems to have an upper limit. Perhaps a limitation by design? Or a kind of law of nature limiting the maximum intelligence possible? Or perhaps this is just our human chauvinism speaking, making us think there can't be someone smarter than us.

I guess we will never know until we have found one, or one has found us – or we have created one. Then, of course, the world might experience just another game-changing event.
Are Super-Intelligences a Threat?
The nature of super-intelligences means that we cannot fully understand them. They would simply be so superior to our own intelligence that their complexity would be beyond our grasp, and their thinking would be well out of our reach.
However, we can make predictions based simply on the laws of physics, which apply to everything, regardless of how dumb or smart something is.
One is that any system, including an AI, will need an energy source to work, and likely other resources too. So it would be in competition with every other system it encounters – which would be, first of all, us. Judging from human history, such competitive situations usually don't end well. Just think of the European discovery of America. Expecting a super-intelligence to act differently than humans would mean giving it quite some niceness-credit, wouldn't it?
I wouldn't rely on morals or even a sense of mercy towards humans. Actually, if you think it through a bit further, humans represent a serious threat: their vast arsenal of nuclear weapons is capable of wiping the earth clean of everything more highly developed than insects, several times over. For a super-intelligence, eliminating that danger would simply be a matter of self-preservation.
Or perhaps the super-intelligence will be totally self-sufficient and indifferent to humans. Perhaps even to life in general – including its own. We simply can't know.
So what to do?
That's a good question (okay, I am asking it myself, so I may be a bit biased). If I didn't know better, I would suggest that we establish concrete limits on the development and use of AI to ensure that humans will always have the final decision. But as I said: I don't think this is realistic. Somebody, somewhere, will evade such a law. It is just human nature to explore limits and circumvent bans, without self-restraint or self-censorship. Technological progress since the second half of the 20th century clearly shows that: even if a technology has the potential to exterminate the whole human race, it gets developed and stockpiled into gigantic arsenals. What is possible will be done.
So far, mankind has been quite lucky and has survived every threat, self-made or not, that it has ever faced. We cannot be sure, however, that this will remain so in the future. I do think the dangers of self-destruction are quite real, and perhaps the rule of AI is a development that already began when the first one was programmed. Once a new technology is in the world, you can't really stop it from proliferating.
The inevitable may come, or it may not. Perhaps there will be ways in the future to combine AI and biological intelligence; perhaps nothing at all will happen, super-intelligences will remain just a concept of science fiction, and AI will stay a nicely clever (but not too clever!) and helpful tool, helping humans solve their problems – again, we simply can't know.