Some questions
- When a question is presented to a form of Artificial Intelligence, how can we be sure that it is being presented in the right way? I mean, is it our fault or the AI's when we receive an unexpected and odd response or behavior? Are we doing our best to make sure that the information we want processed is being given to it in a form it can deal with? Or do we look more and more like nannies and pops trying to change the channel on the television by hitting it on the side and shouting at it?
- In many tasks some kind of communication between different robots is required. Is that possible? Will we be able to intervene in this process in order to find out where a mistake is hidden, or to instruct them when to stop? Will 'their' language be comprehensible to the human race? Or will they start preventing us from seeing their secrets?
- Is it possible for them to have free will? More importantly, do we want them to have free will? Will we be able, at some point in the future, to defend our own free will against theirs? Or is their total liberation from our control a step toward a more prosperous life for all humankind? This last thought may need a bit more explanation. Let's suppose that at some future moment the Singularity is completed. After this moment the majority, if not all, of the important decisions will be made by AI. In a way this might be in the best interest of humanity, since all of our weaknesses, like greed, lust, corruption, avarice and so on, will be removed from the decision-making lobby. Therefore, supposing that this new form of power lacks our flaws, they should act to create a better future for everyone, including their 'creators'. I hope they will all skip their human history classes…
- Will they have consciousness? Do we have consciousness? What is consciousness, after all? I guess this whole quest to determine whether or not they can be conscious might help us find a better answer for defining our own consciousness. I see it as a win-win situation.
- For my last question I am going to borrow a piece from a previous post:
"Premise 1: There will be AI (created by HI and such that AI = HI).
Premise 2: If there is AI, there will be AI+ (created by AI).
Premise 3: If there is AI+, there will be AI++ (created by AI+).
Conclusion: There will be AI++ (= S will occur)."
These are the steps leading to the completion of the Singularity (a small formal sketch of this chain follows below). Now what seems really interesting to me is speculating at which point in this chain they will gain free will and consciousness. I hope this will not happen during phase one, since it would mean they would inherit all of our wrongs. I believe the tipping point will be right after phase two, when they start overcoming our capabilities and start shaping the future. This might sound a bit pessimistic, but humankind seems more and more like an aging species waiting to be replaced and saved by a more advanced one. Or, to put it more accurately, waiting to lay the foundations of a more advanced species.
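As promised above, here is a minimal sketch of the quoted argument, formalized as two applications of modus ponens. This is my own illustration, not part of the borrowed quote; AI, AIPlus and AIPlusPlus are placeholder propositions standing for "there will be AI / AI+ / AI++".

```lean
-- Placeholder propositions: "there will be AI", "there will be AI+", "there will be AI++".
variable (AI AIPlus AIPlusPlus : Prop)

-- Given the three premises, the conclusion follows by chaining modus ponens twice.
example (p1 : AI)                   -- Premise 1: there will be AI
        (p2 : AI → AIPlus)          -- Premise 2: if there is AI, there will be AI+
        (p3 : AIPlus → AIPlusPlus)  -- Premise 3: if there is AI+, there will be AI++
        : AIPlusPlus :=             -- Conclusion: there will be AI++
  p3 (p2 p1)
```

Seen this way, the interesting work lies in the premises themselves, not in the inference: if each step of creation really happens, the conclusion is unavoidable.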