The "Freeola Customer Forum", which includes Retro Game Reviews, has been archived and is now read-only. You cannot post or create a new thread or review on this forum.
Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.
Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? These questions are investigated. Some possible answers (and some further dangers) are presented.
-------------------------------------------------------
http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html
--------------------------------------------------------
A paper on the birth of AI and the possibilities it presents. I've posted this link here before, but with all the Matrix hype going around, I thought some people might wanna read it.
Interesting stuff!
For those of you who find reading boring, here is another link:
---------------------------------------------
http://www.trevorvanmeter.com/flyguy/
---------------------------------------------
Just like Homer Simpson ......
"So Homer, how do you hope to pass this exam?"
"Well, during the exam, I plan to hide under a pile of coats and hope everything turns out OK."
> Exactly, scientists do it mainly for advancement in science itself. If
> one day an evil singularity does kill us all, blame the Government
> for butting in AI research.
I blame the government for most things.
"Where's my shoes... must be the work of the government!"
After all, ten years ago we were still mincing about with 25MHz 386 PCs, and ten years before that it was 48K Spectrums.
Who knows where we'll be in another 50 years, technologically.
I suppose it is possible, but I would also theorise that such a project would be short-lived even if it came to fruition; taken to an extreme, so would those involved with it. The Matrix, A.I., Terminator and others are all films, and as such paper over the real facts that would prevent such scenarios occurring. One slightly obvious omission from the Animatrix's Second Renaissance tale is that the humans could have used EMP weapons - as the resistance does in The Matrix, and as we actually have now - to easily defeat the robots.
It is in no-one's interest to create something that can refuse to do what it's told - only something that unquestioningly does what we ask of it - and such a creation would not be true AI.
We're all screwed, by the way. Then again, I suppose we were as soon as those damn Sumerians invented the wheel.
Thought-provoking stuff, and while I make jest, it is nevertheless a serious issue, and one we shouldn't leap into without asking ourselves serious ethical and theoretical questions.
> Right exactly Belldandy - once again it's all assumptions made by
> people who have an over active imagination, failing to realise that
> reality is rather different and in this case, AI harder to achieve.
Yes, you're right that imagination plays a big part, but that doesn't mean it should be disregarded. Look at da Vinci: he created drawings of machines like the helicopter many years before they were seen as viable by 'those in the know'.
We may be a long, long way off. Copying a human brain is currently impossible, mainly due to the scale: the human brain outperforms all the computers in the world put together by a huge margin. If we were to advance our computers - in processing power, memory and storage - to the stage where we had something remotely like the human brain, could we then conceivably create intelligence?