Viewing Thread:
"Humans Vs robots."

The "Freeola Customer Forum" forum, which includes Retro Game Reviews, has been archived and is now read-only. You cannot post here or create a new thread or review on this forum.

Sun 14/10/01 at 23:53
Regular
"smile, it's free"
Posts: 6,460
Edwin25 wrote:
> Well VenomByte, I can see you are very knowledgeable on programming and the
> whole AI subject, which is good; I love debating this issue. You see, biology
> breaks down to chemistry, and chemistry to math. Do you really believe that we
> are just a lucky mathematical equation?

Possibly. I really don't know.

> I'm not saying there is a god, but there is definitely something lurking just
> out of our perception that has a great deal of meaning. Maybe we created it.
> Maybe something else did. But we are not an accident. Even if we were, we are
> the only sentient species known to exist (I believe in aliens, just not clever
> ones)

Isn't it a little bit arrogant to assume that we are the ONLY intelligent life form in the entire universe?

Untold billions of stars, many billions of those with planets of their own. It's been suggested that primitive life could even survive on Venus. What makes you think that none of these planets could have had life evolve on them to the same level as ours, or indeed a far greater one?

> and we have pulled ourselves from the
> primordial ooze into the realm of gods.

Careful here... who's to say our achievements are really that great compared to what may be possible?

> We can leave our planet! We create! We
> love! I don't believe in gods or robots, I believe in humanity.

Humanity. The only species to kill other members of its own race when it doesn't even need to. The only species which wipes itself out in huge droves over petty squabbles. The pack hunters who care only for themselves, leaving thousands starving and homeless.

> Robots will be
> our creation, but as we are flawed (even a diamond has a flaw somewhere) robots
> will have these flaws magnified. What if a scientist created a robot with one
> letter in its programming out of sync, causing it to misinterpret everything and
> go haywire?

This can't happen in the CP. There is what is known as 'graceful degradation', which essentially means the system is largely unaffected by small errors. A traditional computer program can break down over a small error, or over data it does not understand, and this is the case with the SSSP. The CP, by contrast, is a non-brittle paradigm.
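To make that contrast concrete, here's a rough sketch (everything in it, from the dictionary of rules to the weights, is invented purely for illustration): an exact-match rule collapses the moment a single character is wrong, while a weighted sum only drifts a little when its inputs are slightly off.

# Toy contrast in Python: a brittle symbolic rule vs. a gracefully degrading weighted sum.

# Brittle: an exact-match rule. One wrong character and it knows nothing.
rules = {"hello": "greeting", "bye": "farewell"}
print(rules.get("helo", "NO IDEA"))            # -> NO IDEA

# Graceful: a weighted score only shifts slightly as the input gets noisier.
weights = [0.9, 0.8, 0.7]
clean = [1.0, 1.0, 1.0]
noisy = [0.9, 1.1, 0.8]                        # small errors in the input
score = lambda xs: sum(w * x for w, x in zip(weights, xs))
print(score(clean), score(noisy))              # roughly 2.4 vs 2.25 - close, not a crash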


> Even an insane man (or maybe just a smarter man) can eat, talk and
> feel. A broken robot cannot.

Who says what these feelings really are? Different feelings translate only to different activity levels in different parts of the brain, and to triggered chemical flows. You feel emotions because your brain tells you you feel them. If a robot can understand an emotion, who's to say whether or not it can truly feel it? If you think you feel an emotion, who can say otherwise?

> I program in Pascal, and I know just what can
> happen if the syntax is wrong by just one punctuation mark.

See above. Neural Network programming doesn't work like conventional languages.

> A robot may be
> able to simulate us, and even fool us, but they will never feel like us. Robots
> will not appreciate beauty, they will not love, they will not be able to break
> logic, as it is engraved in their binary being.

All higher level intelligence essentially comes from the fact that we are conscious of our own existence. We know when we are thinking, and we know what we are.

Emotions aren't magical. Anything which can be created, can be recreated. You can't argue that a robot can never feel them simply because it is a robot.

> As I mentioned before in my
> 'wall walking' analogy, robots cannot and never will be able to 'adapt'. Sure,
> they will be able to move and even learn basic things, but what would happen if
> they saw a rainbow? Would they smile, or would they file the picture away in
> some dark recessive hard drive and forget it?

You need to try to let go of the idea that robots must always be unfeeling, procedure-following devices. They may well be at the moment, but that doesn't mean they always will be.

> What if they experienced a phenomenon
> like we do every time we dream, and didn't know what to do? A being with logic
> is like a pompous king: when the real world hits them, they shatter.

Too many assumptions. Very little substance to back it up.

> I can
> understand why everyone wants robots to be sentient, as we all want to re-create
> ourselves in flesh, stone and even metal. This is because it is a natural
> feeling, and because we pine for company out of higher emotions. But remember
> this: a free man may choose to be a slave, and a slave may choose to be a free
> man, but a tool will be forever bound in servitude, for it has no will of its own.
> It was created by us and it WILL work for us.

The slave does your bidding because it's in his or her best interests (else punishment).
The tool does your bidding because it is a tool.

When a robot thinks, it is no longer a tool. When it is aware of its own existence, and has self-preservation, it is alive.

When machine intelligence comes, it won't be there because you told it to be there; it will be there because you gave it the potential, and it learnt. The first truly intelligent robot will be like a baby. It will learn to do things in its best interests. It won't be under our command, but will have to be disciplined in the same way as you or I.
Sun 14/10/01 at 22:13
Regular
"Death to the Infide"
Posts: 278
Well VenomByte, I can see you are very knowledgeable on programming and the whole AI subject, which is good; I love debating this issue. You see, biology breaks down to chemistry, and chemistry to math. Do you really believe that we are just a lucky mathematical equation? I'm not saying there is a god, but there is definitely something lurking just out of our perception that has a great deal of meaning.
Maybe we created it. Maybe something else did. But we are not an accident. Even if we were, we are the only sentient species known to exist (I believe in aliens, just not clever ones) and we have pulled ourselves from the primordial ooze into the realm of gods. We can leave our planet! We create! We love! I don't believe in gods or robots, I believe in humanity.
Robots will be our creation, but as we are flawed (even a diamond has a flaw somewhere) robots will have these flaws magnified. What if a scientist created a robot with one letter in its programming out of sync, causing it to misinterpret everything and go haywire? Even an insane man (or maybe just a smarter man) can eat, talk and feel. A broken robot cannot. I program in Pascal, and I know just what can happen if the syntax is wrong by just one punctuation mark.
A robot may be able to simulate us, and even fool us, but they will never feel like us. Robots will not appreciate beauty, they will not love, they will not be able to break logic, as it is engraved in their binary being.
As I mentioned before in my 'wall walking' analogy, robots cannot and never will be able to 'adapt'. Sure, they will be able to move and even learn basic things, but what would happen if they saw a rainbow? Would they smile, or would they file the picture away in some dark recessive hard drive and forget it? What if they experienced a phenomenon like we do every time we dream, and didn't know what to do? A being with logic is like a pompous king: when the real world hits them, they shatter.
I can understand why everyone wants robots to be sentient, as we all want to re-create ourselves in flesh, stone and even metal. This is because it is a natural feeling, and because we pine for company out of higher emotions. But remember this: a free man may choose to be a slave, and a slave may choose to be a free man, but a tool will be forever bound in servitude, for it has no will of its own. It was created by us and it WILL work for us.

VenomByte wrote:
> So you think a human brain will always be superior to a robotic
> mind.

> Why?

> A human brain is a mass of chemicals, neurons, synaptic links,
> and so forth, triggered into action by electrical pulses.

> The number of
> combinations of on/off neurons in the brain is far greater than the number of
> atoms in the universe, giving the brain an unimaginable potential.

> What about
> a robotic brain?
> A mass of on/off switches, which are connected and permuted
> by electrical signals.

> See the similarity?

> Don't try to compare 'bot' AI
> with the full potential of artificial intelligence. For starters, the technique
> used to write the behaviour for those bots is rather primitive. It uses a
> popular AI paradigm (for 'paradigm' read AI programming technique) called the
> Symbolic Search Space Paradigm (SSSP). The SSSP attempts to model human
> behaviour only by a series of 'If this is true, do this' type statements.
> There's no room for learning, and the level of intelligence is only as good as
> the person who wrote it.

> A paradigm with rather more potential (in my
> opinion) is the Connectionist Paradigm (CP). The CP is modelled on a much
> simplified version of the structure of the human brain. You have a selection of
> 'input nodes', which pass numbers through weighted branches to one or more
> layers of 'hidden nodes', which each have a number of their own weighted
> branches, eventually reaching a layer of output nodes, the output being
> determined by a mathematical formula applied to the branches entering each
> node. The output can be compared to a 'desired output', and the node weightings
> changed appropriately.

> So what does this mean?
> NetTalk was a program made a
> few years ago, based on the CP. Words were entered into the machine, and a sound
> was produced at the end. The correct sound was compared with this, and the
> program modified its node weights in order to become more accurate.

> The
> result? A program that started off babbling incoherently, but some ten hours
> later was speaking in clearly understandable English. A much repeated example of
> machine learning. If machines can learn, they can surely exceed us one day?

> If
> scientists can understand the brain better, it can be replicated better. With
> sufficient knowledge, an artificial intelligence can one day be created which
> can not only interpret the outside world, but can learn and adapt to events in
> this world.

> The bot which learns and corrects its bugs is the one you'll
> never beat.
Sun 14/10/01 at 21:13
Posts: 0
Yes, by a whopping 9%!!!
Sun 14/10/01 at 20:17
Regular
Posts: 14,117
91%?

Impressive. Still room for improvement though....

:-)
Sun 14/10/01 at 20:16
Regular
"smile, it's free"
Posts: 6,460
I love that stuff.

I got 91% on my 5000 word essay last term ;)
Sun 14/10/01 at 20:06
Regular
Posts: 14,117
I personally think my toaster has a greater level of AI than any 'bot I've played in any game.

And Venom, I've done that sort of stuff at uni as well.

It does my head in after a while...

:-)
Sun 14/10/01 at 19:57
Regular
"smile, it's free"
Posts: 6,460
So you think a human brain will always be superior to a robotic mind.

Why?

A human brain is a mass of chemicals, neurons, synaptic links, and so forth, triggered into action by electrical pulses.

The number of combinations of on/off neurons in the brain is far greater than the number of atoms in the universe, giving the brain an unimaginable potential.

What about a robotic brain?
A mass of on/off switches, which are connected and permuted by electrical signals.

See the similarity?

Don't try to compare 'bot' AI with the full potential of artificial intelligence. For starters, the technique used to write the behaviour for those bots is rather primitive. It uses a popular AI paradigm (for 'paradigm' read AI programming technique) called the Symbolic Search Space Paradigm (SSSP). The SSSP attempts to model human behaviour only by a series of 'If this is true, do this' type statements. There's no room for learning, and the level of intelligence is only as good as the person who wrote it.
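To make that concrete, 'If this is true, do this' bot logic boils down to something like the sketch below (the names and rules are made up for illustration, not code from any real game):

# A toy sketch of SSSP-style bot behaviour: a fixed priority list of hand-written
# rules. The bot can never act outside the situations its programmer anticipated.

def choose_action(bot):
    if bot["health"] < 25:
        return "retreat to health pack"
    if bot["enemy_visible"] and bot["has_ammo"]:
        return "shoot at enemy"
    if not bot["has_ammo"]:
        return "move to nearest ammo"
    return "follow patrol route"

print(choose_action({"health": 80, "enemy_visible": True, "has_ammo": True}))
# -> 'shoot at enemy'. Anything the rules don't cover falls through to the last
#    line, which is exactly why such bots never improvise.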

A paradigm with rather more potential (in my opinion) is the Connectionist Paradigm (CP). The CP is modelled on a much simplified version of the structure of the human brain. You have a selection of 'input nodes', which pass numbers through weighted branches to one or more layers of 'hidden nodes', which each have a number of their own weighted branches, eventually reaching a layer of output nodes, the output being determined by a mathematical formula applied to the branches entering each node. The output can be compared to a 'desired output', and the node weightings changed appropriately.
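As a very rough sketch of that structure (the sizes, the random weights and the squashing formula at each node are arbitrary choices for illustration):

# A bare-bones connectionist sketch: numbers flow from input nodes through
# weighted branches into a layer of hidden nodes, then on to a single output node.
import math
import random

random.seed(0)

def node_formula(x):
    # the mathematical formula applied at each node (a sigmoid here)
    return 1.0 / (1.0 + math.exp(-x))

n_inputs, n_hidden = 3, 4
w_input_to_hidden = [[random.uniform(-1, 1) for _ in range(n_inputs)]
                     for _ in range(n_hidden)]
w_hidden_to_output = [random.uniform(-1, 1) for _ in range(n_hidden)]

def forward(inputs):
    hidden = [node_formula(sum(w * x for w, x in zip(row, inputs)))
              for row in w_input_to_hidden]
    return node_formula(sum(w * h for w, h in zip(w_hidden_to_output, hidden)))

print(forward([0.5, 0.1, 0.9]))    # some value between 0 and 1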

So what does this mean?
NetTalk was a program made a few years ago, based on the CP. Words were entered into the machine, and a sound was produced at the end. The correct sound was compared with this, and the program modified its node weights in order to become more accurate.

The result? A program that started off babbling incoherently, but some ten hours later was speaking in clearly understandable English. A much repeated example of machine learning. If machines can learn, they can surely exceed us one day?
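That 'compare with the desired output and adjust the weights' loop, in miniature, looks something like this (nothing like NetTalk in scale; it's just a single node learning the AND function from examples rather than from hand-written rules):

# Produce an output, compare it with the desired output, and nudge the weights
# to shrink the error. After a few passes the node gets every example right.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

for epoch in range(50):
    for inputs, desired in examples:
        output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
        error = desired - output          # the 'correct answer' vs what was produced
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

for inputs, desired in examples:
    got = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
    print(inputs, "->", got)              # all four cases now come out correct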

If scientists can understand the brain better, it can be replicated better. With sufficient knowledge, an artificial intelligence can one day be created which can not only interpret the outside world, but can learn and adapt to events in this world.

The bot which learns and corrects its bugs is the one you'll never beat.
Sun 14/10/01 at 18:47
Regular
"Death to the Infide"
Posts: 278
However, I am always willing to change my mind if proof is shown. What do you guys think?
Sun 14/10/01 at 18:05
Regular
"Death to the Infide"
Posts: 278
I have been thinking about this for a long time, and after various heated debates with friends, I have come to a conclusion. Humans will always be better than robots. The reason being that, no matter how well programmed,
designed, or marketed, robots will never be truly sentient. Oh, they may one day seem sentient, and even the greatest biologists may not be able to tell the difference, but they will have 'bugs', something humans don't have. For example, take bots (basically robots with no body, used in games as opponents). When playing bots AND humans on Perfect Dark, I notice several key differences. No matter how good their aim, speed or weapons, human players always manage to trick the bots with carefully placed explosives, teamwork and tricks like getting behind a door and using a FarSight to shoot through it as soon as the bots try to open it.
Bots also tend to walk into doors and get stuck. I often have to put them out of their misery so they can regenerate and try again. The reason they walk into the wall, against all rationality, is because they are being told to by a faulty routine in their program. They have no free will to change that program. I know PD simulants and robots are quite different, but the fundamentals are the same. ROBOTS HAVE TO DO WHAT THEY ARE TOLD. They cannot 'break' their programming. They cannot choose.
Can a bullet become a pacifist in mid-air and stop? Can a wrench choose where it is used? No. Tools have no choice in how, when and where they are used. People always do. Humans learn and adapt. Robots can only do this as long as their programs allow them to. They cannot improvise.
If the programmer forgot to insert the movement program, even Data would have been a cripple, dragging himself along the floor like an idiot. Robots may be stronger and quicker, but they will never be smarter.

Freeola & GetDotted are rated 5 Stars

Check out some of our customer reviews below:

Continue this excellent work...
Brilliant! As usual the careful and intuitive production that Freeola puts into everything it sets out to do, I am delighted.
Very pleased
Very pleased with the help given by your staff. They explained technical details in an easy way and were patient when providing information to a non expert like me.

View More Reviews

Need some help? Give us a call on 01376 55 60 60

Go to Support Centre
Feedback Close Feedback

It appears you are using an old browser, as such, some parts of the Freeola and Getdotted site will not work as intended. Using the latest version of your browser, or another browser such as Google Chrome, Mozilla Firefox, or Opera will provide a better, safer browsing experience for you.