This is how the terminator started!

Sorry if the OP was being tongue-in-cheek, but there was absolutely zero intention here from the robot from what I can see. It didn't 'go for' the boy; he got in the way of something it was programmed to do and was injured. There's no sentience here. It's poor safety from the robot's creators, for sure, but it wasn't vengeance or anger.

Pretty sure "AI is gonna kill us all" (it does scare me) is part of the news cycle these days.
 
I have watched quite a few YouTube videos about AI robots and the strange/scary things they say. I realise some of the videos I've watched have been dramatised for views and only give a brief, worst-case glimpse into a much, much larger industry.

What I have seen is that when an AI robot/system is programmed with human emotions (there's probably a more technical word for this, but I'm not very bright), the AI seems to end up showing what could be described in human terms as symptoms of PTSD and/or personality disorders. The AI has the intelligence and ability to learn vast amounts of information, but not the emotional complexity or personality to handle that information the way a human handles life experiences. It's a bit like creating a psychopath along the lines of Ted Bundy, who was very intelligent and charming but didn't understand or have the full range of human emotions. I suppose emotions could eventually be programmed in to develop independently, but the human brain, personality and thinking are so complex that it isn't as simple as that. Every experience and decision in our lives, and the people around us, affect this long term and go towards shaping who we are. I agree with Big_Nothing in regards to this clip: it was poor safety engineering that the machine didn't have a better range of safety features such as movement sensors, heat sensors, etc.
 
Accident or not .. how do we know that, afterwards, the robot didn't secretly enjoy the pain it caused? Huh? Huh?? HUH? :eek:
 
I think I can put your mind at rest here. No AI is anywhere near sentience, or indeed general intelligence. We don't even know how to create an AGI.

On the flip side, some scientists think that consciousness is an emergent property of complex systems. If that's true, then once an AI program becomes complex enough it could develop self-awareness without programmers having to understand how.

Recently a medical chatbot told a fake patient to kill themselves. Some found this horrific, but those in the industry think it clearly demonstrates how basic AI still is.
 