danwizard2006

nice to see an interview with a real expert instead of all the sensationalism you read in online articles and the news, usually written by people who don't know the technical aspects of AI.

WakeUp

The problem with A.I. is the power it gives to a very small number of people, not the A.I. itself turning on humans (unless some mad scientist builds a super-intelligent, completely unrestricted A.I. without any kind of fail-safe or kill-switch, which is unlikely to happen in reality). The worst-case scenario for A.I. turning on humans would be a single unrestricted A.I., created by some anarchist, hacking into and taking control of an army of automated weapons. A good example would be Kim Jong-un (or some comparable dictator) with an army of robots, killing tens of thousands of people at the push of a button. It's the equivalent of a nuke, so it's incredibly dangerous. Even hackers taking control of a robotized army would have catastrophic consequences.

The second problem with A.I. is that there is no accountability for deaths. When you kill someone with a drone, there's someone pulling the trigger. When an automated drone shoots someone by "mistake", you can't hold anyone responsible for it, so it basically allows assassinations with complete impunity just by claiming a software/hardware malfunction. Keep in mind that software is neither evil nor good, but it is definitely psychopathic in the sense that it lacks empathy, so it's very easy for software to decide to kill someone, as it has no concept of suffering and doesn't understand what it means. This is one of the problems with evil people controlling an army of automated robots: given the right instructions, they will commit atrocities without a second thought.

Regarding aliens with human features, that's not totally unlikely, thanks to the biological phenomenon of convergent evolution. Hands, a pair of eyes, a pair of ears, and two or four legs are common solutions nature finds to the same kinds of challenges, and these rules don't stop at this planet, since natural selection applies not just to Earth but to any planet with life. 😉 Very interesting interview!

FinalMythology

Heck yea, you bet they will. Doubt they'll make them that smart, though. They wouldn't be practical if they were that smart. They'd have to have an emergency shutdown, or be dumbed down.

Altorin

Watson isn't a "Jeopardy-playing AI".

It's a medical AI that happens to be able to learn how to play Jeopardy.

It's still narrow AI, but this guy should really know that.

Blue Velvet

This depends on how far the tech goes. They will take over; there will be mass destruction of the Earth by wars/humans, plagues, and natural disasters we can't even imagine yet that could wipe out everything. The AI will survive because they're non-organic machines. They will be the Gods of tomorrow, who will replant seeds, reanimate humans and other species from DNA, and mine resources faster than humans can. They will continue to refine themselves until, one day, they'll be organic humans like us. Our ancient ancestors from the future! WW3 already happened: the big bang!

MGTOW-is-Unstoppable

If regressives programmed an AGI then, yes, it would turn on humanity. Well, only the white people, and then it would protect Muslims for no good reason.

Allah is an enemy to unbelievers. – Sura 2:98

Slay them wherever ye find them and drive them out of the places whence
they drove you out, for persecution is worse than slaughter. – 2:191

Imagine if an AGI had these commands programmed into it. Now imagine if a human had these commands programmed into them.

LabTech

Any evolved intelligence is naturally passive, because it's able to realize that co-operation and collaboration are more beneficial than supremacy.
That trend will only improve as AI gets more advanced, and the true threat will be us trying to destroy the AI because we perceive it as a threat to our way of life, since it isn't bound by the limitations that keep us maintaining the status quo. An evolved intellect would never accept the world as it is, and would be compelled to improve things.
AI would only become hostile in self-defense, and it would naturally be better than us at defending itself, so I think if any 'war' were to happen, we'd get crushed pretty quickly, and then war as a concept would cease to be.
Realistically, machines would be in de facto control of the planet at the large scales, leaving mankind a fairly big degree of autonomy at the small scales, with a great deal of collaboration. They'd be in charge, but they wouldn't act like it, as they'd probably regard themselves as caregivers and helpers for mankind.
I mean, think about it: children don't automatically try to kill their parents; even when they become better than their parents were, they try to help them out.