Editor's note: Greg Scoblete is the technical editor of PDN magazine.
Follow him on Twitter @GregScoblete.
The views expressed are his own.
For more on future technologies, watch the upcoming GPS "Moonshots" special at 10 a.m. and 1 p.m. ET on December 28. (CNN) --
Imagine you're the kind of person who worries about a future in which robots become smart enough to threaten human survival.
For years you've been dismissed as a crank, lumped into the category of people who see Elvis lurking in their waffles.
In 2014, you found yourself in good company.
This year, Stephen Hawking, arguably the greatest living scientific mind, and Elon Musk, a leading technology industrialist, both voiced concern about the potentially fatal rise of artificial intelligence.
Philosophers, physicists and computer scientists joined them in warning about the dangers of smarter-than-human machines.
In a widely cited piece co-authored with MIT physicist Max Tegmark, Nobel laureate Frank Wilczek and computer scientist Stuart Russell, Hawking sounded the alarm about artificial intelligence.
One can imagine such technology (AI), they warned, "outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand."
"Whereas the short-term impact of AI depends on who controls it," they continued, "the long-term impact depends on whether it can be controlled at all."
Musk put the point even more starkly, declaring on Twitter that artificial intelligence is humanity's biggest "existential risk" and comparing it to "summoning the demon."
The debate over artificial intelligence was given a considerable boost this year by the publication of philosopher Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies," a deep study of how and why artificial intelligence could prove catastrophic (documentary filmmaker James Barrat's "Our Final Invention" makes a similar case).
Bostrom directs Oxford's Future of Humanity Institute, one of several new institutions devoted to studying existential threats to humanity, with artificial intelligence a central concern.
In May, a similar group, the Future of Life Institute, was launched at MIT.
Anxiety about artificial intelligence, in academia at least, is booming.
They were right to worry.
The first, and most immediate, concern is that artificial intelligence could put large numbers of people out of work.
A University of Oxford study by Carl Frey and Michael Osborne on the future impact of technology makes this clear: of the more than 700 occupations they analyzed, nearly half could plausibly be done by computers in the future.
This wave of computerization won't simply destroy low-wage, low-skill jobs (though those are in serious danger); it will also reach white-collar and service jobs previously considered immune.
Technology is marching into our physical and mental labor alike.
While unemployment is a serious threat, we've seen this movie before.
Through past technological upheavals, humans have cleverly created new jobs and industries from the ones technology made obsolete.
Even if artificial intelligence encroaches on more creative and intellectual industries, we may well keep our collective heads above water (hell, we might even get to work less).
What should worry us more is the prospect of humans losing their status as the foremost intelligence on Earth.
For those worried about artificial intelligence, today's efforts to develop self-correcting algorithms, combined with the relentless growth of computing power and the spread of sensors collecting all manner of data about the world, will push AI toward human-level intelligence, and ultimately beyond it.
This is the event known as the "intelligence explosion," a term the computer scientist Irving John Good introduced in a 1965 paper outlining the path of AI development.
What makes the intelligence explosion so worrisome is that intelligence is not just another tool or technology.
We may think of artificial intelligence as something we use, like a hammer or a screwdriver, but that is fundamentally the wrong way to think about it.
A sufficiently advanced intelligence is, like us, a creative force.
The more powerful it becomes, the more it can reshape the world around it.
An artificial intelligence need not be malicious to pose a catastrophic danger to humanity.
When computer scientists talk about the threat a superintelligent AI could pose to humans, they are not picturing the Terminator or The Matrix.
Instead, the scenario is usually more banal: humans give an artificial intelligence a simple goal (manufacturing paper clips is the oft-used example), and the AI, pursuing that goal single-mindedly, commandeers all the energy and raw materials on the planet to churn out paper clips, outwitting and outmaneuvering any humans who try to stop it.
In the Hollywood version there are always plucky humans left to fight back, but against a truly superior intelligence that outcome isn't credible.
It would be like mice trying to outsmart humans (and in this scenario, we're the mice).
In that event, AI researchers like Keefe Roedersheimer foresee a less inspiring ending: "All the people are dead."
Needless to say, not everyone agrees with this bleak forecast.
In the AI optimist camp is Ray Kurzweil, the futurist and Google director of engineering, who also sees intelligent machines contributing to the end of humanity as we know it; only in Kurzweil's telling, humans are not eliminated but merged into superintelligent machines.
For Kurzweil, human-machine symbiosis is not a technological catastrophe but the ultimate liberation of humanity from its biological weaknesses.
Others, like Gary Marcus of New York University, are skeptical that artificial intelligence will ever reach the level of human intelligence and cognition, let alone surpass it.
As Marcus told me this year: "I don't know of any evidence that we should be worried, but I also don't know of any evidence that we shouldn't be."
Irving John Good once described the superintelligent machine as "the last invention that man need ever make," since after that the machines themselves would take over innovation and invention.
Even if the path from Siri to extinction isn't a straight line, we humans would do well to watch our machines more closely.