Craxme.com

View: 1660|Reply: 8

[Articles & News] Can AI escape our control and destroy us?

Post time: 21-5-2019 05:09:56 Posted From Mobile Phone
━━━━━━━━━━━━━━━━━

Skype cofounder Jaan Tallinn bankrolls efforts to keep superintelligent AI under control.
"Preparing for the event of general AI surpassing human intelligence is one of the top tasks for humanity." —Jaan Tallinn
Illustration by Leon Dijkstra
“It began three and a half billion years ago in a pool of muck, when a molecule made a copy of itself and so became the ultimate ancestor of all earthly life. It began four million years ago, when brain volumes began climbing rapidly in the hominid line. Fifty thousand years ago with the rise of Homo sapiens. Ten thousand years ago with the invention of civilization. Five hundred years ago with the invention of the printing press. Fifty years ago with the invention of the computer. In less than thirty years, it will end.”
Jaan Tallinn stumbled across these words in 2007, in an online essay called “Staring into the Singularity.” The “it” is human civilization. Humanity would cease to exist, predicted the essay’s author, with the emergence of superintelligence, or AI that surpasses the human intellect in a broad array of areas.
Tallinn, an Estonia-born computer programmer, has a background in physics and a propensity to approach life like one big programming problem. In 2003, he had co-founded Skype, developing the backend for the app. He cashed in his shares after eBay bought it two years later, and now he was casting about for something to do. “Staring into the Singularity” mashed up computer code, quantum physics, and Calvin and Hobbes quotes. He was hooked.
Tallinn soon discovered that the essay’s author, self-taught theorist Eliezer Yudkowsky, had written more than 1,000 articles and blog posts, many of them devoted to superintelligence. Tallinn wrote a program to scrape Yudkowsky’s writings from the internet, order them chronologically, and format them for his iPhone. Then he spent the better part of a year reading them.
The term “artificial intelligence,” or the simulation of intelligence in computers or machines, was coined back in 1956, only a decade after the creation of the first electronic digital computers. Hope for the field was initially high, but by the 1970s, when early predictions did not pan out, an “AI winter” set in. When Tallinn found Yudkowsky’s essays, AI was undergoing a renaissance. Scientists were developing AIs that excelled in specific areas, such as winning at chess, cleaning the kitchen floor, and recognizing human speech. (In 2007, the resounding win at Jeopardy! of IBM’s Watson was still four years away, while the triumph at Go of DeepMind’s AlphaGo was eight years off.) Such “narrow” AIs, as they’re called, have superhuman capabilities, but only in their specific areas of dominance. A chess-playing AI can’t clean the floor or take you from point A to point B. But super-intelligent AI, Tallinn came to believe, will combine a wide range of skills in one entity. More darkly, it also might use data generated by smartphone-toting humans to excel at social manipulation.
Reading Yudkowsky’s articles, Tallinn became convinced that super­intelligence could lead to an explosion or “breakout” of AI that could threaten human existence—that ultrasmart AIs will take our place on the evolutionary ladder and dominate us the way we now dominate apes. Or, worse yet, exterminate us.
After finishing the last of the essays, Tallinn shot off an email to Yudkowsky—all lowercase, as is his style. “i’m jaan, one of the founding engineers of skype,” he wrote. Eventually he got to the point: “i do agree that…preparing for the event of general AI surpassing human intelligence is one of the top tasks for humanity.” He wanted to help. When he flew to the Bay Area for other meetings soon after, he met Yudkowsky at a Panera Bread in Millbrae, California, near where he lives. Their get-together stretched to four hours. “He actually, genuinely understood the underlying concepts and the details,” Yudkowsky recalls. “This is very rare.” Afterward, Tallinn wrote a check for $5,000 to the Singularity Institute for Artificial Intelligence, the nonprofit where Yudkowsky was a research fellow. (The organization changed its name to Machine Intelligence Research Institute, or MIRI, in 2013.) Tallinn has since given it more than $600,000.
The encounter with Yudkowsky brought Tallinn purpose, sending him on a mission to save us from our own creations. As he connected on the issue with other theorists and computer scientists, he embarked on a life of travel, giving talks around the world on the threat posed by superintelligence. Mostly, though, he began funding research into methods that might give humanity a way out: so-called friendly AI. That doesn’t mean a machine or agent is particularly skilled at chatting about the weather, or that it remembers the names of your kids—though super-intelligent AI might be able to do both of those things. It doesn’t mean it is motivated by altruism or love. A common fallacy is assuming that AI has human urges and values. “Friendly” means something much more fundamental: that the machines of tomorrow will not wipe us out in their quest to attain their goals.
Nine years after his meeting with Yudkowsky, Tallinn joins me for a meal in the dining hall of Cambridge University’s Jesus College. The churchlike space is bedecked with stained-glass windows, gold molding, and oil paintings of men in wigs. Tallinn sits at a heavy mahogany table, wearing the casual garb of Silicon Valley: black jeans, T-shirt, canvas sneakers. A vaulted timber ceiling extends high above his shock of gray-blond hair.
At 46, Tallinn is in some ways your textbook tech entrepreneur. He thinks that thanks to advances in science (and provided AI doesn’t destroy us), he will live for “many, many years.” His concern about superintelligence is common among his cohort. PayPal co-founder Peter Thiel’s foundation has given $1.6 million to MIRI, and in 2015, Tesla founder Elon Musk donated $10 million to the Future of Life Institute, a technology safety organization in Cambridge, Massachusetts. Tallinn’s entrance to this rarefied world came behind the Iron Curtain in the 1980s, when a classmate’s father with a government job gave a few bright kids access to mainframe computers. After Estonia became independent, he founded a video-game company. Today, Tallinn still lives in its capital city—which by a quirk of etymology is also called Tallinn—with his wife and the youngest of his six kids. When he wants to meet with researchers, he often just flies them to the Baltic region.

Please continue reading this article here: Source

Post time: 23-5-2019 02:23:43 Posted From Mobile Phone
Nice

Post time: 23-5-2019 21:50:37
It's paradoxical, but if humans, with their supposedly superior intelligence, allow AI to rout them out, then they surely can't be intelligent. So the logical conclusion is that AI generated from such an inferior intelligence can never reach the altruistic, benefit-granting kind of AI; it may remain risk-ridden until that first paradox is surmounted!


Author | Post time: 24-5-2019 02:26:32 Posted From Mobile Phone
ssangra 23-5-2019 11:20 AM
Its paradoxical but if humans with supposedly superior intelligence allow AI to rout them out they s ...

Well, machines (with or without intelligence) surpassed human beings in certain fields a long time ago, from simple scissors or a pocket calculator to an assembly robot in a factory.
Artificial Intelligence assumes, in basic terms, that machines have the ability to understand the information they obtain and use it to solve problems... But human intelligence does not include only those capabilities; it also includes the ability to be aware: awareness of one's own existence, and awareness of one's actions and their consequences.
Most likely, that level of consciousness can be "programmed" into machines, but I think there is still much to do... Machines can learn to solve problems by themselves, they can develop creativity and language to express themselves, they can process a lot of information and perceive their environment, but are they conscious?
I believe that machines are still very far from that. When machines acquire consciousness, maybe at that moment human beings should start to worry.

Post time: 25-5-2019 17:16:34
Edited by ssangra at 25-5-2019 05:54 PM
Pedro_P 24-5-2019 02:26 AM
Well, a long time ago, machines (with or without intelligence), have surpassed human beings in cer ...

True... but before that happens, we need to begin to look for patterns. Could we be master machines that evolved to the point of consciousness? After all, it may be only some basic rules that make up even what we call our awareness... though I suspect machinery is not the way to go to make AI; it would be gene manipulation instead.


Author | Post time: 26-5-2019 11:58:13 Posted From Mobile Phone
ssangra 25-5-2019 06:46 AM
Edited by ssangra at 25-5-2019 05:54 PM
Hmm, I think that at that point, applying the theory of evolution, we could expect Artificial Intelligence to evolve to such an extreme that some signs of consciousness might spontaneously appear in it.
At that point, machines equipped with AI could become aware of their existence and their situation and decide to act accordingly. They could also develop instincts, and one of the strongest instincts is the survival instinct.
On the other hand, an entity with Artificial Intelligence and consciousness, confined in a computer, isolated and without the possibility of moving or executing physical actions, is probably not a major problem... But if that entity had a mechanical (robotic) body, it could take control of its actions.
Certainly pattern detection would help a lot. The question is: could man himself recognize those patterns, or would he use computer technology to do it?

Post time: 27-5-2019 23:05:27
Edited by ssangra at 27-5-2019 11:25 PM

As you rightly said, and as a corollary, all one needs to do is program in the facts: I must survive... after me, the community must survive... then comes sustenance, and then selective destruction (Brahma for creation, Vishnu for sustenance, and Mahesh or Shiva for destruction). Any life form, whatever it may be, follows these rules.

We are ruled by immobile gametes, which developed a brain and a body on their own to move from place to place as well as across generations. So if a bioengineered robot is created, it can surely reach a stage where it becomes destructive, as well as self-destructive, as we have.

Patterns exist... independent of whether we see them or program them.

Post time: 29-5-2019 10:09:48
Anything is possible.

Post time: 30-5-2019 10:54:56
This is a long-cherished dream of science fiction writers and movie makers... As far as I can tell, this won't happen, since any AI can only go as far as it was originally programmed to. It can harm human interests, but only if it was programmed to prevent one thing early on and cannot differentiate between what's good at different points in time...
But otherwise, an AI taking control of human life is not possible.

Powered by Discuz! X3.4

Copyright © 2001-2025, Tencent Cloud.
