Adonis Diaries

Caution: Artificial Intelligence may become a Frankenstein

Posted on: January 30, 2015

 

Caution: Artificial Intelligence is a Frankenstein

In the late 1980s, Artificial Intelligence programs (often called expert systems) relied on practicing experts in practical fields to extract the “how to”, and how to proceed when a problem hits the system, through a series of “What if” questions. These programs were designed in anticipation of many experts going into retirement, and of the need to train newcomers at the least cost while hiring the minimum number of new employees.
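To make the idea concrete, the sketch below shows in Python the shape such a rule-based expert system typically took: a handful of if-then rules captured from practitioners, chained together until a recommendation emerges. The rule names and facts here are invented for illustration and do not come from any particular system.

# Minimal sketch of a 1980s-style rule-based expert system.
# The rules and facts are hypothetical examples of knowledge that
# would have been extracted from practicing experts in "what if"
# interviews; they are not taken from any real system.

rules = [
    # (conditions that must all hold, conclusion to add)
    ({"pump_pressure_low", "valve_open"}, "suspect_pump_seal_leak"),
    ({"suspect_pump_seal_leak", "vibration_high"}, "replace_pump_seal"),
    ({"pump_pressure_low", "valve_closed"}, "open_valve_first"),
]

def infer(facts):
    """Forward-chain over the rules until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # fire the rule
                changed = True
    return facts

# A "what if" session: the operator reports symptoms, the system
# returns the experts' recorded diagnosis and recommended action.
print(infer({"pump_pressure_low", "valve_open", "vibration_high"}))
# -> includes 'suspect_pump_seal_leak' and 'replace_pump_seal'

The point of such systems was to preserve the retiring experts’ reasoning as explicit rules, so that newcomers could query the program instead of the expert.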

Artificial Intelligence has progressed and branched into many fields, and this time around it is the professionals in labs who are designing the sophisticated software.

An open letter calling for caution to ensure intelligent machines do not run beyond our control has been signed by a large and growing number of people, including some of the leading figures in artificial intelligence.

“There is now a broad consensus that (AI) research is progressing steadily, and that its impact on society is likely to increase,” the letter said.

“The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable,” it added.

“Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”

How to handle the prospect of autonomous weapons that might kill indiscriminately, the liabilities of self-driving cars, and the prospect of losing control of AI systems so that they no longer align with human wishes were among the concerns raised in the letter that signatories said deserve further research.

Scientists urge artificial intelligence safety focus

Jan 12, 2015

Roboy, a humanoid robot developed at the University of Zurich, at the 2014 CeBIT technology trade fair on March 9, 2014 in Hanover, Germany

Scientists and Engineers Warn Of The Dangers Of Artificial Intelligence

January 13, 2015 | by Stephen Luntz

Fears of our creations turning on us stretch back at least as far as Frankenstein, and films such as The Terminator gave us a whole new language to discuss what would happen when robots stopped taking orders.

However, as computers beat (most of) us at Jeopardy and self-driving cars appear on our roads, we may be getting closer to the point where we will have to tackle these issues.

In December, Stephen Hawking kicked off a renewed debate on the topic.

As someone whose capacity to communicate depends on advanced computer technology, Hawking can hardly be dismissed as a Luddite, and his thoughts tend to attract attention.

The letter was initiated by the Future of Life Institute, a volunteer organization that describes itself as “working to mitigate existential risks facing humanity.” The letter notes:

“As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research.

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable.

Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”

The authors add that “our AI systems must do what we want them to do,” and have set out research priorities they believe will help “maximize the societal benefit of AI.”

Anyone can sign, and at the time of this writing well over a thousand people have done so. While many did not indicate an affiliation, names such as Elon Musk and Hawking himself are easily recognized.

Many of the other names on the list are leading researchers in IT or philosophy, including the IBM team behind the Watson supercomputer.

So much intellectual and financial heft should give them good prospects for conducting research in the areas proposed. Musk has said he invests in companies researching AI in order to keep an eye on them.

Musk worries that even if most researchers behave responsibly, in the absence of international regulation a single rogue nation or corporation could produce self-replicating machines whose priorities might be very different from humanity’s, and once such industries become established, they become resistant to control.
