Adonis Diaries


Why the death of Moore’s Law could give birth to more human-like machines

Futurists and AI experts reveal why the demise of Moore’s Law will help us jump from AI to natural machine intelligence


For decades, chip makers have excelled in meeting a recurring set of deadlines to double their chip performance.

These deadlines were set by Moore’s Law and have been in place since the 1960s. But as our demand for smaller, faster and more efficient devices soars, many are predicting the Moore’s Law era might finally be over.
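As a rough illustration of what that doubling schedule implies, here is a minimal back-of-the-envelope sketch in Python. The 1971 starting point (Intel’s first microprocessor, roughly 2,300 transistors) and the two-year doubling period are common statements of the law, used here purely as illustrative assumptions rather than precise industry figures.

```python
# Back-of-the-envelope sketch of Moore's Law: transistor counts doubling
# roughly every two years. Starting point and period are illustrative.

def projected_transistors(year, base_year=1971, base_count=2_300, doubling_period=2):
    """Project a transistor count assuming a fixed doubling period in years."""
    doublings = (year - base_year) / doubling_period
    return base_count * 2 ** doublings

for year in (1971, 1991, 2011, 2021):
    print(year, f"{projected_transistors(year):,.0f}")
```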

Only last month, a report from the International Technology Roadmap for Semiconductors (ITRS) – which includes chip giants Intel and Samsung – claimed that transistors will reach a point where they can shrink no further as soon as 2021.

The companies argue that, by that time, it will no longer be economically viable to make them smaller.

But while the conventional thinking is that the law’s demise would be bad news, it could have its benefits – namely fuelling the rise of AI.

Bassam Jalgha shared this link. August 14 at 11:25am ·

Why the death of Moore’s Law could give birth to more human like machines… Or not.

“If you care about the development of artificial intelligence, you should pray for that prediction to be true,” John Smart, a prominent futurist and writer, told WIRED.

“Moore’s law ending allows us to jump from artificial machine intelligence – a top-down, human-engineered approach – to natural machine intelligence – one that is bottom-up and self-improving.”

Rather than building AIs from explicitly programmed designs, engineers are focusing on self-evolving systems like deep learning, an AI technique modelled on biological systems.

In his Medium series, Smart focuses attention on a new suite of deep learning-powered chatbots, similar to Microsoft’s Cortana and Apple’s Siri, which he expects to emerge as one of the most notable IT developments of the coming years.

“Conversational smart agents, particularly highly personalised ones that I call personal sims, are the most important software advance I can foresee in the coming generation as they promise to be so individually useful to everyone who employs them,” Smart continued.

These may soon integrate into our daily lives as they come to know our preferences and take over routine tasks. Many in Silicon Valley already use these AIs to manage increasingly busy schedules, while others claim they may soon bring an end to apps.

To get there, chatbots will need to become intelligent. As a result, companies are relying on deep learning neural nets: algorithms designed to approximate the way neurons in the brain process information.
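As a loose illustration of that analogy, the sketch below – a toy example, not any particular company’s system – computes one layer of artificial “neurons”: each takes a weighted sum of its inputs and applies a nonlinearity, the basic building block that deep learning networks stack many layers deep. All sizes and values are illustrative assumptions.

```python
import numpy as np

# Toy single layer of artificial "neurons": each neuron computes a weighted
# sum of the inputs plus a bias, then applies a nonlinearity (ReLU here).
# Deep learning networks stack many such layers; all sizes are illustrative.

rng = np.random.default_rng(0)
inputs = rng.normal(size=4)            # four input features
weights = rng.normal(size=(3, 4))      # three neurons, four weights each
biases = np.zeros(3)

activations = np.maximum(0, weights @ inputs + biases)
print(activations)
```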

The challenge for AI engineers, however, is that brain-inspired deep learning requires processing power far beyond today’s consumer chip capabilities.

In 2012, when a Google neural net famously taught itself to recognise cats, the system required the computer muscle of 16,000 processors spread across 1,000 different machines.

More recently, AI researchers have turned to the processing capabilities of GPUs – the chips found in the graphics cards used for video games.

The benefit of GPUs is they allow for more parallel computing, a type of processing in which computational workload is divided among several processors at the same time.

As data processing tasks are chunked into smaller pieces, computers can divide the workload across their processing units. This divide-and-conquer approach is critical to the development of AI.
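A minimal sketch of that divide-and-conquer idea, using Python’s standard multiprocessing pool rather than a GPU: the workload is chunked and handed to several worker processes at once. The task, chunk size and worker count are illustrative assumptions.

```python
from multiprocessing import Pool

# Illustrative divide-and-conquer: chunk a large workload (here, squaring a
# million numbers) and spread the chunks across several worker processes.
# GPUs apply the same idea at far finer grain, across thousands of cores.

def square(x):
    return x * x

if __name__ == "__main__":
    data = range(1_000_000)
    with Pool(processes=4) as pool:
        results = pool.map(square, data, chunksize=10_000)
    print(results[:5], "...", results[-1])
```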

“High-end computing will be all about how much parallelism can be stuffed on a chip,” said Naveen Rao, chief executive of Nervana Systems – a company working to build chips tailor-made for deep learning.

Rao believes the chip industry has relied on a strict definition of chip innovation. “We stopped doing chip architecture exploration and essentially moved everything into software innovation. This can be limiting as hardware can do things that software can’t.”

By piggybacking off the advances of video game technology, AI researchers found a short-term solution, but market conditions are now suitable for companies like Nervana Systems to innovate. “Things like our chip are a product of Moore’s law ending,” says Rao.

Rao’s chip design packages the parallel computing of graphics cards into its hardware, without unnecessary features like caches – a component used to store data on GPUs.

Instead, the chip moves data around very quickly, while leveraging more available computing power. According to Rao, the result will be a chip able to run deep learning algorithms “with much less power per operation and have many more operational elements stuffed on the chip.”

“Individual exponentials [like Moore’s Law] always end,” Smart added. “But when a critical exponential ends, it creates opportunities for new exponentials to emerge.”

For Smart, Moore’s Law ending is old news.

Back in 2005, engineers reached the limits of Dennard scaling: while chips continued to shrink, they started leaking current and getting too hot, forcing chip makers to build multi-core CPUs rather than continuing to shrink them. This heat build-up issue is exactly the sort of challenge that chip designs like Nervana Systems’ promise to address.

“As exponential miniaturisation ended in 2005, we created the opportunity for exponential parallelisation.” For Smart, that’s the new exponential trend to watch out for. We just don’t have a law named for it yet.


Why sarcasm is such a problem in artificial intelligence

“Any computer which could reliably perform this kind of filtering could be argued to have developed a sense of humor.”

Thu 11 Feb 2016

Automatic Sarcasm Detection: A Survey [PDF] outlines ten years of research efforts from groups interested in detecting sarcasm in online sources.

The problem is not an abstract one, nor does it centre on the need for computers to entertain or amuse humans; rather, it concerns the need to recognise that sarcasm in online comments, tweets and other internet material should not be interpreted as sincere opinion.

The need applies both to AIs that must accurately assess archive material or interpret existing datasets, and to the field of sentiment analysis, where a neural network or other AI model seeks to interpret data drawn from publicly posted web material.

Attempts have been made to ring-fence sarcastic data by the use of hash-tags such as #not on Twitter, or by noting the authors who have posted material identified as sarcastic, in order to apply appropriate filters to their future work.
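A minimal sketch of that hashtag heuristic: treat posts carrying markers such as #not or #sarcasm as weakly labelled sarcastic examples and fence them off from the rest. The input format and marker list are illustrative assumptions, not any specific research group’s pipeline.

```python
# Weakly label tweets as sarcastic when they carry marker hashtags such as
# "#not" or "#sarcasm". Marker list and input format are illustrative.

SARCASM_MARKERS = {"#not", "#sarcasm", "#irony"}

def ring_fence(tweets):
    """Split tweets into (likely_sarcastic, other) using hashtag markers."""
    sarcastic, other = [], []
    for tweet in tweets:
        tokens = {token.lower() for token in tweet.split()}
        (sarcastic if tokens & SARCASM_MARKERS else other).append(tweet)
    return sarcastic, other

sarcastic, other = ring_fence([
    "Great, another Monday meeting #not",
    "Lovely weather for a walk today",
])
print(sarcastic, other)
```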

Some research has struggled to quantify sarcasm, since it may not be a discrete property in itself – i.e. indicative of a reverse position to the one that it seems to put forward – but rather part of a wider gamut of data-distorting humour, and may need to be identified as a subset of that in order to be found at all.

Most of the dozens of research projects which have addressed the problem of sarcasm as a hindrance to machine comprehension have studied the problem as it relates to the English and Chinese languages, though some work has also been done in identifying sarcasm in Italian-language tweets, whilst another project has explored Dutch sarcasm.

The new report details the ways that academia has approached the sarcasm problem over the last decade, but concludes that the solution to the problem is not necessarily one of pattern recognition, but rather a more sophisticated matrix that has some ability to understand context.

Any computer which could reliably perform this kind of filtering could be argued to have developed a sense of humor.

Note: For an AI machine to learn, it has to be confronted with genuinely sarcastic people. And this species is a rarity.

Does it matter to Debate Machine Consciousness?

“I think therefore I am.”

“What about thinking? Here I make my discovery: thought exists; it alone cannot be separated from me.

I am; I exist – this is certain. But for how long?

For as long as I am thinking; for perhaps it could also come to pass that if I were to cease all thinking I would then utterly cease to exist.

At this time I admit nothing that is not necessarily true.

I am therefore precisely nothing but a thinking thing; that is a mind, or intellect, or understanding, or reason – words of whose meanings I was previously ignorant.

Yet I am a true thing and am truly existing; but what kind of thing? I have said it already: a thinking thing.” – René Descartes 

In 1637, when he published The Discourse on Method, René Descartes unleashed a philosophical breakthrough, which later became a fundamental principle that much of modern philosophy now stands upon.

Nearly 400 years later, if a machine says these five powerful words, “I think therefore I am,” does the statement still hold true?

If so, who then is this “I” that is doing the thinking?

In a recent talk, Ray Kurzweil showed the complexity of measuring machine consciousness: “We can’t just ask an entity, ‘Are you conscious?’ because we can ask entities in video games today, and they’ll say, ‘Yes, I’m conscious and I’m angry at you.’

But we don’t believe them because they don’t have the subtle cues that we associate with really having that subjective state. My prediction that computers will pass the Turing test and be indistinguishable from humans by 2029 is that they really will have those convincing cues.”

If artificial intelligence becomes indistinguishable from human intelligence, how then will we determine which entities are, or are not, conscious—specifically, when consciousness is not quantifiable?

Though the word consciousness has many commonly held definitions, this question can be answered quite differently when filtered through the many existing philosophical and religious frameworks.

Two particularly conflicting viewpoints are the common Eastern and Western notions of what exactly consciousness is—and how it comes to exist.

At the heart of many Eastern philosophies is the belief that consciousness is our fundamental reality; it is what brings the physical world into existence.

By contrast, the Western notion of consciousness holds that it arises only at a certain level of development.

Looking at these two opposing belief systems, we can see that answering “What and who is conscious?” can elicit drastically different responses.

“Fundamentally, there’s no scientific experiment that doesn’t have philosophical assumptions about the nature of consciousness,” Kurzweil says.

We’d like to have an objective scientific understanding of consciousness, but such a view remains elusive.

“Some scientists say, ‘Well, it’s just an illusion. We shouldn’t waste time with it,’” Kurzweil says. “But that’s not my view because, as I say, morality is based on consciousness.”

Why does all this matter?

Because as technological evolution begins to intersect with our biological evolution as a species, the lines between “human” and “non-human” entities will blur more than humanity has ever encountered, and a new era of identity, with its surrounding ethics and philosophy, will take center stage.

What happens if a non-human conscious entity travels into another region of the world where its consciousness is not believed to be real?

Or more broadly, how will we treat intelligent machines ethically as their intelligence approaches our own?

If morality is based on consciousness, does a machine become an “I” if it has one?

(If Morality is a set of behaviors disseminated by the powers-that-be, and communities are coaxed to follow suit, then an artificially intelligent consciousness is political by nature)

Note: It is not a matter of morality: it is this lapse of time of uncertainty that we need in order to determine whether our act is Good or Bad. Give a machine the illusion of needing some time to decide and you can be fooled.

singularityhub.com

Ban on artificial intelligence (AI) and autonomous weapons: Killer Robots?

Over 1,000 high-profile artificial intelligence experts and leading researchers have signed an open letter warning of a “military artificial intelligence arms race” and calling for a ban on “offensive autonomous weapons”.

The letter, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, was signed by Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and professor Stephen Hawking along with 1,000 AI and robotics researchers.

The letter states: “AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

The authors argue that AI can be used to make the battlefield a safer place for military personnel, but that offensive weapons that operate on their own would lower the threshold of going to battle and result in greater loss of human life.

Should one military power start developing systems capable of selecting targets and operating autonomously without direct human control, it would start an arms race similar to the one for the atom bomb, the authors argue.

Unlike nuclear weapons, however, AI requires no specific hard-to-create materials and will be difficult to monitor.

“The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” said the authors.

Toby Walsh, professor of AI at the University of New South Wales, said:

“We need to make a decision today that will shape our future and determine whether we follow a path of good. We support the call by a number of different humanitarian organisations for a UN ban on offensive autonomous weapons, similar to the recent ban on blinding lasers.”

Musk and Hawking have warned that AI is “our biggest existential threat” and that the development of full AI could “spell the end of the human race”.

But others, including Wozniak, have recently changed their minds on AI, with the Apple co-founder saying that robots would be good for humans, making them like the “family pet and taken care of all the time”.

At a UN conference in Geneva in April discussing the future of weaponry, including so-called “killer robots”, the UK opposed a ban on the development of autonomous weapons, despite calls from various pressure groups, including the Campaign to Stop Killer Robots.


Andrew Bossone shared this link

No killer robots please

More than 1,000 experts and leading robotics researchers sign open letter warning of military artificial intelligence arms race
theguardian.com | By Samuel Gibbs

 

It’s No Myth: Robots and Artificial Intelligence Will Erase Jobs in Nearly Every Industry

With the unemployment rate falling to 5.3 percent, the lowest in seven years, policy makers are heaving a sigh of relief. (Every time, these short-term sighs of relief fool the people)

Indeed, with the technology boom in progress, there is a lot to be optimistic about.

Manufacturing will be returning to U.S. shores with robots doing the job of Chinese workers;

American carmakers will be mass-producing self-driving electric vehicles;

technology companies will develop medical devices that greatly improve health and longevity;

we will have unlimited clean energy and 3D print our daily needs.

The cost of all of these things will plummet and make it possible to provide for the basic needs of every human being.

(No sweat hoping. And what of the people who need a job?)

I am talking about technology advances that are happening now, which will bear fruit in the 2020s. (How about in 50 years if no major calamities hit us before then?)

But policy makers will have a big new problem to deal with: the disappearance of human jobs.

Not only will there be fewer jobs for people doing manual work, the jobs of knowledge workers will also be replaced by computers.

(So even educated people will have no jobs?)

Almost every industry and profession will be impacted and this will create a new set of social problems — because most people can’t adapt to such dramatic change. (Why should they?)

If we can develop the economic structures necessary to distribute the prosperity we are creating (here we go again), most people will no longer have to work to sustain themselves. They will be free to pursue other creative endeavors (assuming that the opportunities are there?)

The problem is that without jobs, they will not have the dignity, social engagement, and sense of fulfillment that comes from work.

The life, liberty and pursuit of happiness that the constitution entitles us to won’t be through labor, it will have to be through other means. (How about we start thinking of these other means?)

It is imperative that we understand the changes that are happening and find ways to cushion the impacts.

The technology elite who are leading this revolution will reassure you that there is nothing to worry about because we will create new jobs just as we did in previous centuries when the economy transitioned from agrarian to industrial to knowledge-based. (kids were working in the mining industries, in tunnels deep inside the bowels of the earth)

Tech mogul Marc Andreessen has called the notion of a jobless future a “Luddite fallacy,” referring to past fears that machines would take human jobs away. Those fears turned out to be unfounded because we created newer and better jobs and were much better off. (Please expand on these new jobs and their cumulative trauma disorders)

True, we are living better lives. But what is missing from these arguments is the timeframe over which the transitions occurred.

The industrial revolution unfolded over centuries. Today’s technology revolutions are happening within years.

We will surely create a few intellectually-challenging jobs, but we won’t be able to retrain the workers who lose today’s jobs.

The working people will experience the same unemployment and despair that their forefathers did. It is they who we need to worry about. (Which means 6 billion people?)

The first large wave of unemployment will be caused by self-driving cars. (if the technology is here, who in his right mind will give up control over the car?)

These will provide tremendous benefit by eliminating traffic accidents and congestion, making commuting time more productive, and reducing energy usage. But they will eliminate the jobs of millions of taxi and truck drivers and delivery people.

Fully-automated robotic cars are no longer in the realm of science fiction; you can see Google’s cars on the streets of Mountain View, Calif. There are also self-driving trucks on our highways (they should be banned) and self-driving tractors on farms.

Uber just hired away dozens of engineers from Carnegie Mellon University to build its own robotic cars. It will surely start replacing its human drivers as soon as its technology is ready — later in this decade (And give away the profit it is amassing by hiring drivers?).

As Uber CEO Travis Kalanick reportedly said in an interview, “The reason Uber could be expensive is you’re paying for the other dude in the car. When there is no other dude in the car, the cost of taking an Uber anywhere is cheaper. Even on a road trip. (Assuming the clients will prefer Not having this dude on the wheel)”

The dude in the driver’s seat will go away.

Manufacturing will be the next industry to be transformed.

Robots have, for many years, been able to perform surgery, milk cows, do military reconnaissance and combat, and assemble goods. But they weren’t dexterous enough to do the type of work that humans do in installing circuit boards.

The latest generation of industrial robots by ABB of Switzerland and Rethink Robotics of Boston can, however, do this. ABB’s robot, Yumi, can even thread a needle. It costs only $40,000.

China, fearing the demise of its industry, is setting up fully-automated robotic factories in the hope that by becoming more price-competitive, it can continue to be the manufacturing capital of the world.

But its advantage only holds up as long as the supply chains are in China and shipping raw materials and finished goods over the oceans remains cost-effective.

Don’t forget that our robots are as productive as theirs are; they too don’t join labor unions (yet) and will work around the clock without complaining.

Supply chains will surely shift and the trickle of returning manufacturing will become a flood.

But there will be few jobs for humans once the new, local factories are built.

With advances in artificial intelligence, any job that requires the analysis of information can be done better by computers. This includes the jobs of physicians, lawyers, accountants, and stock brokers.

We will still need some humans to interact with the ones who prefer human contact, but the grunt work will disappear. The machines will need very few humans to help them.

This jobless future will surely create social problems — but it may be an opportunity for humanity to uplift itself. (That is the biggest lie that has been perpetrated this century)

Why do we need to work 40, 50, or 60 hours a week, after all? Just as we were better off leaving the long and hard agrarian and factory jobs behind, we may be better off without the mindless work at the office.

What if we could be working 10 or 15 hours per week from anywhere we want and have the remaining time for leisure, social work, or attainment of knowledge?

Yes, there will be a booming tourism and recreation industry and new jobs will be created in these — for some people. (Assuming the money is there for this entertainment)

There are as many things to be excited about as to fear.

If we are smart enough to develop technologies that solve the problems of disease, hunger, energy, and education, we can — and surely will — develop solutions to our social problems. (Then why is famine still harvesting millions of people?)

But we need to start by understanding where we are headed and prepare for the changes.

We need to get beyond the claims of a Luddite fallacy — to a discussion about the new future.


Vivek Wadhwa is a fellow at Rock Center for Corporate Governance at Stanford University, director of research at Center for Entrepreneurship and Research Commercialization at Duke, and distinguished fellow at Singularity University.

His past appointments include Harvard Law School, University of California Berkeley, and Emory University. Follow him on Twitter @wadhwa.

Though robots and artificial intelligence will erase jobs and bring social challenges, they may also provide an opportunity for humanity to uplift itself.
singularityhub.com
