Adonis Diaries

Death of Moore’s Law: No more doubling of chip power every two years?

Posted on: August 16, 2016

Why the death of Moore’s Law

could give birth to more human-like machines

Futurists and AI experts reveal why the demise of Moore’s Law will help us jump from artificial machine intelligence to natural machine intelligence


For decades, chip makers have excelled in meeting a recurring set of deadlines to double their chip performance.

These deadlines were set by Moore’s Law and have been in place since the 1960s. But as our demand for smaller, faster and more efficient devices soars, many are predicting the Moore’s Law era might finally be over.
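To get a feel for what that recurring deadline implies, here is a minimal Python sketch of the doubling curve. The 1971 Intel 4004 figure is a commonly cited historical starting point; the projection is purely illustrative, not a claim about any particular product.

```python
# Illustrative sketch of the exponential growth Moore's Law describes:
# transistor counts doubling roughly every two years.
def transistors(year, base_year=1971, base_count=2_300, doubling_period=2):
    """Estimate transistor count, assuming a doubling every `doubling_period` years."""
    doublings = (year - base_year) / doubling_period
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```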

Only last month, a report from the International Technology Roadmap for Semiconductors (ITRS) – which includes chip giants Intel and Samsung – claimed transistors could reach the point where they can shrink no further as soon as 2021.

The companies argue that, by that time, it will no longer be economically viable to make them smaller.

But while the conventional thinking is that the law’s demise would be bad news, it could have its benefits – namely fuelling the rise of AI.


“If you care about the development of artificial intelligence, you should pray for that prediction to be true,” John Smart, a prominent futurist and writer, told WIRED.

“Moore’s law ending allows us to jump from artificial machine intelligence – a top-down, human-engineered approach – to natural machine intelligence, one that is bottom-up and self-improving.”

Because these AIs no longer emerge from explicitly programmed designs, engineers are focusing on building self-evolving systems such as deep learning, an AI technique modelled on biological systems.

In his Medium series, Smart focuses attention on a new suite of deep learning-powered chatbots, similar to Microsoft’s Cortana and Apple’s Siri, which he expects to be among the most notable IT developments of the coming years.

“Conversational smart agents, particularly highly personalised ones that I call personal sims, are the most important software advance I can foresee in the coming generation, as they promise to be so individually useful to everyone who employs them,” Smart continued.

These agents may soon become part of our daily lives as they come to know our preferences and take over routine tasks. Many in Silicon Valley already use such AIs to manage increasingly busy schedules, while others claim they may soon bring an end to apps.

To get there, chatbots will need to become more intelligent. As a result, companies are relying on deep learning neural nets: algorithms designed to approximate the way neurons in the brain process information.
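As a minimal illustration of what such a net is built from, here is a sketch of a single artificial “neuron”: it weights its inputs, sums them, and passes the result through a non-linearity. The inputs, weights and bias below are arbitrary illustrative values, not part of any system mentioned in the article.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of inputs passed
    through a sigmoid non-linearity (a rough analogue of a firing rate)."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid squashes output to (0, 1)

# Illustrative values only: three input signals, three learned weights.
print(neuron(inputs=[0.5, 0.9, -0.2], weights=[0.8, -0.4, 0.3], bias=0.1))
```

A deep learning net stacks many layers of such units and adjusts the weights from data rather than from explicit programming.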

The challenge for AI engineers, however, is that brain-inspired deep learning requires processing power far beyond today’s consumer chip capabilities.

In 2012, when a Google neural net famously taught itself to recognise cats, the system required the computing muscle of 16,000 processors spread across 1,000 different machines.

More recently, AI researchers have turned to the processing capabilities of GPUs, the chips found on the graphics cards used for video games.

The benefit of GPUs is that they allow for more parallel computing, a type of processing in which the computational workload is divided among many processing units working at the same time.

As data-processing tasks are chunked into smaller pieces, computers can spread the workload across their processing units. This divide-and-conquer approach is critical to the development of AI.
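Here is a minimal sketch of that divide-and-conquer idea, using Python’s standard multiprocessing module rather than a GPU, purely for illustration: the data is split into chunks, each chunk is handed to a worker, and the partial results are combined at the end.

```python
from multiprocessing import Pool

def work(chunk):
    """A stand-in for one slice of a heavy numeric workload."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Divide the data into four chunks and conquer them in parallel.
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        partial_sums = pool.map(work, chunks)   # each worker handles one chunk
    print(sum(partial_sums))                    # combine the partial results
```

A GPU applies the same principle at a far finer grain, running thousands of such small workers at once.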

“High-end computing will be all about how much parallelism can be stuffed on a chip,” said Naveen Rao, chief executive of Nervana Systems – a company working to build chips tailor-made for deep learning.

Rao believes the chip industry has relied on too strict a definition of chip innovation. “We stopped doing chip architecture exploration and essentially moved everything into software innovation. This can be limiting, as hardware can do things that software can’t.”

By piggybacking off the advances of video game technology, AI researchers found a short-term solution, but market conditions are now suitable for companies like Nervana Systems to innovate. “Things like our chip are a product of Moore’s law ending,” says Rao.

Rao’s chip design packages the parallel computing of graphics cards into its hardware, without features he regards as unnecessary, such as the caches used to store data on GPUs.

Instead, the chip moves data around very quickly, while leveraging more available computing power. According to Rao, the result will be a chip able to run deep learning algorithms “with much less power per operation and have many more operational elements stuffed on the chip.”

“Individual exponentials [like Moore’s Law] always end,” Smart added. “But when a critical exponential ends, it creates opportunities for new exponentials to emerge.”

For Smart, Moore’s Law ending is old news.

Back in 2005, engineers hit the limits of Dennard scaling: although transistors kept shrinking, chips started leaking current and running too hot, forcing chip makers to build multi-core CPUs rather than keep pushing clock speeds higher. This heat build-up is exactly the sort of challenge that chip designs like Nervana’s promise to address.
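A rough, illustrative calculation shows why Dennard scaling mattered. Using the standard textbook relations (not figures from the article): dynamic power scales roughly with capacitance × voltage² × frequency, so as long as supply voltage could shrink along with transistor size, power density stayed flat; once voltage stopped scaling, the same shrink made chips hotter.

```python
# Illustrative sketch of classic Dennard scaling (textbook relations,
# not figures from the article). Scale every linear dimension by 1/k:
#   capacitance C -> C/k, voltage V -> V/k, frequency f -> f*k,
#   transistor density -> k^2.
# Dynamic power per transistor ~ C * V^2 * f, so power density stays ~constant.

def power_density(k, voltage_scales=True):
    C = 1.0 / k                              # capacitance shrinks with feature size
    V = 1.0 / k if voltage_scales else 1.0   # post-2005: voltage stops scaling
    f = k                                    # frequency rises with smaller transistors
    density = k ** 2                         # more transistors per unit area
    return C * V ** 2 * f * density          # relative power per unit area

print(power_density(2, voltage_scales=True))   # ~1.0: the classic Dennard era
print(power_density(2, voltage_scales=False))  # ~4.0: chips run hotter
```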

“As exponential miniaturisation ended in 2005, we created the opportunity for exponential parallelisation.” For Smart, that’s the new exponential trend to watch out for. We just don’t have a law named for it yet.
