Adonis Diaries

Posts Tagged ‘Artificial Intelligence’

What Is Natural Language Processing And What Is It Used For?

Terence Mills 

Terence Mills, CEO of AI.io and Moonshot, is an AI pioneer and digital technology specialist. Connect with him about AI or mobile on LinkedIn.

Artificial intelligence (AI) is changing the way we look at the world. AI “robots” are everywhere. (Mostly in Japan and China)

From our phones to devices like Amazon’s Alexa, we live in a world surrounded by machine learning.

Google, Netflix, data companies, video games and more, all use AI to comb through large amounts of data. The end result is insights and analysis that would otherwise either be impossible or take far too long.

It’s no surprise that businesses of all sizes are taking note of large companies’ success with AI and jumping on board. Not all AI is created equal in the business world, though. Some forms of artificial intelligence are more useful than others.

Today, I’m touching on something called natural language processing (NLP).

It’s a form of artificial intelligence that focuses on analyzing human language to draw insights, create advertisements, help you text (yes, really) and more. (And what of body language?)

But Why Natural Language Processing?

NLP is an emerging technology that drives many forms of AI you’re used to seeing.

The reason I’ve chosen to focus on this technology instead of, say, AI for math-based analysis, is the increasingly broad range of applications for NLP.

Think about it this way.

Every day, humans say thousands of words that other humans interpret to do countless things. At its core, it’s simple communication, but we all know words run much deeper than that. (That’s the function of slang in community)

There’s context that we derive from everything someone says, whether they imply something with their body language or in how often they mention something.

While NLP doesn’t focus on voice inflection, it does draw on contextual patterns. (Meaning: currently it doesn’t care about the emotions?)

This is where it gains its value (As if in communication people lay out the context first?).

Let’s use an example to show just how powerful NLP is when used in a practical situation. When you’re typing on an iPhone, like many of us do every day, you’ll see word suggestions based on what you type and what you’re currently typing. That’s natural language processing in action.

It’s such a little thing that most of us take for granted, and have been taking for granted for years, but that’s why NLP becomes so important. Now let’s translate that to the business world.

Say a company is trying to decide how best to advertise to its users. It can use Google to find common search terms that its users type when searching for its product. (In a nutshell, that’s the most urgent usage of NLP?)

NLP then allows for a quick compilation of the data into terms obviously related to the brand and terms it might not expect. Capitalizing on the uncommon terms could give the company the ability to advertise in new ways.

So How Does NLP Work?

As mentioned above, natural language processing is a form of artificial intelligence that analyzes human language. It takes many forms, but at its core, the technology helps machines understand, and even communicate with, human speech.

But understanding NLP isn’t the easiest thing. It’s a very advanced form of AI that’s only recently become viable. That means that not only are we still learning about NLP but also that it’s difficult to grasp.

I’ve decided to break down NLP in layman’s terms. I might not touch on every technical definition, but what follows is the easiest way to understand how natural language processing works.

The first step in NLP depends on the application of the system. Voice-based systems like Alexa or Google Assistant need to translate your words into text. That’s done (usually) using the Hidden Markov Models system (HMM).

The HMM uses math models to determine what you’ve said and translate that into text usable by the NLP system. Put in the simplest way, the HMM listens to 10- to 20-millisecond clips of your speech and looks for phonemes (the smallest unit of speech) to compare with pre-recorded speech.
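To make that concrete, here is a toy sketch of the decoding idea: a bare-bones Viterbi search that picks the most likely phoneme sequence for a series of acoustic frames. (A hedged illustration only: real recognizers use far larger models, and every phoneme, frame and probability below is invented.)

```python
# Toy Viterbi decoding: pick the most probable phoneme sequence for a
# series of short acoustic "frames". All values are made up for the sketch.

def viterbi(frames, phonemes, start_p, trans_p, emit_p):
    """Return the most probable phoneme sequence for the observed frames."""
    # Best (probability, path) ending in each phoneme after the first frame.
    best = {ph: (start_p[ph] * emit_p[ph][frames[0]], [ph]) for ph in phonemes}
    for frame in frames[1:]:
        best = {
            ph: max(
                (prob * trans_p[prev][ph] * emit_p[ph][frame], path + [ph])
                for prev, (prob, path) in best.items()
            )
            for ph in phonemes
        }
    return max(best.values())[1]

phonemes = ["k", "ae", "t"]
frames = ["f1", "f2", "f3"]  # stand-ins for 10- to 20-millisecond clips
start_p = {"k": 0.8, "ae": 0.1, "t": 0.1}
trans_p = {"k":  {"k": 0.1, "ae": 0.8, "t": 0.1},
           "ae": {"k": 0.1, "ae": 0.1, "t": 0.8},
           "t":  {"k": 0.3, "ae": 0.3, "t": 0.4}}
emit_p = {"k":  {"f1": 0.7, "f2": 0.2, "f3": 0.1},
          "ae": {"f1": 0.2, "f2": 0.7, "f3": 0.1},
          "t":  {"f1": 0.1, "f2": 0.2, "f3": 0.7}}
print(viterbi(frames, phonemes, start_p, trans_p, emit_p))  # ['k', 'ae', 't']
```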

Next is the actual understanding of the language and context. Each NLP system uses slightly different techniques, but on the whole, they’re fairly similar. The systems try to break each word down into its part of speech (noun, verb, etc.).

This happens through a series of coded grammar rules that rely on algorithms that incorporate statistical machine learning to help determine the context of what you said.
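As a rough illustration of that step, here is a sketch using NLTK’s off-the-shelf statistical tagger, one common tool but not necessarily what any given system uses (assumes `pip install nltk` and a first-run download of its models):

```python
# Part-of-speech tagging with NLTK's statistical tagger (one possible tool).
import nltk

nltk.download("punkt", quiet=True)                       # tokenizer model
nltk.download("averaged_perceptron_tagger", quiet=True)  # POS tagger model

sentence = "Suggest the next word as I type on my phone"
tokens = nltk.word_tokenize(sentence)
print(nltk.pos_tag(tokens))
# e.g. [('Suggest', 'VB'), ('the', 'DT'), ('next', 'JJ'), ('word', 'NN'), ...]
```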

If we’re not talking about speech-to-text NLP, the system just skips the first step and moves directly into analyzing the words using the algorithms and grammar rules.

The end result is the ability to categorize what is said in many different ways. Depending on the underlying focus of the NLP software, the results get used in different ways.

For instance, an SEO application could use the decoded text to pull keywords associated with a certain product.
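A minimal sketch of that keyword idea, building on the tagger above: keep the nouns from the decoded text and rank them by frequency. (Real SEO tools are far more sophisticated; the sample text is invented.)

```python
# Crude keyword extraction: keep nouns from tagged text, rank by frequency.
from collections import Counter
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

text = "I love this blender. The blender motor is quiet and the jar is huge."
tagged = nltk.pos_tag(nltk.word_tokenize(text.lower()))
nouns = [word for word, tag in tagged if tag.startswith("NN")]
print(Counter(nouns).most_common(3))  # e.g. [('blender', 2), ('motor', 1), ...]
```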

Semantic Analysis

When explaining NLP, it’s also important to break down semantic analysis. It’s closely related to NLP and one could even argue that semantic analysis helps form the backbone of natural language processing.

Semantic analysis is how NLP AI interprets human sentences logically. When the HMM method breaks sentences down into their basic structure, semantic analysis helps the process add context.

For instance, if an NLP program looks at the word “dummy” it needs context to determine if the text refers to calling someone a “dummy” or if it’s referring to something like a car crash “dummy.”

If the HMM method breaks down text and NLP allows for human-to-computer communication, then semantic analysis allows everything to make sense contextually.
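One classic (if dated) semantic-analysis technique along these lines is the Lesk algorithm, which picks the dictionary sense whose definition best overlaps the surrounding words. A hedged sketch using NLTK and WordNet; the output is rough, but it shows the idea:

```python
# Word-sense disambiguation with the Lesk algorithm via NLTK and WordNet.
import nltk
from nltk import word_tokenize
from nltk.wsd import lesk

nltk.download("punkt", quiet=True)
nltk.download("wordnet", quiet=True)

for text in ("Don't call your brother a dummy",
             "The crash test dummy hit the windshield"):
    sense = lesk(word_tokenize(text), "dummy")  # returns a WordNet synset
    print(sense, "-", sense.definition() if sense else "no sense found")
```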

Without semantic analysis, we wouldn’t have nearly the level of AI that we enjoy. As the process develops further, we can only expect NLP to benefit.

NLP And More

As NLP develops, we can expect to see even better human-to-AI interaction. Devices like Google’s Assistant and Amazon’s Alexa, which are now making their way into our homes and even cars, are showing that AI is here to stay.

The next few years should see AI technology advance even more, with the global AI market expected to push $60 billion by 2025. Needless to say, you should keep an eye on AI.

Cost of a Lemonade stand, Artificial Intelligence program

Three decades ago, I audited an Artificial Intelligence programming course. The professor never programmed one, but it was the rage.

The prof was candid and said: “The only way to learn this kind of programming is by doing it. The engine was there to logically arrange the ‘if this, then do that’ rules in order to answer a technical problem. Nothing to it.”

I failed even to try a very simple program: I had a heavy workload and didn’t have the passion for any engineering project at the time.

I cannot say that I know Artificial Intelligence, regardless of the many fancy technical and theoretical books I read on the subject.

Studying entrepreneurship without doing it is like studying the appreciation of music without listening to it.

(Actually, I did study many subject matters, including those supposed to be the practical ones, but failed to practice any of the skills. My intention was to stay away from theoretical subjects, and I ended up sticking to the theories. For example, I enrolled in Industrial Engineering, thinking it was mostly a hands-on discipline. Wrong: it was mostly theoretical, simply because the university lacked labs, technical staff and machinery.)

The cost of setting up a lemonade stand (or whatever metaphorical equivalent you dream up) is almost 100% internal. Until you confront the fear and discomfort of being in the world and saying, “here, I made this,” it’s impossible to understand anything at all about what it means to be an entrepreneur. Or an artist.

Never enough

There’s never enough time to be as patient as we need to be.

Not enough slack to focus on the long-term, too much urgency in the now to take the time and to plan ahead.

That urgent signpost just ahead demands all of our intention (and attention), and we decide to invest in “down the road,” down the road.

It’s not only more urgent, but it’s easier to run to the urgent meeting than it is to sit down with a colleague and figure out the truth of what matters and the why of what’s before us.

And there’s never enough money to easily make the investments that matter.

Not enough surplus in the budget to take care of those that need our help, too much on our plate to be generous right now.

The short term bills make it easy to ignore the long-term opportunities.

Of course, the organizations that get around the universal and insurmountable problems of not enough time and not enough money are able to create innovations, find resources to be generous and prepare for a tomorrow that’s better than today. It’s not easy, not at all, but probably (okay, certainly) worth it.

We’re going to spend our entire future living in tomorrow—investing now, when it’s difficult, is the single best moment.

Posted by Seth Godin on March 11, 2013

How fast are Robotics and Artificial Intelligence progressing?

And They Are Progressing Fast

Google launched an initiative to improve how users work with artificial intelligence

  • The research initiative will involve collaborations with people in multiple Google product groups, as well as professors from Harvard and MIT.
  • More informative explanations of recommendations could result from the research over time.
Monday, 10 Jul 2017, 12:00 PM ET

Google CEO Sundar Pichai speaks during Google I/O 2016 at Shoreline Amphitheatre. (Photo: Justin Sullivan | Getty Images)

Alphabet on Monday said it has kicked off a new research initiative aimed at improving human interaction with artificial intelligence systems.

The People + AI Research (PAIR) program currently encompasses a dozen people who will collaborate with Googlers in various product groups — as well as outsiders like Harvard University professor Brendan Meade and Massachusetts Institute of Technology professor Hal Abelson.

The research could eventually lead to refinements in the interfaces of the smarter components of some of the world’s most popular apps. And Google’s efforts here could inspire other companies to adjust their software, too.

“One of the things we’re going to be looking into is this notion of explanation — what might be a useful on-time, on-demand explanation about why a recommendation system did something it did,” Google Brain senior staff research scientist Fernanda Viegas told CNBC in an interview.

The PAIR program takes inspiration from the concept of design thinking, which highly prioritizes the needs of people who will use the products being developed.

While end users — such as YouTube’s 1.5 billion monthly users — can be the target of that, the research is also meant to improve the experience of working with AI systems for AI researchers, software engineers and domain experts as well, Google Brain senior staff research scientist Martin Wattenberg told CNBC.

The new initiative fits in well with Google’s increasing focus on AI.

Google CEO Sundar Pichai has repeatedly said the world is transitioning from being mobile-first to AI-first, and the company has been taking many steps around that thesis.

Recently, for example, Google formed a venture capital group to invest in AI start-ups.

Meanwhile Amazon, Apple, Facebook and Microsoft have been active in AI in the past few years as well.

The company implemented a redesign for several of its apps in 2011 and in more recent years has been sprucing up many of its properties with its material design principles.

In 2016, John Maeda, then the design partner at Kleiner Perkins Caufield & Byers, pointed out in his annual report on design in technology that Google had been perceived as improving the most in design.

What is new is that Googlers are trying to figure out how to improve design specifically for AI components. And that’s important because AI is used in a whole lot of places around Google apps, even if you might not always realize it.

Video recommendations in YouTube, translations in Google Translate, article suggestions in the Google mobile app and even Google search results are all enhanced with AI.

Note: with no specific examples to understand what Justin is talking about, consider this article as free propaganda for Google.

Can a Robot emulate human emotions? That should not be the question

A robot programmed with an artificial intelligence that can learn how to love and express emotions is feasible, and highly welcomed.

A child robot like David can acquire and follow the various stages of kids’ emotional development, all the way to adulthood.

The question is why scientists should invest time and energy creating robots that would exacerbate the calamities we currently experience and witness from the consequences and trials of human emotions and love.

Have we not gotten enough of the negative jealousy that generates serious pain, frustration, beatings, castration, killing…?

It is getting evident that parents will no longer enjoy adequate quality time and opportunities to care full-time for nurturing their kids.

A kid nurturing robot at home will be the best invention for the stability and healthy emotional development of isolated kids in the future…

If robots have to convey emotions and feelings, they had better extend proper nurturing examples that kids at home may emulate…

Robots must learn to listen to the kids, ask questions, circumvent human shortcomings in failing to communicate, and overcome the tendency of kids to build negative fictitious myths and role-played empathy projected into relationships…

Steven Spielberg’s movie “AI” investigated the limits of man and machine confronted with these ineluctable problems:

1. The child’s separation from family members, particularly the mother’s early emotional attachment… The moment we discover that our mother is Not perfect and our father is a coward…

2. The moment it dawns on the child that we are Not unique, perfect, really loved… as we wished we should be…

3. The moment we realize that we are no longer the center of the universe and that the community is too busy to care for our future…

4. The moment we accept that we are “All alone” and we have to fend for our health, safety, mental sanity…

5. The moment we feel that we were left bare and unprepared to face the desolate world around us…

Should the kid robot replace the myth of the “Blue Fairy”? This fairy is supposed to:

1. Heal the torn parts in the separation with family members…

2. Render possible what we came to learn as irreversible, irreparable, and almost unfeasible…?

3. Convince us that there is always a person out there who will love us, be a true friend for life

4. Bring our way this person who suffered and felt wounded as we are…

5. Keep at bay those cannibals, ever ready to sacrifice man and animal under the pretense of “celebrating life.”

A child robot with unconditional devotion, soft-spoken, cultured, patient, and willing to listen to our lucubrations…

The happy ending that teaches us to grasp and grab on the fleeting moments of rich happiness, to taste the powerful instants of tenderness…

Freed at last from illusion, myths and these comfortable peaceful world views we thought we had acquired in childhood…

We do live on the assumption of recovering what we had lost, learning that what we lost “Never existed” in the first place…

At least, a compassionate kid robot would extend, now and then, at critical difficult moments, a glimpse of our childhood innocent belief system, of a world of goodness, sensibility, and wonder…

Little robot David should learn how and when to inject a healthy dose of emotional adrenaline to keep us sane, and ready to face the real world with more courage, more determination to disseminate what is good in us, the compassion needed to sustain and maintain our hope in a better future…

Note: This post was inspired by an article in the monthly Lebanese magazine Sante/Beaute #21. The article was not signed, but the source may be www.shaomiblog.net

Can we Not lose control over Artificial Intelligence?

Scared of super-intelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way.

We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

Sam Harris. Neuroscientist, philosopher.

I’m going to talk about a failure of intuition that many of us suffer from. It’s really a failure to detect a certain kind of danger.

I’m going to describe a scenario that I think is both terrifying and likely to occur, and that’s not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I’m talking about is kind of cool.

I’m going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it’s very difficult to see how they won’t destroy us or inspire us to destroy ourselves.

And yet if you’re anything like me, you’ll find that it’s fun to think about these things. That response is part of the problem. OK?

That response should worry you. And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn’t think, “Interesting. I like this TED Talk.”

Famine isn’t fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead.

I am unable to marshal this response, and I’m giving this talk.

It’s as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason.

Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to.

What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States?

The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation.

Almost by definition, this is the worst thing that’s ever happened in human history.

The only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves.

And we risk what the mathematician IJ Good called an “intelligence explosion,” that the process could get away from us.

This is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn’t the most likely scenario.

It’s not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

Just think about how we relate to ants. We don’t hate them. We don’t go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk.

But whenever their presence seriously conflicts with one of our goals, let’s say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they’re conscious or not, could treat us with similar disregard.

I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

The first assumption is that intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already.

And we know that mere matter can give rise to what is called “general intelligence,” an ability to think flexibly across multiple domains, because our brains have managed it. Right?

I mean, there’s just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, we will eventually build general intelligence into our machines.

It’s crucial to realize that the rate of progress doesn’t matter, because any progress is enough to get us into the end zone. We don’t need Moore’s law to continue. We don’t need exponential progress. We just need to keep going.

The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence — I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value.

It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer’s and cancer.

We want to understand economic systems. We want to improve our climate science.

So we will do this, if we can. The train is already out of the station, and there’s no brake to pull.

Finally, we don’t stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

Just consider the smartest person who has ever lived. On almost everyone’s shortlist here is John von Neumann.

I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented. If only half the stories about him are half true, there’s no question he’s one of the smartest people who has ever lived.

So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken.

There’s no reason for me to make this talk more depressing than it needs to be.

It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can’t imagine, and exceed us in ways that we can’t imagine.

And it’s important to recognize that this is true by virtue of speed alone. Right?

So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT.

Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it.

You set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
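(The arithmetic holds up: at a million-fold speedup, one week of running time is a million weeks of equivalent human work, and 1,000,000 weeks ÷ 52 ≈ 19,200 years, roughly the 20,000 years cited.)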

The other thing that’s worrying, frankly, is this: imagine the best-case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around.

It’s as though we’ve been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we’re talking about the end of human drudgery. We’re also talking about the end of most intellectual work.

What would apes like ourselves do in this circumstance? Well, we’d be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.

That might sound pretty good, but ask yourself: what would happen under our current economic and political order?

It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power.

This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.

One of the most frightening things, in my view, at this moment, is the kind of thing that AI researchers say when they want to be reassuring. And the most common reason we’re told not to worry is time.

This is all a long way off, don’t you know. This is probably 50 or 100 years away. One researcher has said, “Worrying about AI safety is like worrying about overpopulation on Mars.” This is the Silicon Valley version of “don’t worry your pretty little head about it.”

No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence.

And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.

If you haven’t noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we’ve had the iPhone. This is how long “The Simpsons” has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face.

Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming.

The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: “People of Earth, we will arrive on your planet in 50 years. Get ready.” Would we just be counting down the months until the mothership lands? We would feel a little more urgency than we do.

Another reason we’re told not to worry is that these machines can’t help but share our values because they will be literally extensions of ourselves.

They’ll be grafted onto our brains, and we’ll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains.

this may in fact be the safest and only prudent path forward, but usually one’s safety concerns about a technology have to be pretty much worked out before you stick it inside your head.

The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it.

And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don’t destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.

I don’t have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence.

Not to build it, because I think we’ll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you’re talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is the basis of intelligence, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of a God. Now would be a good time to make sure it’s a god we can live with.

A system that can read your hidden excitement, happiness, anger, or sadness. With or without your cooperation?

It’s called “EQ-Radio,” and it’s the creation of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

September 23, 2016 by Robby Berman

They claim it’s accurate 87% of the time. It reads your feelings by bouncing ordinary WiFi signals off of you that can track your heart rate. There are no on-skin sensors involved with EQ-Radio.

How EQ-Radio Works

WiFi is a two-way form of communication: Your router carries internet data to your laptop, which then transmits data back to the router en route to the internet.

An EQ-Radio measures the speed at which data completes a round trip to its target — for example, you — and analyzes fluctuations in that speed to measure your heart rate. It’s your heart rate that gives away your emotional state. (Is there Not a wide array of emotions?)

The correlation of heartbeat to emotion in each person is unique to some extent, but MIT says they can accurately assess the emotional state even of people they’ve never before studied 70% of the time.

Mingmin Zhao, on the MIT team, told MIT News, “Just by knowing how people breathe and how their hearts beat in different emotional states, we can look at a random person’s heartbeat and reliably detect their emotions.”

One of the challenges the team faced was filtering out extraneous “noise” such as breath sounds to clearly detect the heart rate.

Bear in mind that it’s not audio that EQ-Radio has to analyze, but instead data that reflects the speed of the WiFi bounceback.

So “noise” refers to irrelevant data, not the actual sound of, say, your breath. That they’re able to measure heart rate with about a 0.3% margin of error is remarkable. That’s as good as an ECG monitor.
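CSAIL’s actual signal processing is far more involved, but the final step can be sketched: estimate heart rate by counting peaks in a cleaned-up periodic signal. (A toy only: the synthetic waveform below stands in for real reflection data, which we don’t have.)

```python
# Toy heart-rate estimate: count peaks in a synthetic 72-bpm signal.
import numpy as np
from scipy.signal import find_peaks

np.random.seed(0)
fs = 100                      # samples per second
t = np.arange(0, 30, 1 / fs)  # 30 seconds of stand-in "reflection" data
signal = np.sin(2 * np.pi * (72 / 60) * t) + 0.1 * np.random.randn(t.size)

# Require peaks to be tall and at least 0.4 s apart (max ~150 bpm).
peaks, _ = find_peaks(signal, height=0.5, distance=fs * 0.4)
print(f"estimated heart rate: {len(peaks) / 30 * 60:.0f} bpm")  # ~72
```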

The EQ-Radio software is based on previous work the lab has done using WiFi to detect human movement. The goal of the earlier work was to use WiFi in smart homes that could do things like control heat and lighting based on your location, and detect if an elderly person has fallen. (It’s also seen as having potential use for animation motion-capture in films.)

The junction of that earlier project and EQ-Radio was the exploration of more-accurate health-tracking wearable devices.

The Possible Uses of EQ-Radio

There are a number of obvious applications for EQ-Radio, such as:

  • Far more accurate test screenings and focus groups for ad agencies and film studios
  • Smart homes that can adjust lighting and environmental controls to match, or help you out of, your mood
  • Smart hotels that could continually customize a guest’s environment according to mood
  • Non-invasive healthcare and psychiatric monitoring, with office or home-installed systems
  • Directed advertising based on an assessment of a target’s mood
  • Interrogations

Hopefully, EQ-Radio won’t turn up in personal devices that let you “read” the emotions of people around you.

(And after all these emotional diagnoses? How is our health to be treated? By EQ-Radio also? Can people receive any feedback? Is this a one-way technique to accumulate metadata for ad agencies?)

EQ-Radio and Privacy

When EQ-Radio moves beyond its current laboratory setting, there’ll be obvious privacy concerns: Do you have the right to keep your feelings to yourself? (Are you kidding?)

If you’re in a public place — say, a hospital or theater — where an EQ-Radio system is in operation, will a signed release from you be required before your emotional state can be tracked?

Would you have to give a police department permission to monitor your feelings during an investigation, or could you refuse as you can a polygraph test?

Could an authoritarian government “read” its citizenry at will?

Will this become a standard tool for anti-terrorism authorities?

It may be that the right to private emotion is the next personal freedom. It remains to be seen whether we’ll be asked to surrender it.

(Again. Welcome to this absurd future).

Why the death of Moore’s Law could give birth to more human-like machines

Futurists and AI experts reveal why the demise of Moore’s Law will help us jump from AI to natural machine intelligence


For decades, chip makers have excelled in meeting a recurring set of deadlines to double their chip performance.

These deadlines were set by Moore’s Law and have been in place since the 1960s. But as our demand for smaller, faster and more efficient devices soars, many are predicting the Moore’s Law era might finally be over.

Only last month, a report from the International Technology Roadmap for Semiconductors (ITRS) – which includes chip giants Intel and Samsung – claimed transistors will get to a point where they can shrink no further by as soon as 2021.

The companies argue that, by that time, it will be no longer economically viable to make them smaller.

But while the conventional thinking is that the law’s demise would be bad news, it could have its benefits – namely fuelling the rise of AI.

Why the death of Moore’s Law could give birth to more human-like machines… Or not.

“If you care about the development of artificial intelligence, you should pray for that prediction to be true,” John Smart, a prominent futurist and writer told WIRED.

“Moore’s law ending allows us to jump from artificial machine intelligence – a top down, human engineered approach; to natural machine intelligence – one that is bottom up and self-improving.”

As AIs no longer emerge from explicitly programmed designs, engineers are focused on building self-evolving systems like deep learning, an AI technique modelled from biological systems.

In his Medium series, Smart is focusing attention on a new suite of deep learning-powered chat-bots, similar to Microsoft’s Cortana and Apple’s Siri, that are emerging as one of the most notable IT developments in the coming years.

“Conversational smart agents, particularly highly personalised ones that I call personal sims, are the most important software advance I can foresee in the coming generation as they promise to be so individually useful to everyone who employs them,” Smart continued.

These may soon be integrating with our daily lives as they come to know our preferences and take over routine tasks. Many in Silicon Valley already use these AIs to manage increasingly busy schedules, while others claim they may soon bring an end to apps.

To get there, chatbots will need to become intelligent. As a result companies are relying on deep learning neural nets; algorithms made to approximate the way neurons in the brain process information.

The challenge for AI engineers, however, is that brain-inspired deep learning requires processing power far beyond today’s consumer chip capabilities.

In 2012, when a Google neural net famously taught itself to recognise cats, the system required the computer muscle of 16,000 processors spread across 1,000 different machines.

More recently, AI researchers have turned to the processing capabilities of GPUs, like the ones in the graphics cards used for video games.

The benefit of GPUs is they allow for more parallel computing, a type of processing in which computational workload is divided among several processors at the same time.

As data processing tasks are chunked into smaller pieces, computers can divide the workload across their processing units. This divide and conquer approach is critical to the development of AI.
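A toy demonstration of that divide-and-conquer pattern in plain Python; the “heavy task” below is a stand-in, not an actual deep-learning workload:

```python
# Divide a workload into chunks and process them in parallel with a pool.
from multiprocessing import Pool

def heavy_task(chunk):
    # Stand-in for one shard of a larger computation.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::8] for i in range(8)]      # divide into 8 slices
    with Pool(processes=8) as pool:
        partials = pool.map(heavy_task, chunks)  # conquer in parallel
    print(sum(partials))                         # combine the results
```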

“High-end computing will be all about how much parallelism can be stuffed on a chip,” said Naveen Rao, chief executive of Nervana Systems – a company working to build chips tailor-made for deep learning.

Rao believes the chip industry has relied on a strict definition of chip innovation. “We stopped doing chip architecture exploration and essentially moved everything into software innovation. This can be limiting as hardware can do things that software can’t.”

By piggybacking off the advances of video game technology, AI researchers found a short-term solution, but market conditions are now suitable for companies like Nervana Systems to innovate. “Things like our chip are a product of Moore’s law ending,” says Rao.

Rao’s chip design packages the parallel computing of graphic cards into their hardware, without unnecessary features like caches – a component used to store data on GPUs.

Instead, the chip moves data around very quickly, while leveraging more available computing power. According to Rao, the result will be a chip able to run deep learning algorithms “with much less power per operation and have many more operational elements stuffed on the chip.”

“Individual exponentials [like Moore’s Law] always end,” Smart added. “But when a critical exponential ends, it creates opportunities for new exponentials to emerge.”

For Smart, Moore’s Law ending is old news.

Back in 2005, engineers reached the limits of Dennard scaling, meaning that while chips were continuing to shrink, they started leaking current and got too hot, forcing chip makers to build multi-core CPUs rather than continuing to shrink their size. This heat build-up issue is exactly the sort of challenge that chip designs like Nervana Systems’ promise to address.

“As exponential miniaturisation ended in 2005, we created the opportunity for exponential parallelisation.” For Smart, that’s the new exponential trend to watch out for. We just don’t have a law named for it yet.

Why sarcasm is such a problem in artificial intelligence

“Any computer which could reliably perform this kind of filtering could be argued to have developed a sense of humor.”

Thu 11 Feb 2016

Automatic Sarcasm Detection: A Survey [PDF] outlines ten years of research efforts from groups interested in detecting sarcasm in online sources.

The problem is not an abstract one, nor does it centre around the need for computers to entertain or amuse humans, but rather the need to recognise that sarcasm in online comments, tweets and other internet material should not be interpreted as sincere opinion.

The need applies both in order for AIs to accurately assess archive material or interpret existing datasets, and in the field of sentiment analysis, where a neural network or other model of AI seeks to interpret data based on publicly posted web material.

Attempts have been made to ring-fence sarcastic data by the use of hash-tags such as #not on Twitter, or by noting the authors who have posted material identified as sarcastic, in order to apply appropriate filters to their future work.
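A minimal sketch of that ring-fencing idea, with invented tweets and an assumed tag list (production systems are far more careful than a keyword match):

```python
# Flag tweets carrying self-declared sarcasm markers before sentiment analysis.
SARCASM_TAGS = {"#not", "#sarcasm", "#irony"}

def is_flagged_sarcastic(tweet: str) -> bool:
    return any(tag in tweet.lower().split() for tag in SARCASM_TAGS)

tweets = [
    "I love waiting an hour for customer support #not",
    "Great product, arrived on time and works perfectly",
]
sincere = [t for t in tweets if not is_flagged_sarcastic(t)]
print(sincere)  # only the second tweet survives the filter
```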

Some research has struggled to quantify sarcasm, since it may not be a discrete property in itself – i.e. indicative of a reverse position to the one that it seems to put forward – but rather part of a wider gamut of data-distorting humour, and may need to be identified as a subset of that in order to be found at all.

Most of the dozens of research projects which have addressed the problem of sarcasm as a hindrance to machine comprehension have studied the problem as it relates to the English and Chinese languages, though some work has also been done in identifying sarcasm in Italian-language tweets, whilst another project has explored Dutch sarcasm.

The new report details the ways that academia has approached the sarcasm problem over the last decade, but concludes that the solution to the problem is not necessarily one of pattern recognition, but rather a more sophisticated matrix that has some ability to understand context.

Any computer which could reliably perform this kind of filtering could be argued to have developed a sense of humor.

Note: For an AI machine to learn, it has to be confronted with genuinely sarcastic people. And this species is a rarity.

