Adonis Diaries

Posts Tagged ‘Artificial Intelligence

What’s that concept of Human factors in Design?

Posted on September 20, 2008 (written from 2003-2006)

 What is this Human Factors profession?

 Summary of articles

1.  “What is your job?”                                                                                                  

2.  “Sorry, you said Human Factors in Engineering?”                                                

3.  “So, you want systems to fit people?”                                                                   

4.  “The rights of the beast of burden; like a donkey?”                                              

5.  “Who could afford to hire Human Factors engineers?”                             

6.  “In peace time, why and how often are Human Factors hired?”

7.  “What message should the Human Factors profession transmit?”                          

8.  “What do you design again?”                                                                                         

9.  “Besides displays and controls, what other interfaces do you design?”                 

10.  “How Human Factors gets involved in the Safety and Health of end users?”    

11.  “What kind of methods will I have to manipulate and start worrying about?” 

12.  “What are the error taxonomies in Human Factors?”                                 

13.  “What are the task taxonomies and how basic are they in HF?”                           

14.  “How useful are taxonomies of methods?”                                                               

15.  “Are occupational safety and health standards and regulations of any concern for the HF professionals?”                                                                                                       

16.  “Are there any major cross over between HF and safety engineering?”            

17.  “Tell us about a few of your teaching methods and anecdotes”                            

18.  “What this general course in Human Factors covers?”                                         

19.  “Could one general course in Human Factors make a dent in a career behavior?”                                                                                                                     

20.  “How would you like to fit Human Factors in the engineering curriculum?”  

21.  “How to restructure engineering curriculum to respond to end users demands?”

22.  “How can a class assimilate a course material of 1000 pages?”                             

23.  “What undergraduate students care about university courses?”                            

24.  “Students’ feedback on my teaching method”                                              

25.  “My pet project for undergraduate engineering curriculum”                      

26.  “Guess what my job is”                                                                                              

27.  “Do you know what your folk’s jobs are?”                                                              

28.  “How do you perceive the inspection job to mean?”                                               

29.  “How objective and scientific is a research?” 

30.  “How objective and scientific are experiments?”                                                   

31.   “A seminar on a multidisciplinary view of design”                                                  

32.  “Consumer Product Liability Engineering”                                                                 

33.  “How could you tell long and good stories from HF graphs?”                               

34.  “What message has the Human Factors profession been sending?”       

35. “Who should be in charge of workspace design?”                                                  

36.  “Efficiency of the human body structure and mind”                                                

37.  “Psycho-physical method”                                                                                       

38.  “Human factors performance criteria”                                                      

39.  “Fundamentals of controlled experimentation methods”                                      

40.  “Experimentation: natural sciences versus people’s behavior sciences”            

41.  “What do Human Factors measure?”                                                                   

42.  “New semester, new approach to teaching the course”                                       

43.  “Controlled experimentation versus Evaluation and Testing methods”               

44.  “Phases in the process of system/mission analyses”                                             

45.   “Main errors and mistakes in controlled experimentations”                   

46.  “Human Factors versus Industrial, Computer, and traditional engineering” 

47.  “How Human Factors are considered at the NASA jet propulsion laboratory”

48.  “Efficiency of the human cognitive power or mind”                                               

49.  “Human Factors versus Artificial Intelligence”                                                     

50.  Computational Rationality in Artificial Intelligence                                               

51.  “Basic Engineering and Physics Problems Transformed Mathematically”     

52.  Mathematics: a unifying abstraction for Engineering and Physics                        

53. How to optimize human potentials in businesses for profit                                      

Is sarcasm such a problem in artificial intelligence research?

Posted on March 1, 2016

Automatic Sarcasm Detection: A Survey 

[PDF] outlines ten years of research efforts from groups interested in detecting sarcasm in online sources.

If a text is devoid of detailed context for the story, there is no way to detect a sense of humor. And the major problem is that most stories or documentary pieces do not bother to provide substantive context that is free of bias.

“Any computer which could reliably perform this kind of filtering could be argued to have developed a sense of humor.”

Martin Anderson Thu 11 Feb 2016

The problem is not an abstract one, nor does it centre on the need for computers to entertain or amuse humans, but rather on the need to recognise that sarcasm in online comments, tweets and other internet material should not be interpreted as sincere opinion.

Why sarcasm baffles AIs, by The Stack (thestack.com)


The need applies both in order for AIs to accurately assess archive material or interpret existing datasets, and in the field of sentiment analysis, where a neural network or other model of AI seeks to interpret data based on publicly posted web material.

Attempts have been made to ring-fence sarcastic data by the use of hashtags such as #not on Twitter, or by noting the authors who have posted material identified as sarcastic, in order to apply appropriate filters to their future work.
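The hashtag ring-fencing described above can be sketched in a few lines. This is a toy illustration, not a real detector; the tweets, tags, and author list are hypothetical.

```python
# A minimal sketch of hashtag-based "ring-fencing": flag posts that carry
# a sarcasm marker such as #not, or that come from authors already
# identified as sarcastic. All data here are made up for illustration.
SARCASM_TAGS = {"#not", "#sarcasm"}
KNOWN_SARCASTIC_AUTHORS = {"dry_wit_dan"}

def flag_sarcastic(author, text):
    words = text.lower().split()
    if any(tag in words for tag in SARCASM_TAGS):
        return True
    return author in KNOWN_SARCASTIC_AUTHORS

print(flag_sarcastic("alice", "Great, another Monday #not"))  # True
print(flag_sarcastic("bob", "Lovely weather today"))          # False
```

Of course, this is exactly the kind of surface filter the survey finds insufficient: it catches only self-labeled sarcasm, not context-dependent irony.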

Some research has struggled to quantify sarcasm, since it may not be a discrete property in itself – i.e. indicative of a reverse position to the one that it seems to put forward – but rather part of a wider gamut of data-distorting humour, and may need to be identified as a subset of that in order to be found at all.

Most of the dozens of research projects which have addressed the problem of sarcasm as a hindrance to machine comprehension have studied the problem as it relates to the English and Chinese languages, though some work has also been done in identifying sarcasm in Italian-language tweets, whilst another project has explored Dutch sarcasm.

The new report details the ways that academia has approached the sarcasm problem over the last decade, but concludes that the solution to the problem is Not necessarily one of pattern recognition, but rather a more sophisticated matrix that has some ability to understand context.


Note: For an AI machine to learn, it has to be confronted with genuinely sarcastic people. And this species is a rarity.

An exercise: taxonomy of methods

Posted on: June 10, 2009

Article #14 in Human Factors

I am going to let you have a hand at classifying methods by providing a list of various methods that could be used in Industrial engineering, Human Factors, Ergonomics, and Industrial Psychology.

This first list of methods is organized in the sequence used to analyze part of a system or a mission;

The second list is not randomized so much as thrown in without much order; otherwise it would not be much of an exercise.

First, let us agree that a method is a procedure, or a step-by-step process, that our forerunners of genius and scholarship have tested, found good, agreed upon by consensus, and offered for you to use for the benefit of progress and science.

Many of you will still try hard to find shortcuts to anything, including methods, on the petty argument that the best criterion for discriminating among clever people is who wastes time on methods and who is a nerd.

Actually, the main reason I don’t try to teach many new methods in this course (Human Factors in Engineering) is that students might run smack into real occupational stress, to which they are not immune, especially since methods in human factors are complex and time consuming.

Here is this famous list of a few methods, and you are to decide which ones are still in the conceptual phase and which have been “operationalized”.

The first list contains the following methods:

Operational analysis, activity analysis, critical incidents, function flow, decision/action, action/information analyses, functional allocation, task, fault tree, failure modes and effects analyses, timeline, link analyses, simulation, controlled experimentation,  operational sequence analysis, and workload assessment.

The second list is constituted of methods that human factors are trained to utilize if need be such as:

Verbal protocol, neural network, utility theory, preference judgments, psycho-physical methods, operational research, prototyping, information theory, cost/benefit methods, various statistical modeling packages, and expert systems.

Just wait, let me resume.

There are those that are intrinsic to artificial intelligence methodology such as:

Fuzzy logic, robotics, discrimination nets, pattern matching, knowledge representation, frames, schemata, semantic network, relational databases, searching methods, zero-sum games theory, logical reasoning methods, probabilistic reasoning, learning methods, natural language understanding, image formation and acquisition, connectedness, cellular logic, problem solving techniques, means-end analysis, geometric reasoning system, algebraic reasoning system.

If your education is multidisciplinary you may catalog the above methods according to specialty disciplines such as:

Artificial intelligence, robotics, econometrics, marketing, human factors, industrial engineering, other engineering majors, psychology or mathematics.

The most logical grouping is along the purpose, input, process/procedure, and output/product of the method. Otherwise, it would be impossible to define and understand any method.

Methods could be used to analyze systems, provide heuristic data about human performance, make predictions, generate subjective data, discover the cause and effects of the main factors, or evaluate the human-machine performance of products or systems.

The inputs could be qualitative or quantitative such as declarative data, categorical, or numerical and generated from structured observations, records, interviews, questionnaires, computer generated or outputs from prior methods.

The outputs could be point data, behavioral trends, graphical in nature, context specific, generic, or reduction in alternatives.

The process could be a creative graphical or pictorial model; a logical hierarchy or network alternative; or operational, empirical, informal, or systematic.
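The purpose/input/process/output grouping above can be made concrete with a tiny catalog. This is an illustrative sketch only: the attribute values assigned to each method are rough assumptions for demonstration, not authoritative classifications.

```python
# Catalog a few of the listed methods along the purpose/input axes
# described above. The attribute values are illustrative assumptions.
methods = [
    {"name": "fault tree", "purpose": "analyze systems", "input": "qualitative"},
    {"name": "controlled experimentation", "purpose": "discover cause and effects", "input": "quantitative"},
    {"name": "verbal protocol", "purpose": "generate subjective data", "input": "qualitative"},
    {"name": "workload assessment", "purpose": "evaluate performance", "input": "quantitative"},
]

def group_by(records, key):
    groups = {}
    for rec in records:
        groups.setdefault(rec[key], []).append(rec["name"])
    return groups

# Group the methods by the kind of input they consume
print(group_by(methods, "input"))
```

The same `group_by` call with `"purpose"` produces a different taxonomy of the same methods, which is the whole point of the exercise: one list, many defensible groupings.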

You may also group these methods according to their mathematical branches such as algebraic, probabilistic, or geometric.

You may collect them as to their deterministic, statistical sampling, or probabilistic characters.

You may differentiate the methods as belonging to categorical, ordinal, discrete or continuous measurements.

You may wish to investigate the methods as parametric or non-parametric, for distribution-free or normally distributed populations.

You may separate them on their representation forms such as verbal, graphical, pictorial, or in table.

You may discriminate them on heuristic, observational, or experimental scientific values.

You may bundle these methods on qualitative or quantitative values.

You may as well separate them on their historical values or modern techniques based on newer technologies.

You may sort them by their state of the art: ancient methods whose validity new information and new paradigms have refuted, or recently developed ones.

You may define the methods as those digitally or analytically amenable for solving problems.

You may choose to draw several lists of those methods that are economically sound, esoteric, or just plainly fuzzy sounding.

You may opt to differentiate these methods by whether they require a level of mathematical reasoning beyond your capability, or can be comprehended through persistent effort.

You could as well sort them according to which ones fit nicely into the courses that you have already taken, but failed to recollect that they were indeed methods worth acquiring for your career.

You may use any of these taxonomies to answer an optional exam question with no guarantees that you might get a substantial grade.

It would be interesting to collect statistics on how often these methods are being used, by whom, for what rationale, in which lines of business, and at which universities.

It would be interesting to translate these methods into Arabic, Chinese, Japanese, Hindi, or Russian.

Which machine learning algorithm should I use? How many and which one is best?

Note: in the early 1990’s, I took graduate classes in Artificial Intelligence (AI) (the If…Then series of questions and answers of experts in their fields of work) and in neural networks developed by psychologists.

The concepts are the same, though upgraded with new algorithms and automation.

I recall a book with a Table (like the Mendeleev table in chemistry) that contained the terms, mental processes, mathematical concepts behind the ideas that formed the AI trend…

There are several lists of methods, depending on the field of study you are more concerned with.


Hui Li, on Subconscious Musings (Advanced Analytics | Machine Learning), posted on April 12, 2017:

This resource is designed primarily for beginner to intermediate data scientists or analysts who are interested in identifying and applying machine learning algorithms to address the problems of their interest.

A typical question asked by a beginner, when facing a wide variety of machine learning algorithms, is: “Which algorithm should I use?”

The answer to the question varies depending on many factors, including:

  • The size, quality, and nature of data.
  • The available computational time.
  • The urgency of the task.
  • What you want to do with the data.

Even an experienced data scientist cannot tell which algorithm will perform the best before trying different algorithms.

We are not advocating a one-and-done approach, but we do hope to provide some guidance on which algorithms to try first, depending on some clear factors.

The machine learning algorithm cheat sheet

Flow chart shows which algorithms to use when

The machine learning algorithm cheat sheet helps you to choose from a variety of machine learning algorithms to find the appropriate algorithm for your specific problems.

This article walks you through the process of how to use the sheet.

Since the cheat sheet is designed for beginner data scientists and analysts, we will make some simplified assumptions when talking about the algorithms.

The algorithms recommended here result from compiled feedback and tips from several data scientists and machine learning experts and developers.

There are several issues on which we have not reached agreement; for these, we try to highlight the commonality and reconcile the differences.

Additional algorithms will be added later as our library grows to encompass a more complete set of available methods.

How to use the cheat sheet

Read the path and algorithm labels on the chart as “If <path label> then use <algorithm>.” For example:

  • If you want to perform dimension reduction then use principal component analysis.
  • If you need a numeric prediction quickly, use decision trees or logistic regression.
  • If you need a hierarchical result, use hierarchical clustering.
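The “If &lt;path label&gt; then use &lt;algorithm&gt;” reading of the chart can be sketched as an ordered rule list. This is a toy encoding of only the three examples above, not the full SAS chart, and the fallback message is my own wording.

```python
# A minimal sketch of reading the cheat sheet as ordered if/then rules.
RULES = [
    ("dimension reduction", "principal component analysis"),
    ("numeric prediction", "decision trees or logistic regression"),
    ("hierarchical result", "hierarchical clustering"),
]

def recommend(goal):
    for path_label, algorithm in RULES:
        if path_label in goal.lower():
            return algorithm
    return "no exact match - treat the paths as rules of thumb"

print(recommend("I want dimension reduction"))  # principal component analysis
```

The fallback branch mirrors the caveat below: sometimes no path is a perfect match, and the chart only gives rule-of-thumb starting points.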

Sometimes more than one branch will apply, and other times none of them will be a perfect match.

It’s important to remember these paths are intended to be rule-of-thumb recommendations, so some of the recommendations are not exact.

Several data scientists I talked with said that the only sure way to find the very best algorithm is to try all of them.

(Is that a process to find an algorithm that matches your world view on an issue? Or an answer that satisfies your boss?)

Types of machine learning algorithms

This section provides an overview of the most popular types of machine learning. If you’re familiar with these categories and want to move on to discussing specific algorithms, you can skip this section and go to “When to use specific algorithms” below.

Supervised learning

Supervised learning algorithms make predictions based on a set of examples.

For example, historical sales can be used to estimate future prices. With supervised learning, you have an input variable that consists of labeled training data and a desired output variable.

You use an algorithm to analyze the training data to learn the function that maps the input to the output. This inferred function maps new, unknown examples by generalizing from the training data to anticipate results in unseen situations.

  • Classification: When the data are being used to predict a categorical variable, supervised learning is also called classification. This is the case when assigning a label or indicator, either dog or cat, to an image. When there are only two labels, this is called binary classification. When there are more than two categories, the problems are called multi-class classification.
  • Regression: When predicting continuous values, the problems become a regression problem.
  • Forecasting: This is the process of making predictions about the future based on past and present data. It is most commonly used to analyze trends. A common example is estimating next year’s sales based on the sales of the current and previous years.
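The forecasting case can be illustrated with a least-squares line fit to past yearly sales. The sales figures below are made-up toy numbers chosen to be perfectly linear, so the extrapolation is exact.

```python
# A toy forecasting example: fit a straight line to past yearly sales
# (hypothetical numbers) and extrapolate to the next year.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx  # slope, intercept

years = [2014, 2015, 2016, 2017]
sales = [100.0, 110.0, 120.0, 130.0]  # perfectly linear toy data
slope, intercept = fit_line(years, sales)
print(slope * 2018 + intercept)  # 140.0
```

Real sales data are noisy and seasonal, so practical forecasting uses richer models, but the pattern (learn from past, predict forward) is the same.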

Semi-supervised learning

The challenge with supervised learning is that labeling data can be expensive and time consuming. If labels are limited, you can use unlabeled examples to enhance supervised learning. Because the machine is not fully supervised in this case, we say the machine is semi-supervised. With semi-supervised learning, you use unlabeled examples with a small amount of labeled data to improve the learning accuracy.
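One common semi-supervised recipe is self-training: fit a model on the few labeled points, pseudo-label the unlabeled ones, then refit. The sketch below uses a nearest-centroid rule on toy 1-D data; everything here is an illustrative assumption, not a production method.

```python
# Self-training sketch: nearest-centroid classifier on toy 1-D data.
labeled = [(1.0, "a"), (2.0, "a"), (8.0, "b"), (9.0, "b")]
unlabeled = [1.5, 2.5, 7.5, 8.5]

def centroids(pairs):
    sums, counts = {}, {}
    for x, lab in pairs:
        sums[lab] = sums.get(lab, 0.0) + x
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: sums[lab] / counts[lab] for lab in sums}

cents = centroids(labeled)
# Pseudo-label each unlabeled point with its nearest centroid's class
pseudo = [(x, min(cents, key=lambda lab: abs(x - cents[lab]))) for x in unlabeled]
cents = centroids(labeled + pseudo)  # refit with the pseudo-labeled points
print(cents)
```

The refit centroids now reflect all eight points instead of four, which is exactly the accuracy gain semi-supervised learning aims for when labels are scarce.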

Unsupervised learning

When performing unsupervised learning, the machine is presented with totally unlabeled data. It is asked to discover the intrinsic patterns that underlie the data, such as a clustering structure, a low-dimensional manifold, or a sparse tree or graph.

  • Clustering: Grouping a set of data examples so that examples in one group (or one cluster) are more similar (according to some criteria) than those in other groups. This is often used to segment the whole dataset into several groups. Analysis can be performed in each group to help users to find intrinsic patterns.
  • Dimension reduction: Reducing the number of variables under consideration. In many applications, the raw data have very high dimensional features and some features are redundant or irrelevant to the task. Reducing the dimensionality helps to find the true, latent relationship.

Reinforcement learning

Reinforcement learning analyzes and optimizes the behavior of an agent based on feedback from the environment. Machines try different scenarios to discover which actions yield the greatest reward, rather than being told which actions to take. Trial-and-error search and delayed reward distinguish reinforcement learning from other techniques.
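The trial-and-error idea can be shown in the simplest setting, a two-armed bandit: an epsilon-greedy agent tries actions with made-up deterministic rewards and learns which one pays more. This is a sketch only; real reinforcement learning adds states and delayed rewards.

```python
import random

# Epsilon-greedy bandit sketch with hypothetical deterministic rewards.
random.seed(0)
rewards = {"left": 0.2, "right": 1.0}   # environment feedback
value = {"left": 0.0, "right": 0.0}     # the agent's running estimates
counts = {"left": 0, "right": 0}

for step in range(200):
    if random.random() < 0.1:                       # explore
        action = random.choice(["left", "right"])
    else:                                           # exploit best estimate
        action = max(value, key=value.get)
    counts[action] += 1
    # incremental average update of the action-value estimate
    value[action] += (rewards[action] - value[action]) / counts[action]

print(max(value, key=value.get))
```

Without the 10% exploration the agent would lock onto whichever arm it tried first; exploration is what lets trial and error discover the better action.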

Considerations when choosing an algorithm

When choosing an algorithm, always take these aspects into account: accuracy, training time and ease of use. Many users put the accuracy first, while beginners tend to focus on algorithms they know best.

When presented with a dataset, the first thing to consider is how to obtain results, no matter what those results might look like. Beginners tend to choose algorithms that are easy to implement and can obtain results quickly. This works fine, as long as it is just the first step in the process. Once you obtain some results and become familiar with the data, you may spend more time using more sophisticated algorithms to strengthen your understanding of the data, hence further improving the results.

Even in this stage, the best algorithms might not be the methods that have achieved the highest reported accuracy, as an algorithm usually requires careful tuning and extensive training to obtain its best achievable performance.

When to use specific algorithms

Looking more closely at individual algorithms can help you understand what they provide and how they are used. These descriptions provide more details and give additional tips for when to use specific algorithms, in alignment with the cheat sheet.

Linear regression and Logistic regression


Linear regression is an approach for modeling the relationship between a continuous dependent variable y and one or more predictors X. The relationship between y and X can be linearly modeled as y = β^T X + ε. Given the training examples {(x_i, y_i)}, i = 1, …, N, the parameter vector β can be learnt.

If the dependent variable is not continuous but categorical, linear regression can be transformed to logistic regression using a logit link function. Logistic regression is a simple, fast yet powerful classification algorithm.

Here we discuss the binary case, where the dependent variable y takes only binary values {y_i ∈ (−1, 1)}, i = 1, …, N (which can easily be extended to multi-class classification problems).

In logistic regression we use a different hypothesis class to try to predict the probability that a given example belongs to the “1” class versus the probability that it belongs to the “−1” class. Specifically, we try to learn a function of the form p(y_i = 1 | x_i) = σ(β^T x_i) and p(y_i = −1 | x_i) = 1 − σ(β^T x_i).

Here σ(x) = 1/(1 + exp(−x)) is the sigmoid function. Given the training examples {(x_i, y_i)}, i = 1, …, N, the parameter vector β can be learnt by maximizing the log-likelihood of β given the data set.
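The fit just described can be sketched from scratch: σ(x) = 1/(1 + e^(−x)), with β found by gradient ascent on the log-likelihood. Toy 1-D data with the {−1, +1} label convention from the text; the learning rate and iteration count are arbitrary illustrative choices.

```python
import math

# From-scratch logistic regression on toy 1-D data, labels in {-1, +1}.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

data = [(-2.0, -1), (-1.0, -1), (1.0, 1), (2.0, 1)]  # (x_i, y_i)
beta = 0.0
for _ in range(500):
    # d/d(beta) of sum_i log sigma(y_i * beta * x_i)
    #   = sum_i (1 - sigma(y_i * beta * x_i)) * y_i * x_i
    grad = sum((1.0 - sigmoid(y * beta * x)) * y * x for x, y in data)
    beta += 0.1 * grad  # gradient ascent step

print(sigmoid(beta * 2.0))   # p(y=1 | x=2), close to 1
print(sigmoid(beta * -2.0))  # p(y=1 | x=-2), close to 0
```

Since this toy data set is perfectly separable, β keeps growing slowly; in practice a regularization term on β is added to keep the fit finite.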

Linear SVM and kernel SVM

A support vector machine (SVM) training algorithm finds the classifier represented by the normal vector w and bias b of the hyperplane. This hyperplane (boundary) separates different classes by as wide a margin as possible. The problem can be converted into a constrained optimization problem:

minimize_w ||w||  subject to  y_i(w^T X_i − b) ≥ 1,  i = 1, …, n.

Linear and kernel SVM charts

When the classes are not linearly separable, a kernel trick can be used to map a non-linearly separable space into a higher dimension linearly separable space.
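One widely used kernel for that trick is the RBF (Gaussian) kernel, k(x, z) = exp(−γ‖x − z‖²), which implicitly maps points into a higher-dimensional space where a linear separator may exist. A minimal sketch of the kernel function itself (γ = 0.5 is an arbitrary choice):

```python
import math

# RBF (Gaussian) kernel: k(x, z) = exp(-gamma * ||x - z||^2).
def rbf_kernel(x, z, gamma=0.5):
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)

print(rbf_kernel((1.0, 0.0), (1.0, 0.0)))  # 1.0 for identical points
print(rbf_kernel((1.0, 0.0), (0.0, 1.0)))  # smaller for distant points
```

A kernel SVM never computes the high-dimensional mapping explicitly; it only ever evaluates pairwise kernel values like these, which is what makes the trick cheap.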

When most dependent variables are numeric, logistic regression and SVM should be the first try for classification. These models are easy to implement, their parameters are easy to tune, and their performance is also pretty good. So these models are appropriate for beginners.

Trees and ensemble trees

A decision tree for prediction model.

Decision trees, random forest and gradient boosting are all algorithms based on decision trees.

There are many variants of decision trees, but they all do the same thing – subdivide the feature space into regions with mostly the same label. Decision trees are easy to understand and implement.

However, they tend to overfit data when we exhaust the branches and go very deep with the trees. Random forest and gradient boosting are two popular ways to use tree algorithms to achieve good accuracy while overcoming the overfitting problem.
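The basic move that trees repeat recursively, subdividing the feature space into regions with mostly the same label, can be shown with a single decision stump on toy 1-D data. The points and labels below are made up; the split criterion here is simple misclassification count rather than the impurity measures real trees use.

```python
# Decision-stump sketch: find the threshold that best separates labels.
points = [(1.0, "cat"), (2.0, "cat"), (3.0, "cat"), (7.0, "dog"), (8.0, "dog")]

def best_split(data):
    best = (None, len(data))  # (threshold, errors)
    xs = sorted(x for x, _ in data)
    for left, right in zip(xs, xs[1:]):
        t = (left + right) / 2.0  # candidate threshold between neighbors
        errors = sum(1 for x, lab in data
                     if (lab == "dog") != (x > t))  # predict "dog" iff x > t
        if errors < best[1]:
            best = (t, errors)
    return best

threshold, errors = best_split(points)
print(threshold, errors)  # 5.0 0
```

A full decision tree applies this split search recursively inside each region; a random forest trains many such trees on resampled data and averages their votes, which is what curbs the overfitting.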

Neural networks and deep learning

Neural networks flourished in the mid-1980s due to their parallel and distributed processing ability.

Research in this field was impeded by the ineffectiveness of the back-propagation training algorithm that is widely used to optimize the parameters of neural networks. Support vector machines (SVM) and other simpler models, which can be easily trained by solving convex optimization problems, gradually replaced neural networks in machine learning.

In recent years, new and improved training techniques such as unsupervised pre-training and layer-wise greedy training have led to a resurgence of interest in neural networks.

Increasingly powerful computational capabilities, such as graphical processing unit (GPU) and massively parallel processing (MPP), have also spurred the revived adoption of neural networks. The resurgent research in neural networks has given rise to the invention of models with thousands of layers.

A neural network

Shallow neural networks have evolved into deep learning neural networks.

Deep neural networks have been very successful for supervised learning.  When used for speech and image recognition, deep learning performs as well as, or even better than, humans.

Applied to unsupervised learning tasks, such as feature extraction, deep learning also extracts features from raw images or speech with much less human intervention.

A neural network consists of three parts: input layer, hidden layers and output layer. 

The training samples define the input and output layers. When the output layer is a categorical variable, then the neural network is a way to address classification problems. When the output layer is a continuous variable, then the network can be used to do regression.

When the output layer is the same as the input layer, the network can be used to extract intrinsic features.

The number of hidden layers defines the model complexity and modeling capacity.
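The three parts just described can be sketched as a bare-bones forward pass: an input layer of 2 values, one hidden layer of 2 sigmoid units, and a single sigmoid output suited to binary classification. The weights below are arbitrary illustrative numbers, not trained values.

```python
import math

# Minimal forward pass: input layer -> one hidden layer -> output layer.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_w, out_w):
    # hidden layer: one sigmoid unit per weight row
    hidden = [sigmoid(sum(w * i for w, i in zip(ws, inputs))) for ws in hidden_w]
    # output layer: single sigmoid unit over the hidden activations
    return sigmoid(sum(w * h for w, h in zip(out_w, hidden)))

hidden_weights = [[0.5, -0.4], [0.3, 0.8]]  # one row per hidden unit
output_weights = [1.2, -0.7]
print(forward([1.0, 2.0], hidden_weights, output_weights))
```

Adding more rows to `hidden_weights`, or more hidden layers, increases the model complexity and modeling capacity exactly as the paragraph above describes; training then means adjusting these weights by back-propagation.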

Deep Learning: What it is and why it matters

k-means/k-modes, GMM (Gaussian mixture model) clustering


k-means/k-modes and GMM clustering aim to partition n observations into k clusters. K-means defines a hard assignment: each sample is associated with one and only one cluster. GMM, however, defines a soft assignment: each sample has a probability of being associated with each cluster. Both algorithms are simple and fast enough for clustering when the number of clusters k is given.
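The hard-assignment step is easy to see in a tiny 1-D k-means run with k = 2. The samples and initial centers below are toy values; a GMM would instead give each sample a probability under each cluster rather than a single membership.

```python
# Tiny 1-D k-means sketch (k = 2): hard assignment, then center update.
samples = [1.0, 1.5, 2.0, 8.0, 8.5, 9.0]
centers = [1.0, 9.0]  # initial guesses

for _ in range(10):
    clusters = [[], []]
    for x in samples:                      # hard assignment step
        nearest = min((0, 1), key=lambda j: abs(x - centers[j]))
        clusters[nearest].append(x)
    centers = [sum(c) / len(c) for c in clusters]  # update step

print(centers)  # [1.5, 8.5]
```

On this well-separated toy data the centers converge after one iteration; real k-means also needs care with initialization and empty clusters.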

DBSCAN

A DBSCAN illustration

When the number of clusters k is not given, DBSCAN (density-based spatial clustering of applications with noise) can be used, connecting samples through density diffusion.

Hierarchical clustering

Hierarchical partitions can be visualized using a tree structure (a dendrogram). It does not need the number of clusters as an input, and the partitions can be viewed at different levels of granularity (i.e., clusters can be refined or coarsened) using different values of K.

PCA, SVD and LDA

We generally do not want to feed a large number of features directly into a machine learning algorithm, since some features may be irrelevant or the “intrinsic” dimensionality may be smaller than the number of features. Principal component analysis (PCA), singular value decomposition (SVD), and latent Dirichlet allocation (LDA) can all be used to perform dimension reduction.

PCA is an unsupervised dimension reduction method which maps the original data space into a lower dimensional space while preserving as much information as possible. PCA basically finds a subspace that most preserves the data variance, with the subspace defined by the dominant eigenvectors of the data’s covariance matrix.
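The "dominant eigenvectors of the covariance matrix" idea can be sketched with power iteration on toy 2-D data lying along the line y = x, where the first principal direction is obviously (1, 1)/√2. This is an illustration of the principle, not how PCA libraries compute it.

```python
# Find the dominant eigenvector of the covariance matrix by power iteration.
data = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]

mx = sum(x for x, _ in data) / len(data)
my = sum(y for _, y in data) / len(data)
centered = [(x - mx, y - my) for x, y in data]

# 2x2 covariance matrix of the centered data
n = len(data)
cxx = sum(x * x for x, _ in centered) / n
cyy = sum(y * y for _, y in centered) / n
cxy = sum(x * y for x, y in centered) / n

v = (1.0, 0.0)  # power iteration: repeatedly apply C and renormalize
for _ in range(50):
    w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
    norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
    v = (w[0] / norm, w[1] / norm)

print(v)  # close to (0.707, 0.707), the y = x direction
```

Projecting the data onto this direction keeps all the variance of this toy set in one dimension, which is exactly the subspace-preserving-variance property described above.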

The SVD is related to PCA in the sense that SVD of the centered data matrix (features versus samples) provides the dominant left singular vectors that define the same subspace as found by PCA. However, SVD is a more versatile technique as it can also do things that PCA may not do.

For example, the SVD of a user-versus-movie matrix is able to extract the user profiles and movie profiles which can be used in a recommendation system. In addition, SVD is also widely used as a topic modeling tool, known as latent semantic analysis, in natural language processing (NLP).

A related technique in NLP is latent Dirichlet allocation (LDA). LDA is a probabilistic topic model: it decomposes documents into topics in a similar way as a Gaussian mixture model (GMM) decomposes continuous data into Gaussian densities. Unlike the GMM, LDA models discrete data (words in documents) and constrains the topics to be a priori distributed according to a Dirichlet distribution.

Conclusions

This workflow is easy to follow. The takeaway messages when trying to solve a new problem are:

  • Define the problem. What problems do you want to solve?
  • Start simple. Be familiar with the data and the baseline results.
  • Then try something more complicated.
Dr. Hui Li is a Principal Staff Scientist of Data Science Technologies at SAS. Her current work focuses on deep learning, cognitive computing and SAS recommendation systems in SAS Viya. She received her PhD and Master’s degrees in Electrical and Computer Engineering from Duke University.

Before joining SAS, she worked at Duke University as a research scientist and at Signal Innovation Group, Inc. as a research engineer. Her research interests include machine learning for big, heterogeneous data, collaborative filtering recommendations, Bayesian statistical modeling and reinforcement learning.

What Is Natural Language Processing And What Is It Used For?

Terence Mills 

Terence Mills, CEO of AI.io and Moonshot, is an AI pioneer and digital technology specialist. Connect with him about AI or mobile on LinkedIn.

Artificial intelligence (AI) is changing the way we look at the world. AI “robots” are everywhere. (Mostly in Japan and China)

From our phones to devices like Amazon’s Alexa, we live in a world surrounded by machine learning.

Google, Netflix, data companies, video games and more, all use AI to comb through large amounts of data. The end result is insights and analysis that would otherwise either be impossible or take far too long.

It’s no surprise that businesses of all sizes are taking note of large companies’ success with AI and jumping on board. Not all AI is created equal in the business world, though. Some forms of artificial intelligence are more useful than others.

Today, I’m touching on something called natural language processing (NLP).

It’s a form of artificial intelligence that focuses on analyzing human language to draw insights, create advertisements, help you text (yes, really) and more. (And what of body language?)

But Why Natural Language Processing?

NLP is an emerging technology that drives many forms of AI you’re used to seeing.

The reason I’ve chosen to focus on this technology instead of, say, AI for math-based analysis, is the increasingly broad range of applications for NLP.

Think about it this way.

Every day, humans say thousands of words that other humans interpret to do countless things. At its core, it’s simple communication, but we all know words run much deeper than that. (That’s the function of slang in community)

There’s a context that we derive from everything someone says, whether it’s implied through body language or through how often something is mentioned.

While NLP doesn’t focus on voice inflection, it does draw on contextual patterns. (Meaning: currently it doesn’t care about the emotions?)

This is where it gains its value (As if in communication people lay out the context first?).

Let’s use an example to show just how powerful NLP is when used in a practical situation. When you’re typing on an iPhone, like many of us do every day, you’ll see word suggestions based on what you type and what you’re currently typing. That’s natural language processing in action.

It’s such a little thing that most of us take for granted, and have been taking for granted for years, but that’s why NLP becomes so important. Now let’s translate that to the business world.
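The word-suggestion example above can be captured by a tiny bigram model: count which word tends to follow which, then suggest the most frequent followers. The corpus below is made up and minuscule; a real keyboard model is trained on vastly more text and smarter statistics, but the principle is the same.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus, tokenized into words
corpus = ("i am going home . i am going out . i am happy . "
          "we are going home .").split()

# Count bigrams: for each word, how often each next word follows it
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def suggest(word, k=2):
    """Return the k words most frequently observed after `word`."""
    return [w for w, _ in following[word].most_common(k)]

print(suggest("am"))     # → ['going', 'happy']
print(suggest("going"))  # → ['home', 'out']
```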

Some company is trying to decide how best to advertise to their users. They can use Google to find common search terms that their users type when searching for their product. (In a nutshell, that’s the most urgent usage of NLP?)

NLP then allows for a quick compilation of the data into terms obviously related to their brand and those that they might not expect. Capitalizing on the uncommon terms could give the company the ability to advertise in new ways.

So How Does NLP Work?

As mentioned above, natural language processing is a form of artificial intelligence that analyzes human language. It takes many forms, but at its core, the technology helps machines understand, and even communicate with, human speech.

But understanding NLP isn’t the easiest thing. It’s a very advanced form of AI that’s only recently become viable. That means that not only are we still learning about NLP but also that it’s difficult to grasp.

I’ve decided to break down NLP in layman’s terms. I might not touch on every technical definition, but what follows is the easiest way to understand how natural language processing works.

The first step in NLP depends on the application of the system. Voice-based systems like Alexa or Google Assistant need to translate your words into text. That’s done (usually) using hidden Markov models (HMMs).

The HMM uses mathematical models to determine what you’ve said and translate that into text usable by the NLP system. Put in the simplest way, the HMM listens to 10- to 20-millisecond clips of your speech and looks for phonemes (the smallest units of speech) to compare with pre-recorded speech.
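The core computation inside an HMM decoder is finding the most likely sequence of hidden states (e.g. phonemes) for a sequence of observations, via the Viterbi algorithm. The sketch below uses a made-up 2-state, 3-observation model; real speech systems have thousands of states and continuous acoustic features, but the dynamic program is the same.

```python
import numpy as np

# A tiny, made-up HMM: 2 hidden states, 3 possible observation symbols
start = np.array([0.6, 0.4])            # initial state probabilities
trans = np.array([[0.7, 0.3],           # transition probs (from -> to)
                  [0.4, 0.6]])
emit  = np.array([[0.5, 0.4, 0.1],      # P(observation | state)
                  [0.1, 0.3, 0.6]])

def viterbi(obs):
    """Most likely hidden-state path for an observation sequence."""
    T, N = len(obs), len(start)
    prob = np.zeros((T, N))             # best path probability so far
    back = np.zeros((T, N), dtype=int)  # best predecessor state
    prob[0] = start * emit[:, obs[0]]
    for t in range(1, T):
        scores = prob[t - 1][:, None] * trans   # scores[from, to]
        back[t] = scores.argmax(axis=0)
        prob[t] = scores.max(axis=0) * emit[:, obs[t]]
    path = [int(prob[-1].argmax())]             # backtrack from the end
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

print(viterbi([0, 1, 2]))  # → [0, 0, 1]
```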

Next is the actual understanding of the language and context. Each NLP system uses slightly different techniques, but on the whole, they’re fairly similar. The systems try to break each word down into its part of speech (noun, verb, etc.).

This happens through a series of coded grammar rules that rely on algorithms that incorporate statistical machine learning to help determine the context of what you said.
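A crude but instructive baseline for the part-of-speech step described here is to tag each word with its most frequent tag from labeled data. The mini-corpus and tag set below are invented; real taggers train on large treebanks and use context, but this shows the statistical idea.

```python
from collections import Counter, defaultdict

# Tiny hand-labeled corpus (made up); note "runs" is ambiguous
tagged = [("the", "DET"), ("dog", "NOUN"), ("runs", "VERB"),
          ("the", "DET"), ("cat", "NOUN"), ("runs", "VERB"),
          ("runs", "NOUN")]   # e.g. "he scored three runs"

counts = defaultdict(Counter)
for word, tag_label in tagged:
    counts[word][tag_label] += 1

def tag(word):
    """Most-frequent-tag baseline; unknown words default to NOUN."""
    return counts[word].most_common(1)[0][0] if word in counts else "NOUN"

print([tag(w) for w in ["the", "dog", "runs"]])  # → ['DET', 'NOUN', 'VERB']
```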

If we’re not talking about speech-to-text NLP, the system just skips the first step and moves directly into analyzing the words using the algorithms and grammar rules.

The end result is the ability to categorize what is said in many different ways. Depending on the underlying focus of the NLP software, the results get used in different ways.

For instance, an SEO application could use the decoded text to pull keywords associated with a certain product.

Semantic Analysis

When explaining NLP, it’s also important to break down semantic analysis. It’s closely related to NLP and one could even argue that semantic analysis helps form the backbone of natural language processing.

Semantic analysis is how NLP AI interprets human sentences logically. When the HMM method breaks sentences down into their basic structure, semantic analysis helps the process add meaning.

For instance, if an NLP program looks at the word “dummy” it needs context to determine if the text refers to calling someone a “dummy” or if it’s referring to something like a car crash “dummy.”
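The “dummy” example can be illustrated with a toy disambiguator in the spirit of the classic Lesk algorithm: pick the sense whose signature words overlap most with the surrounding sentence. The senses and signature words below are invented for illustration; real systems use dictionaries or learned embeddings.

```python
# Made-up sense inventory for the word "dummy", each with signature words
senses = {
    "insult": {"stupid", "fool", "called", "idiot"},
    "test dummy": {"crash", "car", "test", "safety"},
}

def disambiguate(sentence):
    """Pick the sense with the largest word overlap with the sentence."""
    words = set(sentence.lower().split())
    return max(senses, key=lambda s: len(senses[s] & words))

print(disambiguate("the crash test dummy hit the car door"))  # → test dummy
print(disambiguate("he called me a dummy , the fool"))        # → insult
```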

If the HMM method breaks down text and NLP allows for human-to-computer communication, then semantic analysis allows everything to make sense contextually.

Without semantic analysis, we wouldn’t have nearly the level of AI that we enjoy. As the process develops further, we can only expect NLP to benefit.

NLP And More

As NLP develops, we can expect to see even better human-to-AI interaction. Devices like Google’s Assistant and Amazon’s Alexa, which are now making their way into our homes and even cars, are showing that AI is here to stay.

The next few years should see AI technology advance even more, with the global AI market expected to push $60 billion by 2025. Needless to say, you should keep an eye on AI.

Cost of a Lemonade stand, and of an Artificial Intelligence program

Three decades ago, I audited an Artificial Intelligence programming course. The professor had never written such a program himself, but it was all the rage.

The prof was candid and said: “The only way to learn this kind of programming is by doing it.” The engine was there to logically arrange the “If this, then do that” rules in order to answer a technical problem. Nothing to it.

I failed even to try a very simple program: I had a heavy workload and didn’t have the passion for any engineering project at the time.

I cannot say that I know Artificial Intelligence, regardless of the many fancy technical and theoretical books I read on the subject

Studying entrepreneurship without doing it is like studying the appreciation of music without listening to it.

(Actually, I did study many subjects, including the supposedly practical ones, but failed to practice any of the skills. My intention was to stay away from theoretical subjects, yet I ended up sticking to the theories. For example, I enrolled in Industrial Engineering, thinking it was mostly a hands-on discipline. Wrong: it was mostly theoretical, simply because the university lacked labs, technical staff and machinery.)

The cost of setting up a lemonade stand (or whatever metaphorical equivalent you dream up) is almost 100% internal. Until you confront the fear and discomfort of being in the world and saying, “here, I made this,” it’s impossible to understand anything at all about what it means to be an entrepreneur. Or an artist.

Never enough

There’s never enough time to be as patient as we need to be.

Not enough slack to focus on the long-term, too much urgency in the now to take the time and to plan ahead.

That urgent signpost just ahead demands all of our intention (and attention), and we decide to invest in, “down the road,” down the road.

It’s not only more urgent, but it’s easier to run to the urgent meeting than it is to sit down with a colleague and figure out the truth of what matters and the why of what’s before us.

And there’s never enough money to easily make the investments that matter.

Not enough surplus in the budget to take care of those that need our help, too much on our plate to be generous right now.

The short term bills make it easy to ignore the long-term opportunities.

Of course, the organizations that get around the universal and insurmountable problems of not enough time and not enough money are able to create innovations, find resources to be generous and prepare for a tomorrow that’s better than today. It’s not easy, not at all, but probably (okay, certainly) worth it.

We’re going to spend our entire future living in tomorrow—investing now, when it’s difficult, is the single best moment.

Posted by Seth Godin on March 11, 2013


How fast are Robotics and Artificial Intelligence progressing?


Google launched an initiative to improve how users work with artificial intelligence

  • The research initiative will involve collaborations with people in multiple Google product groups, as well as professors from Harvard and MIT.
  • More informative explanations of recommendations could result from the research over time.
Monday, 10 Jul 2017 | 12:00 PM ET

Google CEO Sundar Pichai speaks during Google I/O 2016 at Shoreline Amphitheatre

Justin Sullivan | Getty Images

Alphabet on Monday said it has kicked off a new research initiative aimed at improving human interaction with artificial intelligence systems.

The People + AI Research (PAIR) program currently encompasses a dozen people who will collaborate with Googlers in various product groups — as well as outsiders like Harvard University professor Brendan Meade and Massachusetts Institute of Technology professor Hal Abelson.

The research could eventually lead to refinements in the interfaces of the smarter components of some of the world’s most popular apps. And Google’s efforts here could inspire other companies to adjust their software, too.

“One of the things we’re going to be looking into is this notion of explanation — what might be a useful on-time, on-demand explanation about why a recommendation system did something it did,” Google Brain senior staff research scientist Fernanda Viegas told CNBC in an interview.

The PAIR program takes inspiration from the concept of design thinking, which highly prioritizes the needs of people who will use the products being developed.

While end users — such as YouTube’s 1.5 billion monthly users — can be the target of that, the research is also meant to improve the experience of working with AI systems for AI researchers, software engineers and domain experts as well, Google Brain senior staff research scientist Martin Wattenberg told CNBC.

The new initiative fits in well with Google’s increasing focus on AI.

Google CEO Sundar Pichai has repeatedly said the world is transitioning from being mobile-first to AI-first, and the company has been taking many steps around that thesis.

Recently, for example, Google formed a venture capital group to invest in AI start-ups.

Meanwhile Amazon, Apple, Facebook and Microsoft have been active in AI in the past few years as well.

The company implemented a redesign for several of its apps in 2011 and in more recent years has been sprucing up many of its properties with its material design principles.

In 2016, John Maeda, then the design partner at Kleiner Perkins Caufield & Byers, pointed out in his annual report on design in technology that Google had been perceived as improving the most in design.

What is new is that Googlers are trying to figure out how to improve design specifically for AI components. And that’s important because AI is used in a whole lot of places around Google apps, even if you might not always realize it.

Video recommendations in YouTube, translations in Google Translate, article suggestions in the Google mobile app and even Google search results are all enhanced with AI.

Note: with no specific examples to understand what the writer is talking about, consider this article free propaganda for Google

Can a Robot emulate human emotions? That should not be the question

A robot programmed with an artificial intelligence that can learn how to love and express emotions is feasible, and highly welcomed.

A child robot, David, could acquire and follow the various stages of children’s emotional development, all the way to adulthood.

The question is: why should scientists invest time and energy creating a robot that would exacerbate the current calamities experienced and witnessed from human emotions and the consequences and trials of love?

Have we not had enough of negative jealousy, which generates serious pain, frustration, beating, castration, killing…?

It is becoming evident that parents will no longer enjoy adequate quality time and opportunities to care full-time for and nurture their kids.

A kid nurturing robot at home will be the best invention for the stability and healthy emotional development of isolated kids in the future…

If robots have to convey emotions and feelings, they had better extend proper nurturing examples that kids at home may emulate…

Robots must learn to listen to the kids, ask questions, circumvent human shortcomings in failing to communicate, and overcome the tendency of kids to build negative fictitious myths and role-played empathy projected into relationships…

The movie “AI” by Steven Spielberg investigated the limits of man and machine confronted with these ineluctable problems:

1. The child’s separation from family members, particularly the mother’s early emotional attachment… The moment we discover that our mother is Not perfect and our father is a coward…

2. The moment it dawns on the child that we are Not unique, perfect, really loved… as we wished we were…

3. The moment we realize that we are no longer the center of the universe and that the community is too busy to care for our future…

4. The moment we accept that we are “All alone” and have to fend for our own health, safety, mental sanity…

5. The moment we feel that we were left bare and unprepared to face the desolate world around us…

Should the kid robot replace the myth of the “Blue Fairy”?  This fairy is supposed to:

1. Heal the torn parts in the separation with family members…

2. Render possible what we came to learn as irreversible, irreparable, and almost unfeasible…?

3. Convince us that there is always a person out there who will love us and be a true friend for life…

4. Bring our way this person who suffered and felt wounded as we are…

5. Keep at bay those cannibals, ever ready to sacrifice man and animal under the pretense of “celebrating life”…

A child robot with unconditional devotion, soft-spoken, cultured, patient, and willing to listen to our lucubrations…

The happy ending that teaches us to grasp and grab on the fleeting moments of rich happiness, to taste the powerful instants of tenderness…

Freed at last from illusion, myths and these comfortable peaceful world views we thought we had acquired in childhood…

We do live on the assumption of recovering what we had lost, only to learn that what we lost “Never existed” in the first place…

At least, a compassionate kid robot would extend, now and then, at critical difficult moments, a glimpse of our childhood innocent belief system, of a world of goodness, sensibility, and wonder…

Little robot David should learn how and when to inject a healthy dose of emotional adrenaline to keep us sane, and ready to face the real world with more courage, more determination to disseminate what is good in us, the compassion needed to sustain and maintain our hope in a better future…

Note: This post was inspired by an article in the monthly Lebanese magazine Sante/Beaute #21. The article was not signed, but the source may be www.shaomi blog.net
