Adonis Diaries

Archive for the ‘Human Factors/Ergonomics’ Category

A good time to die (October 16, 2008)

We know by now that decisions to resume atomic explosion experiments, in open air or underground, are bad news.

We know that decisions to leave man out of the loop of programmed launching of guided ballistic missiles are wrong decisions.  

We are learning that the ozone layer is good and protects living organisms from lethal doses of ultraviolet radiation, and that the depletion of ozone over the Antarctic is very bad news.

We recognize that the increased concentration of CO2 may be causing the “Greenhouse Effect”, melting the North Pole and raising ocean levels.  (Increased methane emissions from the poles, from the melting of the permafrost layer, are extremely bad news.)

We have this gut feeling that the deforestation of the virgin forests around the Equator is degrading the quality of the air and increasing the number of tsunamis, cyclones, tidal waves and hurricanes.

We blame those who still insist on residing around the targeted sea shores as if these cataclysms would disappear any time soon.  

We are less sure how high-tension pylons in the middle of towns alter the health of children. Active citizens must have learned the lesson by now: do not wait for the results of research and experiments funded by multinationals when health and safety are of concern.

We know that our intelligence is intrinsically malignant, but the most malignant of all are the vicious, lengthy and recurring cycles of decision-making needed to settle on remedial plans of action.

We frequently do not know the mechanisms to resolve what we initiated, much less the processes, which take decades, to recognize the problems, reach agreements to act, and persevere in our programs.

Earth has mechanisms to stabilize the harm done to it, but it requires man to leave it alone for hundreds or thousands of years.

Every time man creates a problem for the earth’s quality and stability, we have to wait for a valiant scientist to sound the alarm.

Then we have to wait for this scientist to affiliate with a recognized international figure to lend credit and weight to the discovery.

Then we have to wait for the convinced scientists and professionals to sign a manifesto and present it to the UN, so that the UN might receive a wake-up call to take on its responsibilities to preserve the human rights to clean air, clean potable water, a clean environment, health, safety and security.

Then we have to wait for one superpower to admit that what is happening is bad, that the level of tolerance, invariably set by unprofessional specialists in the field, is no longer acceptable.  

Then we have to wait for one superpower to unilaterally agree to distance itself from the pack of wolves and actively remediate.

Then we have to hear the complaints about the economic infeasibility of regulations and remedial actions.

Then we have to set a period, which stretches into decades, before starting an effective program that everyone concerned agrees to.

Albert Schweitzer, in “Peace or Atomic War”, a selection of three calls to action, describes the fundamental process that was initiated to put a halt to live atomic explosion experiments.

You discover that physicists, not medical specialists, volunteered to set tolerance levels for radioactive emissions.

You hear Edward Teller, the “eminent” physicist and “father” of the hydrogen bomb, say: “We have got for our national security to keep testing for a harmless hydrogen bomb”; as if states at war intend not to inflict harm!

The UN had to wait for 9,235 scientists, headed by Linus Pauling, to sign a manifesto in January 1958 explaining the lethal harm of radioactive emissions to the next generations.

Only then did the US administration gradually stop financing newspaper apologetics claiming that the experiments caused no tangible harm.

De Gaulle’s France sank an entire atoll in the Pacific to test its open-air nuclear bomb. The French operators (in shorts and bare-chested) and the people on the adjacent islands were Not warned. Most of them did Not die of natural causes.

16,000 US Navy personnel on a destroyer were ordered to turn their faces in one direction and cover them. They were Not warned that a nuclear test was about to be conducted. The sailors could see the bones of their comrades, as in an X-ray, and many were blown off their feet. 15,000 of them died, and Not from natural causes.

After the US, Britain and the Soviet Union were forced to agree on a moratorium on open-air explosions, they resumed their nuclear explosions in “controlled, secure, and safe” underground testing fields.

I never stumbled on a manuscript describing the consequences of underground nuclear testing.

Usually the consequences are long-term in nature, and longitudinal research is too expensive to follow up.

My gut feeling is that these underground tests are directly linked to the current drastic increase in large-scale earthquakes, volcanic eruptions and tidal-wave catastrophes.

Earth may sustain one major destructive factor, but it takes more than one main factor to destabilize the planet and its environment.

Which machine learning algorithm should I use? How many and which one is best?

Note: in the early 1990s, I took graduate classes in Artificial Intelligence (AI) (the if…then series of questions and answers from experts in their fields of work) and in neural networks developed by psychologists.

The concepts are the same, though upgraded with new algorithms and automation.

I recall a book with a table (like the Mendeleev table in chemistry) that contained the terms, mental processes and mathematical concepts behind the ideas that formed the AI trend…

There are several lists of methods, depending on the field of study you are more concerned with.

One list consists of methods that human factors professionals are trained to use when needed, such as:

Verbal protocol, neural network, utility theory, preference judgments, psycho-physical methods, operational research, prototyping, information theory, cost/benefit methods, various statistical modeling packages, and expert systems.

There are those that are intrinsic to artificial intelligence methodology such as:

Fuzzy logic, robotics, discrimination nets, pattern matching, knowledge representation, frames, schemata, semantic network, relational databases, searching methods, zero-sum games theory, logical reasoning methods, probabilistic reasoning, learning methods, natural language understanding, image formation and acquisition, connectedness, cellular logic, problem solving techniques, means-end analysis, geometric reasoning system, algebraic reasoning system.

Hui Li posted the following on Subconscious Musings (April 12, 2017), under Advanced Analytics | Machine Learning.

This resource is designed primarily for beginner to intermediate data scientists or analysts who are interested in identifying and applying machine learning algorithms to address the problems of their interest.

A typical question asked by a beginner, when facing a wide variety of machine learning algorithms, is “which algorithm should I use?”

The answer to the question varies depending on many factors, including:

  • The size, quality, and nature of data.
  • The available computational time.
  • The urgency of the task.
  • What you want to do with the data.

Even an experienced data scientist cannot tell which algorithm will perform the best before trying different algorithms.

We are not advocating a one and done approach, but we do hope to provide some guidance on which algorithms to try first depending on some clear factors.

The machine learning algorithm cheat sheet

Flow chart shows which algorithms to use when

The machine learning algorithm cheat sheet helps you to choose from a variety of machine learning algorithms to find the appropriate algorithm for your specific problems.

This article walks you through the process of how to use the sheet.

Since the cheat sheet is designed for beginner data scientists and analysts, we will make some simplified assumptions when talking about the algorithms.

The algorithms recommended here result from compiled feedback and tips from several data scientists and machine learning experts and developers.

There are several issues on which we have not reached an agreement, and for these issues we try to highlight the commonality and reconcile the differences.

Additional algorithms will be added later as our library grows to encompass a more complete set of available methods.

How to use the cheat sheet

Read the path and algorithm labels on the chart as “If <path label> then use <algorithm>” (a toy code sketch of this if-then reading follows the list below). For example:

  • If you want to perform dimension reduction then use principal component analysis.
  • If you need a numeric prediction quickly, use decision trees or logistic regression.
  • If you need a hierarchical result, use hierarchical clustering.
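Here is the promised toy sketch of that if-then reading, written in Python; the goal labels and the returned algorithm names are illustrative assumptions, not the chart’s exact wording.

```python
# A toy sketch (not part of the cheat sheet itself) of reading the chart
# as "If <path label> then use <algorithm>" rules.
def suggest_algorithm(goal: str, need_speed: bool = False) -> str:
    """Return a first-try algorithm name for a coarse goal description."""
    if goal == "dimension reduction":
        return "principal component analysis"
    if goal == "numeric prediction":
        # quick answers favor simpler, faster models
        return "decision trees or logistic regression" if need_speed else "gradient boosting"
    if goal == "hierarchical result":
        return "hierarchical clustering"
    return "try several algorithms and compare"

print(suggest_algorithm("numeric prediction", need_speed=True))
```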

Sometimes more than one branch will apply, and other times none of them will be a perfect match.

It’s important to remember these paths are intended to be rule-of-thumb recommendations, so some of the recommendations are not exact.

Several data scientists I talked with said that the only sure way to find the very best algorithm is to try all of them.

(Is that a process to find an algorithm that matches your world view on an issue? Or an answer that satisfies your boss?)

Types of machine learning algorithms

This section provides an overview of the most popular types of machine learning. If you’re familiar with these categories and want to move on to discussing specific algorithms, you can skip this section and go to “When to use specific algorithms” below.

Supervised learning

Supervised learning algorithms make predictions based on a set of examples.

For example, historical sales can be used to estimate the future prices. With supervised learning, you have an input variable that consists of labeled training data and a desired output variable.

You use an algorithm to analyze the training data to learn the function that maps the input to the output. This inferred function maps new, unknown examples by generalizing from the training data to anticipate results in unseen situations.

  • Classification: When the data are being used to predict a categorical variable, supervised learning is also called classification. This is the case when assigning a label or indicator, either dog or cat, to an image. When there are only two labels, this is called binary classification. When there are more than two categories, the problem is called multi-class classification.
  • Regression: When predicting continuous values, the problem becomes a regression problem.
  • Forecasting: This is the process of making predictions about the future based on past and present data. It is most commonly used to analyze trends. A common example is the estimation of next year’s sales based on the sales of the current and previous years.

Semi-supervised learning

The challenge with supervised learning is that labeling data can be expensive and time consuming. If labels are limited, you can use unlabeled examples to enhance supervised learning. Because the machine is not fully supervised in this case, we say the machine is semi-supervised. With semi-supervised learning, you use unlabeled examples with a small amount of labeled data to improve the learning accuracy.
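As a rough illustration of this idea (not part of the original article), the sketch below hides most labels of a synthetic dataset, marks them with -1, and lets a self-training wrapper exploit the unlabeled points; it assumes scikit-learn and numpy are available, and the dataset and defaults are arbitrary.

```python
# A minimal semi-supervised sketch: unlabeled examples are marked with -1
# and supplement a small labeled set via self-training.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=500, random_state=0)
y_partial = y.copy()
rng = np.random.RandomState(0)
y_partial[rng.rand(len(y)) < 0.9] = -1          # hide 90% of the labels

model = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
model.fit(X, y_partial)                          # learns from labeled + unlabeled data
print(model.score(X, y))                         # rough sanity check against all labels
```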

Unsupervised learning

When performing unsupervised learning, the machine is presented with totally unlabeled data. It is asked to discover the intrinsic patterns that underlie the data, such as a clustering structure, a low-dimensional manifold, or a sparse tree or graph.

  • Clustering: Grouping a set of data examples so that examples in one group (or one cluster) are more similar (according to some criteria) than those in other groups. This is often used to segment the whole dataset into several groups. Analysis can be performed in each group to help users to find intrinsic patterns.
  • Dimension reduction: Reducing the number of variables under consideration. In many applications, the raw data have very high dimensional features and some features are redundant or irrelevant to the task. Reducing the dimensionality helps to find the true, latent relationship.

Reinforcement learning

Reinforcement learning analyzes and optimizes the behavior of an agent based on feedback from the environment.  Machines try different scenarios to discover which actions yield the greatest reward, rather than being told which actions to take. Trial-and-error and delayed reward distinguish reinforcement learning from other techniques.
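A toy tabular Q-learning sketch of this trial-and-error idea follows; the five-state chain environment, the reward of 1 at the goal, and the learning-rate and discount values are all illustrative assumptions, not anything prescribed by the article (assumes numpy).

```python
# Toy Q-learning on a hand-rolled 5-state chain: the agent only finds out
# that moving right pays off through trial and error and a delayed reward.
import numpy as np

n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.2
rng = np.random.RandomState(0)

for episode in range(500):
    s = 0
    while s != n_states - 1:          # reaching the last state ends the episode
        a = rng.randint(n_actions) if rng.rand() < eps else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0   # delayed reward at the goal
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))               # learned policy: move right everywhere
```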

Considerations when choosing an algorithm

When choosing an algorithm, always take these aspects into account: accuracy, training time and ease of use. Many users put the accuracy first, while beginners tend to focus on algorithms they know best.

When presented with a dataset, the first thing to consider is how to obtain results, no matter what those results might look like. Beginners tend to choose algorithms that are easy to implement and can obtain results quickly. This works fine, as long as it is just the first step in the process. Once you obtain some results and become familiar with the data, you may spend more time using more sophisticated algorithms to strengthen your understanding of the data, hence further improving the results.

Even in this stage, the best algorithms might not be the methods that have achieved the highest reported accuracy, as an algorithm usually requires careful tuning and extensive training to obtain its best achievable performance.

When to use specific algorithms

Looking more closely at individual algorithms can help you understand what they provide and how they are used. These descriptions provide more details and give additional tips for when to use specific algorithms, in alignment with the cheat sheet.

Linear regression and Logistic regression

Charts: linear regression and logistic regression.

Linear regression is an approach for modeling the relationship between a continuous dependent variable $y$ and one or more predictors $X$. The relationship between $y$ and $X$ can be linearly modeled as $y = \beta^T X + \epsilon$. Given the training examples $\{x_i, y_i\}_{i=1}^N$, the parameter vector $\beta$ can be learnt.

If the dependent variable is not continuous but categorical, linear regression can be transformed to logistic regression using a logit link function. Logistic regression is a simple, fast yet powerful classification algorithm.

Here we discuss the binary case, where the dependent variable $y$ only takes binary values $\{y_i \in \{-1, 1\}\}_{i=1}^N$ (this can be easily extended to multi-class classification problems).

In logistic regression we use a different hypothesis class to try to predict the probability that a given example belongs to the “1” class versus the probability that it belongs to the “-1” class. Specifically, we will try to learn a function of the form $p(y_i = 1 \mid x_i) = \sigma(\beta^T x_i)$ and $p(y_i = -1 \mid x_i) = 1 - \sigma(\beta^T x_i)$.

Here $\sigma(x) = \frac{1}{1 + \exp(-x)}$ is a sigmoid function. Given the training examples $\{x_i, y_i\}_{i=1}^N$, the parameter vector $\beta$ can be learnt by maximizing the log-likelihood of $\beta$ given the data set.

Charts: group-by linear regression and logistic regression in SAS Visual Analytics.
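As a rough illustration (not SAS code, and not part of Hui Li’s article), the numpy sketch below fits $\beta$ by gradient ascent on that log-likelihood with labels in {-1, +1}; the synthetic data, learning rate and iteration count are arbitrary assumptions.

```python
# Minimal logistic regression by maximizing sum_i log sigma(y_i * beta^T x_i).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.RandomState(0)
X = rng.randn(200, 3)
beta_true = np.array([1.5, -2.0, 0.5])
y = np.where(rng.rand(200) < sigmoid(X @ beta_true), 1, -1)   # labels in {-1, +1}

beta = np.zeros(3)
lr = 0.1
for _ in range(500):
    # gradient of the log-likelihood: sum_i y_i * x_i * sigma(-y_i * beta^T x_i)
    grad = (y * sigmoid(-y * (X @ beta))) @ X
    beta += lr * grad / len(y)

print(beta)        # should roughly recover beta_true
```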

Linear SVM and kernel SVM

A support vector machine (SVM) training algorithm finds the classifier represented by the normal vector $w$ and bias $b$ of the hyperplane. This hyperplane (boundary) separates different classes by as wide a margin as possible. The problem can be converted into a constrained optimization problem:

$\text{minimize}_w \; \|w\| \quad \text{subject to} \quad y_i(w^T X_i - b) \ge 1, \; i = 1, \dots, n.$


Linear and kernel SVM charts

When the classes are not linearly separable, a kernel trick can be used to map a non-linearly separable space into a higher dimension linearly separable space.

When most dependent variables are numeric, logistic regression and SVM should be the first try for classification. These models are easy to implement, their parameters easy to tune, and the performances are also pretty good. So these models are appropriate for beginners.
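A minimal sketch of that “first try” advice, assuming scikit-learn is available; the make_moons dataset and the kernel choices are illustrative, not from the article.

```python
# A linear SVM and an RBF-kernel SVM on the same non-linearly separable data.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear_svm = SVC(kernel="linear").fit(X_tr, y_tr)
kernel_svm = SVC(kernel="rbf").fit(X_tr, y_tr)       # kernel trick for non-linear data

print("linear:", linear_svm.score(X_te, y_te))
print("rbf:   ", kernel_svm.score(X_te, y_te))       # usually higher on the moons data
```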

Trees and ensemble trees

A decision tree for a prediction model.

Decision trees, random forest and gradient boosting are all algorithms based on decision trees.

There are many variants of decision trees, but they all do the same thing – subdivide the feature space into regions with mostly the same label. Decision trees are easy to understand and implement.

However, they tend to overfit the data when we exhaust the branches and go very deep with the trees. Random forest and gradient boosting are two popular ways to use tree algorithms to achieve good accuracy while overcoming the overfitting problem.
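The contrast can be seen in a few lines, assuming scikit-learn; the synthetic dataset and hyperparameters below are illustrative only.

```python
# One deep decision tree versus two ensemble methods that curb overfitting.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)

for name, model in [
    ("single deep tree", DecisionTreeClassifier(random_state=0)),
    ("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("gradient boosting", GradientBoostingClassifier(random_state=0)),
]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```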

Neural networks and deep learning

Neural networks flourished in the mid-1980s due to their parallel and distributed processing ability.

Research in this field was impeded by the ineffectiveness of the back-propagation training algorithm that is widely used to optimize the parameters of neural networks. Support vector machines (SVM) and other simpler models, which can be easily trained by solving convex optimization problems, gradually replaced neural networks in machine learning.

In recent years, new and improved training techniques such as unsupervised pre-training and layer-wise greedy training have led to a resurgence of interest in neural networks.

Increasingly powerful computational capabilities, such as graphical processing unit (GPU) and massively parallel processing (MPP), have also spurred the revived adoption of neural networks. The resurgent research in neural networks has given rise to the invention of models with thousands of layers.

A neural network

Shallow neural networks have evolved into deep learning neural networks.

Deep neural networks have been very successful for supervised learning.  When used for speech and image recognition, deep learning performs as well as, or even better than, humans.

Applied to unsupervised learning tasks, such as feature extraction, deep learning also extracts features from raw images or speech with much less human intervention.

A neural network consists of three parts: input layer, hidden layers and output layer. 

The training samples define the input and output layers. When the output layer is a categorical variable, then the neural network is a way to address classification problems. When the output layer is a continuous variable, then the network can be used to do regression.

When the output layer is the same as the input layer, the network can be used to extract intrinsic features.

The number of hidden layers defines the model complexity and modeling capacity.
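A small sketch of that three-part structure, assuming scikit-learn; the hidden_layer_sizes value is an arbitrary illustration of how the hidden layers set model complexity and capacity.

```python
# A shallow feed-forward network: input layer from the data, two hidden
# layers, and a categorical output layer (so this is a classification task).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
net.fit(X_tr, y_tr)
print(net.score(X_te, y_te))
```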

Deep Learning: What it is and why it matters

k-means/k-modes, GMM (Gaussian mixture model) clustering

Charts: k-means clustering and Gaussian mixture model.

K-means/k-modes and GMM clustering aim to partition n observations into k clusters. K-means defines a hard assignment: each sample is associated with one and only one cluster. GMM, however, defines a soft assignment for each sample: each sample has a probability of being associated with each cluster. Both algorithms are simple and fast enough for clustering when the number of clusters k is given.
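The hard-versus-soft assignment difference shows up directly in code; the sketch below assumes scikit-learn, and the blob data and k=3 are illustrative.

```python
# Hard assignment (k-means) versus soft assignment (Gaussian mixture), k given.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:5])                      # one cluster index per sample

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
print(gmm.predict_proba(X)[:5].round(2))       # a probability per cluster per sample
```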

DBSCAN

A DBSCAN illustration

When the number of clusters k is not given, DBSCAN (density-based spatial clustering) can be used by connecting samples through density diffusion.
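A small sketch, assuming scikit-learn; note that no k is passed, only a density radius (eps) and a minimum neighborhood size, both chosen arbitrarily here.

```python
# DBSCAN discovers the number of clusters from density; -1 marks noise points.
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=400, noise=0.05, random_state=0)
labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)
print(set(labels))
```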

Hierarchical clustering

Hierarchical partitions can be visualized using a tree structure (a dendrogram). This approach does not need the number of clusters as an input, and the partitions can be viewed at different levels of granularity (i.e., clusters can be refined or coarsened) by cutting the tree at different levels.
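A minimal sketch of building one hierarchy and cutting it at several granularities, assuming scipy and scikit-learn; the data and the cut levels are illustrative.

```python
# Build the full hierarchy once, then refine/coarsen by cutting at different k.
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=100, centers=4, random_state=0)
Z = linkage(X, method="ward")              # the dendrogram data

for k in (2, 4, 8):
    labels = fcluster(Z, t=k, criterion="maxclust")
    print(k, len(set(labels)))             # number of clusters at each cut
```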

PCA, SVD and LDA

We generally do not want to feed a large number of features directly into a machine learning algorithm, since some features may be irrelevant or the “intrinsic” dimensionality may be smaller than the number of features. Principal component analysis (PCA), singular value decomposition (SVD), and latent Dirichlet allocation (LDA) can all be used to perform dimension reduction.

PCA is an unsupervised clustering method which maps the original data space into a lower dimensional space while preserving as much information as possible. The PCA basically finds a subspace that most preserves the data variance, with the subspace defined by the dominant eigenvectors of the data’s covariance matrix.

The SVD is related to PCA in the sense that SVD of the centered data matrix (features versus samples) provides the dominant left singular vectors that define the same subspace as found by PCA. However, SVD is a more versatile technique as it can also do things that PCA may not do.

For example, the SVD of a user-versus-movie matrix is able to extract the user profiles and movie profiles which can be used in a recommendation system. In addition, SVD is also widely used as a topic modeling tool, known as latent semantic analysis, in natural language processing (NLP).
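A short sketch of both reducers, assuming scikit-learn: PCA on dense features, and truncated SVD on a small TF-IDF term-document matrix (the latent-semantic-analysis use mentioned above); the toy documents are my own.

```python
# PCA for dense numeric features, truncated SVD for a sparse text matrix (LSA).
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA, TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

X, _ = make_classification(n_samples=200, n_features=50, random_state=0)
X_low = PCA(n_components=5).fit_transform(X)
print(X_low.shape)                               # (200, 5)

docs = ["the cat sat on the mat", "dogs chase cats", "stock prices fell today"]
tfidf = TfidfVectorizer().fit_transform(docs)    # sparse documents-by-terms matrix
topics = TruncatedSVD(n_components=2).fit_transform(tfidf)
print(topics.shape)                              # (3, 2) latent semantic space
```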

A related technique in NLP is latent Dirichlet allocation (LDA). LDA is a probabilistic topic model that decomposes documents into topics in a similar way to how a Gaussian mixture model (GMM) decomposes continuous data into Gaussian densities. Unlike the GMM, LDA models discrete data (words in documents) and constrains the topics to be a priori distributed according to a Dirichlet distribution.
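A minimal LDA sketch, assuming scikit-learn; the four toy documents and the choice of two topics are illustrative assumptions.

```python
# LDA decomposes word-count vectors into a small number of topics.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the striker scored a late goal in the match",
    "the team won the cup after extra time",
    "the central bank raised interest rates",
    "markets fell as inflation data surprised investors",
]
counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
print(lda.transform(counts).round(2))    # per-document topic proportions
```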

Conclusions

This is a workflow that is easy to follow. The takeaway messages when trying to solve a new problem are:

  • Define the problem. What problems do you want to solve?
  • Start simple. Be familiar with the data and the baseline results.
  • Then try something more complicated.
Dr. Hui Li is a Principal Staff Scientist of Data Science Technologies at SAS. Her current work focuses on Deep Learning, Cognitive Computing and SAS recommendation systems in SAS Viya. She received her PhD and Master’s degrees in Electrical and Computer Engineering from Duke University.

Before joining SAS, she worked at Duke University as a research scientist and at Signal Innovation Group, Inc. as a research engineer. Her research interests include machine learning for big, heterogeneous data, collaborative filtering recommendations, Bayesian statistical modeling and reinforcement learning.

Is it the less information the better in critical split-second decision cases?

The ER of Cook County Hospital (Chicago), on West Harrison Street close to downtown, was built at the turn of the last century.

It was home to the world’s first blood bank, to cobalt-beam therapy, to surgeons reattaching severed fingers, to a famous trauma center for gang gunshot wounds and injuries… and it is most famous for the TV series ER, and George Clooney.

In the mid-90s, the ER welcomed 250,000 patients a year, mostly homeless and uninsured patients…

Smart patients would come to the ER first thing in the morning and pack a lunch and a dinner.  Long lines crowded the walls of the cavernous corridors…

There were no air-conditioners: During the summer heat waves, the heat index inside the hospital reached 120 degrees. 

An administrator didn’t last 8 seconds in the middle of one of the wards.

There were no private rooms and patients were separated by plywood dividers.

There was no cafeteria and there were no private phones: The single public phone was at the end of the hall.

One bathroom served all that crowd of patients.

There was a single light switch: You wanted to light a room and the entire hospital had to light up…

The big air fans, the radios and TV that patients brought with them (to keep company), the nurses’ bell buzzing non-stop and no free nurses around… rendered the ER a crazy place to treat emergency cases

Asthma cases were numerous: Chicago was the world’s worst city for patients suffering from asthma…

Protocols had to be created to efficiently treat asthma cases, chest pain cases, homeless patients…

About 30 patients a day converged on the ER complaining of chest pains (potential heart attack worries), and there were only 20 beds in the two wards for these cases.

It cost $2,000 a night per bed for serious intensive care, and about $1,000 for the lesser care (nurses instead of cardiologists tending to the chest pain patient…)

A third ward was created as an observation unit for half-day patients.

Was there any rational protocol to decide in which ward the chest-pain patient should be allocated to?

It was the attending physician’s call, and most of the decisions were wrong, except for the most obvious heart attack cases…

In the 70’s, cardiologist Lee Goldman borrowed the statistical rules that a group of mathematicians had developed for telling apart subatomic particles. Goldman fed a computer the data from hundreds of files of heart attack cases and crunched the numbers into a “predictive equation” or model.

Four key risk factors emerged as the most critical telltales of a real heart attack case:

1. ECG (the ancient electrocardiogram graph) showing acute ischemia

2. unstable angina pain

3. fluid in the lungs

4. systolic blood pressure under 100…

A decision tree was fine-tuned to decide on serious cases. For example:

1. ECG is normal but at least two key risk factors are positive

2. ECG is abnormal with at least one risk factor positive…

These kinds of decision trees were among the early artificial intelligence programs…
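As an illustration only (this is not the actual Goldman clinical protocol), rules like the two above translate naturally into code; the function name, arguments and thresholds below are my own reading of those example rules.

```python
# Illustrative sketch of the if-then decision rules described above.
def needs_intensive_care(ecg_abnormal: bool,
                         unstable_angina: bool,
                         lung_fluid: bool,
                         systolic_bp_under_100: bool) -> bool:
    risk_factors = sum([unstable_angina, lung_fluid, systolic_bp_under_100])
    if ecg_abnormal:
        return risk_factors >= 1     # abnormal ECG plus at least one risk factor
    return risk_factors >= 2         # normal ECG but at least two risk factors

print(needs_intensive_care(False, True, True, False))   # True -> intensive-care ward
```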

The trouble was that physicians insisted on letting discriminating factors muddle their decisions. For example, statistics had shown that “normally” females do not suffer heart attacks until old age, and thus a young female might be sent home (and die the same night) more often than middle-aged black or older white male patients…

Brendan Reilly, chairman of the hospital’s Department of Medicine, decided to try Goldman’s decision tree.  Physicians were to use both the tree and their own instincts for a period.  The results were overwhelmingly in favor of the Goldman algorithm…

It turned out that, if the physician was not bombarded with dozens of pieces of intelligence and just followed the decision tree, he was better at allocating patients to the proper wards…

For example, a nurse should record all the necessary information about the patient (smoker, age, gender, overweight, job stress, physical activities, high blood pressure, blood sugar content, family history of heart attacks, sweating tendencies, prior heart surgeries…), but the attending physician must quickly receive the results of the 4 key risk factors to decide…

Basically, the physician could allocate the patient to the proper ward without even seeing the individual, and without being influenced by extraneous pieces of intelligence that are not serious today but could be potential hazards later on, or even tomorrow…

Mind you that in order to save on medical malpractice suits, physicians and nurses treating a patient must Not send the patient any signals that can be captured as “contempt”, like feeling invisible and insignificant  https://adonis49.wordpress.com/2012/07/26/what-type-of-hated-surgeons-gets-harassed-with-legal-malpractice-suits/

Many factors are potential predictors for heart attack cases, but they are minor today, for quick decisions…

There is no need to overwhelm with irrelevant information at critical times.  Analytic reasoning and snap judgment are neither good nor bad: either method is bad in the inappropriate circumstances.

On the “battlefield”, the less information coming in and the fewer the communication streams, the better the rapid-cognition decisions of field commanders…

All you need to know is the “forecast” and not the numbers of temperature, wind speed, barometric pressure…

Note: post inspired from a chapter in “Blink” by Malcolm Gladwell

Some have it very easy in life: They are mostly attractive

This Halo effect

A century ago, Edward Lee Thorndike realized that “A single quality or characteristic (beauty, social stature, height…) produces a positive or negative impression that outshine everything else, and the overall effect is disproportionate”

Attractive people have it relatively easy in their professional life and even get better grades from teachers who are affected by the halo.

Attractive people get second chances in life more frequently and are believed more often than ordinary people.

They get away with many “disappointing” behaviors and performances.

One need not be racist, sexist or chauvinist… to fall victim to this unjust subconscious stereotype.

Otherwise, how can teenagers fall in love and marry quickly?

I have watched many documentaries on the mating processes of animals.

And it was not automatic that the male who danced better, had a louder booming voice, or nicer feathers… won over the females.

Apparently, female animals have additional finer senses to select the appropriate mate.

Have you ever wondered why CEOs are mostly attractive, tall, with a full head of hair?

Probably because the less attractive are not deemed appropriate for the media?

Soft skills? Broad learning skills: Bye bye STEM skills?

Google finds STEM skills aren’t the most important skills


  • Lou Glazer is President and co-founder of Michigan Future, Inc., a non-partisan, non-profit organization. Michigan Future’s mission is to be a source of new ideas on how Michigan can succeed as a world class community in a knowledge-driven economy. Its work is funded by Michigan foundations.

A Washington Post column reports on research done by Google on the skills that matter most to its employees’ success. Big surprise: it wasn’t STEM. The Post writes:

Sergey Brin and Larry Page, both brilliant computer scientists, founded their company on the conviction that only technologists can understand technology.

Google originally set its hiring algorithms to sort for computer science students with top grades from elite science universities.

In 2013, Google decided to test its hiring hypothesis by crunching every bit and byte of hiring, firing, and promotion data accumulated since the company’s incorporation in 1998.

Project Oxygen shocked everyone by concluding that, among the 8 most important qualities of Google’s top employees, STEM expertise comes in dead last.

The 7 top characteristics of success at Google are all soft skills:

Like being a good coach; communicating and listening well; possessing insights into others (including others’ different values and points of view); having empathy toward and being supportive of one’s colleagues; being a good critical thinker and problem solver; and being able to make connections across complex ideas.

Those traits sound more like what one gains as an English or theater major than as a programmer.

Could it be that top Google employees were succeeding despite their technical training, not because of it?

After bringing in anthropologists and ethnographers to dive even deeper into the data, the company enlarged its previous hiring practices to include humanities majors, artists, and even the MBAs that, initially, Brin and Page viewed with disdain.

This is consistent with the findings of the employer-led Partnership for 21st Century Learning, which describes the foundation skills for worker success as the 4Cs: collaboration, communication, critical thinking and creativity.

And with the book Becoming Brilliant, which adds content and confidence to those four to make the 6Cs.

And consistent with the work on the value of a liberal arts degree of journalist George Anders laid out in his book You Can Do Anything and in a Forbes article entitled That Useless Liberal Arts Degree Has Become Tech’s Hottest Ticket.

It’s far past time that Michigan policymakers and business leaders stop telling our kids that if they don’t get a STEM-related degree they are better off not getting a four-year degree. It simply is not accurate.

(Not to mention that many of their kids are getting non-STEM related four-year degrees.)

And instead begin to tell all kids what is accurate: that the foundation skills, as Google found out, are Not narrow occupation-specific skills, but rather broad skills related to the ability to work with others, think critically and be a lifelong learner.

The kind of skills that are best built with a broad liberal arts education.

The Post concludes:

No student should be prevented from majoring in an area they love based on a false idea of what they need to succeed.

Broad learning skills are the key to long-term, satisfying, productive careers.

What helps you thrive in a changing world isn’t rocket science. It may just well be social science, and, yes, even the humanities and the arts that contribute to making you not just workforce ready but world ready.

Note: It is about time students took seriously the importance of general knowledge in everything they undertake. Most important of all is to learn to design experiments, developing the experimental mind that does Not come naturally, but only with training.

Efficiency of the human cognitive power or mind

Written in March 6, 2006

Cognitively, humans are excellent at simple detection tasks, or null-indicator judgments such as whether a sensation exists or not.

The mind is fairly good at differentiating the direction of the strength of a sensation, such as bigger or smaller than a standard, but it is bad at evaluating whether a sensation is twice or three times stronger, and it is worst as a meter for exact measurements.

Humans are more accurate when comparing feelings than when relying on the mind to measure. That is why a subject is forced to make a choice between two stimuli rather than to respond that a standard and a variable stimulus feel equally strong.

Man is Not a good observer of complex events.

Even when viewers are forewarned that they are to see a movie about a crime, and that they are to answer questions about details later on, the accuracy of the observers is very low.

The mind is unable to be an objective recorder of the events that transpire because it gets involved in the action of the scene.

The mind has a very narrow range of attention and can barely attend satisfactorily to a couple of stimuli.

This observation deficiency is compounded by our sensory differences and illusions. For example, one in sixteen subjects is color blind, many suffer from tone deafness, taste blindness and so on.

The mind does Not think of itself objectively, but rather has convictions, feelings, and explanations based on very restricted experiences, hearsay and memories, and it tends to generalize and develop a set of beliefs concerning its own operation.

Humans usually expect to see, and then see, what they want to see, and hardly deviate from their beliefs, sometimes even when faced with facts.

Many scientists have overlooked obvious data because they clung to their hypotheses and theories.

Humans have to generate an abundance of reliable information and assimilate it before they can eliminate a few of the systematic biases acquired from previous generations and from personal experience.

This lack of objectivity in humans is referred to by the term “common sense”.

The fact is that common-sense ideas change and undergo continual revision, mainly because of the results of research, controlled experimentation, and paradigm shifts away from traditional knowledge.

For example, common sense said heavy objects cannot fly, until airplanes became common realities.

Common sense said that humans cannot see in the dark, until infrared goggles were tested.

Common sense said that it is laughable to use earplugs in order to hear people talking against very noisy backgrounds, until it was tried and proven correct.

The fact that your father or forerunners have always done something in a particular way does not prove that this is the best way of doing it.

The fact that famous people purchase a product from the best known firm does not permit the manufacturer to state that there cannot be very much wrong with the product since the famous people have bought it.

Process of system/mission analyses? What are the phases?

Written in April 14, 2006

Systems, missions, and products that involve human operators to run, maintain, and keep up-to-date, as societies evolve and change, need to be analyzed at intervals for their consistency with the latest technology advances, people’s expectations, government regulations, and international standards.

To that end, the latest development in the body of knowledge of human physical and cognitive capabilities, along with the latest advancement in the methods applied for analyzing and designing systems have to be revisited, tested, and evaluated for better predictive aptitude of specific human-machine performance criteria.

This article is a refresher tutorial on the necessary sequence of human factors methods offered to analyze each stage in system development.

In general, the basic milestones in system development begin with the exploration concept, demonstration of the concept, validation, full-scale engineering development, testing and debugging for errors and malfunctions, production, and finally operations and support systems for marketing.

Each one of these stages requires the contribution of human factors professionals and experts, drawing on the extensive array of methods they have at their disposal and are trained in, on their vast store of data on human capabilities and limitations, and on their statistical and experimental training.

Human factors professionals can also contribute to the baseline documentation, instructions, training programs, and operations manuals.

Each stage of development has a mission concerning the end product it delivers to the next stage, and the sequence of analyses follows 7 steps.

The first step consists of four requirement analyses: operational analysis of the projected operations that will confront operators and maintainers; comparison of similar systems in operations and functions; measurement and quantification of the activities involved in the operations; and identification of the sources of difficulty or critical incidents that may have to be overcome in the interactions of operators and machines.

The second phase is to figure out the flow of functions and the kinds of action/decision or binary choices at each junction of two successive functions. There is no equipment in mind at this phase of the analysis.

The third phase is concerned with the types of information necessary to undertake each action identified in the second phase.

The fourth phase is the study of allocating operators to sets of functions and activities and how many operators and skill levels might be needed to fulfill the mission.

The fifth phase is to construct detailed analyses of the required tasks for each activity/function, basically trying to integrate people, software, and hardware for smooth operations.

The sixth phase might call for an assortment of methods to collect detailed data on the network of tasks: faulty events, failure modes, the effects or seriousness of the failures, the timeline from beginning to end of a task/activity, how the tasks are linked and how often two tasks interact, simulation techniques (whether computer simulation of a virtual world or prototyping), and eventually controlled experimentation when the traditional methods cannot answer specific questions of cause and effect among the variables.

The seventh and final phase in the analysis of a stage of development is to study the sequence of operations and the physical and mental workload of each operator and to finalize the number and capabilities of the crew operating as a team.

The last five phases are time-consuming, so it is imperative that the first two phases be well planned and analyzed, with firm decisions made for the remaining phases on funding, duration of study, and level of detail.

In all these phases, human factors professionals are well trained to undertake the analyses, because they have the knowledge and methods to characterize the capabilities and limitations of human operators interacting with software and hardware, so that the design, trade-off studies, and predictions of human performance match the requirements for achieving the mission.

The ultimate output/product of the sequence of analyses becomes inputs to specifications, reviews, and for design guidelines.

Art of thinking clear?

Non Transferable Domain Dependence:

Profession, talents, skills, book smart, street smart…

You talk to medical professionals on medical matters and they “intuitively” understand you.

Talk to them on related medical examples based on economics or business perspectives and their attention falter.

Apparently, insights do not pass well from one field to another, unless you are not a professional in any specific field

This knowledge transfer is also domain dependent such as working in the public domain or in private.

Or coming from academia and having to switch to enterprise environment and having to deal with real life problems.

Same tendency when taking a job selling services instead of products.

Or taking a CEO job coming from a marketing department: the talents and skills are not the same and you tend to adopt previous and irrelevant skills that you are familiar with.

Book-smart people do not easily become street-smart individuals.

Novels published by literary critics get the poorest reviews.

Physicians are more prone to smoke than non-medical professionals.

For example, police officers are twice as violent at home compared to other normal people.

Harry Markowitz, who won the Nobel Prize in economics for his “portfolio selection” theory and its applications, could think of nothing better than investing his savings 50/50 in bonds and stocks.

Decision making mathematical theoreticians feel confounded when deciding on their own personal issues.

Many disciplines require mainly skills and talents, such as plumbers, carpenters, pilots, lawyers…

As for financial markets, financial investors and start-up companies… luck plays a bigger role than skills do.

Actually, in over 40% of cases, weak CEOs lead strong companies.

As Warren Buffet eloquently stated: “A good management record is far more a function of what business boat you get into, rather than of how effectively you row”

Note: Read “The art of thinking clear”. I conjecture that people with vast general knowledge do better once they are inducted into a specific field that they feel comfortable in. These people feel that many fields of disciplines can be bundled in a category of “same methods” with basically different terms for the varied specialties.

 

Your sense of smell controls what you spend and who you love

Does this mean that when you lose your sense of smell, your spending and falling-in-love habits are thrown into chaos?

By Georgia Frances King 

Smell is the ugly stepchild of the sense family.

Sight gives us sunsets and Georgia O’Keeffe.

Sound gives us Brahms and Aretha Franklin.

Touch gives us silk and hugs.

Taste gives us butter and ripe tomatoes.

But what about smell?

It doesn’t exist only to make us gag over subway scents or tempt us into a warm-breaded stupor.

Flowers emit it to make them more attractive to pollinators. Rotting food might reek of it so we don’t eat it.

And although scientists haven’t yet pinned down a human sex pheromone, many studies suggest smell influences who we want to climb into bed with. (A no-brainer: what of foul breath, sweat, soiled clothes, unclean hair…)

Olivia Jezler studies the science and psychology that underpins our olfactory system.

For the past decade, she has worked with master perfumers, developed fragrances for luxury brands, researched olfactory experience at the SCHI lab at University of Sussex, and now is the CEO of Future of Smell, which works with brands and new technologies to design smellable concepts that bridge science and art.

In this interview, Jezler reveals the secret life of smell. Some topics covered include:

  • how marketers use our noses to sell to us
  • why “new car smell” is so pervasive
  • how indoor air is often more polluted than outdoor air
  • the reason why luxury perfume is so expensive
  • why babies smell so damn good
  • how Plato and Aristotle poo-pooed our sense of smell

This interview has been condensed and edited for clarity.

Quartz: On a scientific level, why is smell such an evocative sense?

Olivia Jezler: Our sense of smell is rooted in the most primal part of our brain for survival. It’s not linked through the thalamus, which is where all other sensory information is integrated: It’s directly and immediately relayed to another area, the amygdala.

None of our other senses have this direct and intimate connection to the areas of the brain that process emotion, associative learning, and memory. (That’s why we don’t dream “smell”)

Why? Because the structure of this part of the brain—the limbic system—grew out of tissue that was first dedicated to processing the sense of smell.

Our chemical senses were the first that emerged when we were single-cell organisms, because they would help us understand our surroundings, find food, and reproduce.

Still today, emotionally driven responses through our senses of taste and smell make an organism react appropriately to its environment, maximizing its chances for basic survival and reproduction.

Beauty products like lotions and perfumes obviously have their own smells. But what businesses use scent in their branding?

It’s common for airlines to have scents developed for them. Air travel is interesting because, as it’s high stress, you want to make people feel connected to your brand in a positive way.

For example, British Airways has diffusers in the bathrooms and a smell for their towels. That way you walk in and you can smell the “British Airways smell.”

It’s also very common in food.

You can design food so that the smell evaporates in different ways. Nespresso capsules, for instance, are designed to create a lot of odor when you’re using one, so that you feel like you’re in a coffee shop.

I’m sure a lot of those make-at-home frozen pizza brands are designed to let out certain smells while they’re in the oven to feel more authentic, too.

That’s an example of the “enhancement of authenticity.” Another example might be when fake leather is made to smell like real leather instead of plastic.

So we got used to the smell of natural things, but then as production became industrialized, we now have to fabricate the illusion of naturalness back into the chemical and unnatural things?

Yes, that’s it. People will feel more comfortable and they’ll pay more for products that smell the way we imagine them to smell.

For example: “new car smell.” When Rolls Royce became more technologically advanced, they started using plastic instead of wood for some parts of the car—and for some reason, sales started going down. They asked people what was wrong, and they said it was because the car didn’t smell the same. It repelled people from the brand. So then they had to design that smell back into the car.

New car smell is therefore a thing, but not in the way we think. It is a mix of smells that emanate from the plastics and interiors of a car.

The cheaper the car, the stronger and more artificial it smells. German automakers have entire olfactory teams that sniff every single component that goes into the interior of the car with their nose and with machines.

The problem then is if one of these suppliers changes any element of their product composition without telling the automaker, it throws off the entire indoor odor of the car, which was carefully designed for safety, quality, and branding—just another added complexity to the myriad of challenges facing automotive supply chains!

Are these artificial smells bad for us?

Designed smells are not when they fulfill all regulatory requirements. This question touches on a key concern of mine: indoor air. Everybody talks about pollution.

Like in San Francisco, a company called Aclima works with Google to map pollution levels block by block at different times of the day—but what about our workplaces? Our homes? People are much less aware of this.

We are all buying inexpensive furniture and carpets and things that are filled with chemicals, and we’re putting them in a closed environment with often no air filtration.

Then there are the old paints and varnishes that cover all the surfaces! Combine that with filters in old buildings that are rarely or never changed, and it gets awful.

When people use cleaning products in their home, it’s also putting a lot more chemicals into the house than before. (You should open your windows after you clean.)

We’re therefore inhaling all these fumes in our closed spaces. In cities like New York, we spend 90% of our time indoors and the air is three times worse than outdoors.

The World Health Organization says it’s one of the world’s greatest environmental health risks.

There are a few start-ups working on consumer home appliances that help you monitor your indoor air, but I am still waiting to see the one that can integrate air monitoring with filtering and scenting.

Manufacturing smell seems to fall into two camps. The first is fabricating a smell when you’ve taken the authenticity out of the product; in the second, brands simply enhance an existing smell. That’s not fake, but it still doesn’t seem honest.

To me they seem like the same thing: Because they are both designed to enhance authenticity.

There’s an interesting Starbucks case related to smell experiences and profits.

In 2008 they introduced their breakfast menu, which included sandwiches that needed to be reheated. The smell of the sandwiches interfered with the coffee aroma so much that it completely altered the customer experience in store: It smelled of food rather than of coffee.

During that time, repeat customer visits declined as core coffee customers went elsewhere, and therefore sales at their stores also declined, and this impacted their stock. The sandwiches have since been redesigned to smell less when being reheated.

This is starting to feel a bit like propaganda or false advertising. Are there laws around this?

No, there aren’t laws for enhancing authenticity through smell. Maybe once people become more aware of these things, there will be. I think it’s hard at this point to quantify what is considered false advertising.

There aren’t even laws for copyrighting perfumes!

This is a reason why everything on the market usually kind of smells the same: Basically you can just take a perfume that’s on the market and analyze it in a machine that can tell you its composition. It’s easily recreated, and there’s no law to protect the original creation. Music has copyright laws, fragrance does not.

That’s crazy. That’s intellectual property.

It is. As soon as there’s a blockbuster, every brand just goes, “We want one like that!” Let’s make a fragrance that smells exactly like that, then let’s put it in the shampoo. Put it in the deodorant. Put it in this. Put it in that.

If the perfume smells the same and is made with the same ingredients, why do we pay so much more for designer perfumes?

High fashion isn’t going to make [luxury brands] money—it’s the perfumes and accessories.

What differs is the full complexity of the fragrance and how long it lasts.

As for pricing, it’s very much the brand. Perfume is sold at a premium for what it is—but what isn’t?

Your Starbucks coffee, Nike shoes, designer handbags… There can be a difference in the quality of the ingredients, yeah, but if it’s owned by a luxury brand and you’re paying $350, then you’re paying for the brand.

The margins are also really high: That’s why all fashion brands have a perfume as a way of making money. High fashion isn’t going to make them money—it’s the perfumes and accessories. They play a huge, huge role in the bottom line.

How do smell associations differ from culture to culture?

Because of what was culturally available—local ingredients, trade routes et cetera—countries had access to very specific ingredients that they then decided to use for specific purposes.

Because life was lived very locally, these smells and their associations remained generation after generation.

Now if we wanted to change them, it would not happen overnight; people are not being inundated with different smell associations the way they are with fashion and music.

Once a scent is developed for a product in a certain market, the cultural associations of the scent of “beauty,” “well-being,” or “clean” stick around. The fact that smells can’t yet transmit through the internet means that scent associations also keep pretty local.

For example, multinational companies want to develop specific fragrances and storylines for the Brazilian market. Brazilian people shower 3.5 times a day. If somebody showers that much, then scent becomes really important. When they get out of the shower, especially in the northeast of Brazil, they splash on a scented water—it’s often lavender water, which is also part of a holy ritual to clean a famous church, so it has positive cultural connotations.

Companies want to understand what role each ingredient already plays in that person’s life so that they can use it with a “caring” or “refreshing” claim, like the lavender water.

Lavender is an interesting one. In the US, lavender is more of a floral composition versus true lavender. People like the “relaxing lavender” claim, but Americans don’t actually like the smell of real lavender.

On the other hand, in Europe and Brazil, when it says “lavender” on the packaging, it will smell like the true lavender from the fields; in Brazil, lavender isn’t relaxing—it’s invigorating!

In the UK, florals are mostly used in perfumes, especially rose, which is tied to tradition.

Yet in the US, a rose perfume is considered quite old-fashioned—you rarely smell it on the subway, whereas the London Tube smells like a rose garden.

In Brazil, however, florals are used for floor and toilet cleaners; the smell of white flowers like jasmine, gardenia, and tuberose are considered extremely old-fashioned and unrelatable. However, in Europe and North America, these very expensive ingredients are a sign of femininity and luxury.

Traditional Chinese medicine influences the market in China: Their smells are a bit more herbal or medicinal because those ingredients are associated with health and well-being. You see that in India with Ayurvedic medicine as well. By comparison, in the US, the smell of health and cleanliness is the smell of Tide detergent.

Are there smells we can all agree on biologically, no matter where we’re from, that smell either good or bad?

Yes: Body fluids, disease, and rotten foods are biological no-nos.

Natural gas, which you can smell in your kitchen if you leave the gas on by mistake, is in reality odorless: A harmless chemical is added to give gas a distinctive malodor that is often described as rotten eggs—and therefore acts as a warning!

The smell of babies, on the other hand? Everybody loves the smell of babies: It’s the next generation.

Do you wear perfume yourself?

I wear tons of perfume. However, if I’m working in a fragrance house or a place where I smell fragrances all the time, I don’t wear perfume, because it then becomes difficult to smell what is being created around me. There is also a necessity for “clean skin” to test fragrances on—one without any scented lotions or fragrances.

Why does perfume smell different on different people? Is it because it reacts differently with our skin, or is it because of the lotions and fabric softeners or whatever other smells we douse ourselves in?

Cancers and diabetes can be identified through body odor.

Generally, it’s our DNA. But there are different layers to how we smell. Of course, the first layer is based on the smells we put on: soaps and deodorants and whatever we use. Then there’s our diet, hydration level, and general health.

An exciting development in the medical world is in diagnostics: Depending upon if we’re sick or not, we smell different.

Cancers and diabetes can be identified through body odor, for instance. Then on the most basic level, our body odor is linked to the “major histocompatibility complex” (MHC), which is a part of the genome linked to our immune system. It is extremely unique and a better identifier than a retinal scan because it is virtually impossible to replicate.

Why don’t we care more about smell?

The position that our sense of smell holds is rooted in the foundation of Western thought, which stems from the ancient Greeks. Plato assigned the sense of sight as the foundation for philosophy, and Aristotle provided a clear hierarchy where he considered sight and hearing nobler in comparison to touch, taste, and smell.

Both philosophers placed the sense of smell at the bottom of their hierarchy; logic and reason could be seen and heard, but not smelt.

The Enlightenment philosophers and the Industrial Revolution did not help, either, as the stenches that emerged at that time due to terrible living conditions without sewage systems reminded us of where we came from, not where we were headed.

Smell was not considered something of beauty nor a discipline worth studying.

It’s also a bit too real and too closely tied to our evolutionary past. We are disconnected from this part of ourselves, so of course we don’t feel like it is something worth talking about.

As society becomes more emotionally aware, I do think smell will gain a new role in our daily lives.

This article is part of Quartz Ideas, our home for bold arguments and big thinkers.

Restructuring engineering curriculums to respond to end users demands, safety and health

In 1987, Alphonse Chapanis, a renowned Human Factors professional, urged that published Human Factors research papers target the practical design need of the various engineering disciplines so that the research data be readily used by engineers.

Dr. Chapanis was trying to send a clear message that Human Factors main discipline was to design interfaces between systems and end users and thus, research papers have to include sections directing the engineers as to the applicability of the results of the paper to design purposes.

In return, it is appropriate to send the message that all engineering disciplines should include sections in their research papers orienting the engineering practitioners to the applicability of the results of the papers to the end users and how Human Factors professionals can judiciously use the data in their interface designs.

As it was difficult for Human Factors professionals to send the right message to engineering practitioners, and as they still have enormous difficulty disseminating their proper purpose and goals, it will be a steep road for engineers to send the right message that what they design actually targets the needs and new trends of end users.

As long as the engineering curriculums fail to include the Human Factors field as an integral part of their structures, it is not realistic to contemplate any shift in their designs toward the end users.

Systems would become even more complex and testing and evaluation more expensive in order to make end users accept any system and patronize it.

So why not design things right the first time, by being initiated into and exposed to human capabilities and limitations, and to users’ safety and health?

Instead of recognizing from the early phases in the design process that reducing human errors and risks to the safety and health of end users are the best marketing criteria for encouraging end users to adopt and apply a system, we see systems are still being designed by different engineers who cannot relate to the end users because their training is not explicitly directed toward them.

What is so incongruous with the engineering curriculums to include courses that target end users?

Why would these curriculums not include courses in occupational safety and health, consumer product liability, engineers as expert witnesses, the capabilities and limitations of humans, marketing, psychophysics and experimental design?

Are the needs and desires of end users beneath the objectives of designing systems?

If that were true, why are systems constantly being redesigned, evaluated and tested to match market demands?

Why do companies have to incur heavy expenses to rediscover the wheel: that any successful design ultimately relies on usefulness, acceptability and agreement with end users’ desires and dreams?

Why not start from the foundation that any engineering design is meant for humans, and that designed objects or systems are meant to fit human behavior, not vice versa?

What seem to be the main problems for implementing changes in the philosophy of engineering curriculums?

Is it the inability to find enough Human Factors, ergonomics and industrial psychology professionals to teach these courses?

Is it the need to allow the thousands of psychology, marketing and business graduates to find outlets (“débouchés”) in the marketplace for estimating users’ needs, desires and demands, and for retesting and re-evaluating systems after the damage is done?

Maybe because Human Factors professionals have so far failed to make a significant enough impact to pressure governments to make the field part and parcel of engineering practice?

Note: I am Not sure if this discipline Human Factors/Ergonomics is still a separate field in Engineering or has been integrated in all engineering disciplines.

From my experience teaching a few courses at universities, I propose that a course in experimental design be an integral part of all engineering disciplines: students graduate without having a serious idea of how to run “sophisticated” experiments, how to discriminate among independent variables, dependent variables and control variables… or how to interpret complex graphs.

