Adonis Diaries


Social loafing effect: Two horses do not pull twice the force of a single one

When 8 people pull on a rope, each one tends to apply only 50% of his potential

How much idleness can we get away with when working in a team, and how big does the team have to be before we notice the idle members?
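A back-of-the-envelope sketch in Python shows how the total pull grows much slower than the head count. The 50% figure for 8 pullers comes from the experiment above; the decay curve between 1 and 8 pullers is purely an assumption for illustration.

```python
# Hypothetical model of the Ringelmann (social loafing) effect: per-person
# effort shrinks as the team grows. Only the "50% at 8 pullers" figure comes
# from the article; the linear decay in between is an assumption.

def individual_effort_fraction(team_size: int) -> float:
    """Fraction of solo potential each member applies (assumed linear decay)."""
    if team_size <= 1:
        return 1.0
    # Assume effort falls from 100% (alone) to 50% (team of 8), then flattens.
    return max(0.5, 1.0 - 0.5 * (team_size - 1) / 7)

solo_pull = 100.0  # arbitrary units of force per person at full potential

for n in (1, 2, 4, 8):
    total = n * solo_pull * individual_effort_fraction(n)
    print(f"team of {n}: total pull ~= {total:.0f} (naive expectation {n * solo_pull:.0f})")
```

With these assumed numbers, two pullers deliver roughly 186 units instead of 200, and eight deliver about 400 instead of 800: two horses do not pull twice the force of one.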

Teams should be composed of different specialized professionals so that failure can be traced back to the culprit.

Otherwise, members will easily cover for a deficient member, since it would require too much time and effort to investigate everyone.

Consequently, teams take bigger risks than individuals do, because of the diffusion-of-responsibility effect.

Read: The Art of Thinking Clearly

Risk or Uncertainty? Does the difference make any difference in our behavior?

Risk involves known probabilities for whatever we decide to do or gamble on: for example, playing in casinos, tossing a coin, or studying probability in a textbook.

Uncertainty means the probabilities of the outcomes are unknown, and thus people prefer to work with known probabilities.

Consider this experiment known as the Ellsberg Paradox:

We have two boxes, A and B.

Box A contains 50 red balls and 50 black balls. Box B contains 100 balls, but the exact mix of red and black is unknown.

Given the choice, people will select box A when asked to draw a red ball. After this first selection, the same subjects also prefer box A when asked to draw a black ball in a second trial.

Logically, if you opted to select a red ball from box A in the first trial, this should mean that you assumed black balls to be more numerous in box B. However, box A is still the preferred box to select a black ball. Why?

Are the subjects not assuming anything? Did they not understand what to do?

Most probably, people have an aversion to uncertainty. This is called “Ambiguity Aversion”.
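A small worked sketch makes the inconsistency explicit. The payoff of 1 for drawing the named color and the subject's belief about box B are hypothetical numbers for illustration.

```python
# Expected payoff of drawing from each box, with an assumed payoff of 1 for
# pulling out the named color. p_red_B stands for the subject's (unknown)
# belief about the share of red balls in box B.

def expected_payoff(p_win: float, payoff: float = 1.0) -> float:
    return p_win * payoff

p_red_A = 0.5   # box A: 50 red, 50 black -> known probability
p_red_B = 0.4   # hypothetical belief implied by preferring A for the red draw

# Trial 1: draw a red ball. Preferring A means acting as if p_red_B < 0.5.
print(expected_payoff(p_red_A), expected_payoff(p_red_B))          # box A: 0.5, box B: 0.4

# Trial 2: draw a black ball. The same belief implies p_black_B = 1 - p_red_B = 0.6,
# so box B should now look strictly better -- yet subjects still prefer box A.
print(expected_payoff(1 - p_red_A), expected_payoff(1 - p_red_B))  # box A: 0.5, box B: 0.6
```

Whatever belief about box B justifies the first choice contradicts the second choice, which is why the pattern is read as aversion to ambiguity rather than a probability judgment.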

Mostly, we lead our lives navigating oceans of uncertainty, even though we have a strong aversion to it.

Statistics and probability are hard to comprehend and master, or even to recall their tenets after a short while.

Consequently, even in cases of known probabilities (risky events), we tend to treat them as decisions “under uncertainty”.

In any case, statistics don’t stir us: people do.

A thousand people died of famine? That’s terrible, but famine tragedies are getting pretty common.

Show the face of a hungry kid dying of famine, and donations pour in.

Give the story a face

Estimation is another part of risk, and we tend to systematically overestimate our chances of success.

If we think that we have talent, we start believing we are going to become a star musician, actor, painter, photographer, model, sports figure…

After all, the media consistently display only the stars, so you come to think that stars are the vast majority and that becoming one is pretty common with adequate skills and talent: “I have enough talent, and with a little luck, I’ll be a star very shortly…”

What that reasoning misses is that for every star, maybe 10,000 failed in their attempts. Only so much luck can be distributed among the few lucky ones, mostly those in developed nations with a variety of opportunities.

We don’t try to take a good look at the graveyards of all these once-promising talented hopefuls. This is called the “Survivorship Bias”.
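A quick Python sketch using the article's rough ratio of one star per 10,000 hopefuls shows how small the base rate really is (the population size is an arbitrary choice for illustration):

```python
# Base-rate check with the article's rough ratio of one star per 10,000 hopefuls.
import random

random.seed(0)
hopefuls = 1_000_000
p_star = 1 / 10_000                      # article's rough ratio

stars = sum(random.random() < p_star for _ in range(hopefuls))
print(f"stars: {stars} out of {hopefuls:,} hopefuls "
      f"({stars / hopefuls:.4%} success rate)")
# The media show only the ~100 winners, never the ~999,900 in the 'graveyard'.
```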

Note: Daniel Ellsberg is mostly known as the one who leaked the top-secret Pentagon Papers to the press; the White House operation mounted against him became part of the Watergate scandal that led to the downfall of President Nixon.

 

Article #35 (Started March 4, 2006)

 “Efficiency of the human body structure”

This article is an ongoing project to summarize a few capabilities and limitations of man. While the most sophisticated intelligent machines invented by man may contain up to ten thousand elements, the human machine is made of tens of trillions of cells, on the order of a hundred billion neurons in the central nervous system, about two hundred bones, and hundreds of organs, muscles, tendons and ligaments.

In the previous article #33 we discussed a graph in story style and discovered that the bare human foot, in its texture, shape, and toes, has a higher coefficient of friction than many man-made shoes, allowing easier traction to move forward for less energy expenditure. We also expanded the story to observe that the structure of the bones and the major muscles attached to the limbs act as lever systems that provide higher speed and range of movement at the expense of exorbitant muscular effort.

The most important knowledge for designing interfaces is a thorough recognition of the capabilities and limitations of the five senses.  One of the assignments involves comparing the various senses across two dozen categories such as: anatomy, physiology, receptor organs, stimulus, sources of energy, wave forms, reaction time, detectable wavelengths and frequencies, practical detection thresholds of signals, muscles, physical pressure, infections and inflammations, disorders and dysfunctions, assessment, diagnostic procedures, corrective measures, effects of age, and safety and risk.

Human dynamic effort for doing mechanical work is at best 30% efficient, because most of the effort is diverted to maintaining static positions that preserve stability and equilibrium across all the concomitant stabilizing joints, bones and muscles.  For example, the stooping position consumes 60% of the effort spent getting a task done, in addition to imposing an extremely high moment on the edges of the lower-back intervertebral discs.  Static postures constrict the blood vessels, so fresh blood no longer carries the nutrients necessary to sustain the effort for long, heart rate increases dramatically, lactic acid accumulates in the cells, and fatigue ensues until the body rests to break down that acid.
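A worked example of that arithmetic, using the 30% and 60% figures quoted above; the 1,000 J metabolic input is a hypothetical number chosen only for illustration:

```python
# Worked example with the article's figures: at best 30% of metabolic energy
# becomes mechanical work, and a stooping posture spends 60% of the effort
# just holding the position. The 1000 J input is a made-up illustrative value.

metabolic_input_j = 1000.0
overall_efficiency = 0.30            # article: at best 30% becomes useful work
useful_work_j = metabolic_input_j * overall_efficiency

stooping_overhead = 0.60             # article: 60% of effort holds the stoop
effort_left_for_task_j = metabolic_input_j * (1 - stooping_overhead)

print(f"useful mechanical work: {useful_work_j:.0f} J out of {metabolic_input_j:.0f} J")
print(f"effort left for the task when stooping: {effort_left_for_task_j:.0f} J")
```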

Human energy efficiency is even worse because most of the energy expended is converted into heat.  Not only does physical exercise generate heat but, except for glucose or sugar, most nutrients have to undergo chemical transformations to break the compounds down into useful and ready sources of energy, generating still more heat.  Consequently, heat is produced even during sleep, when the body's cells are regenerated. Internal heat can be a blessing in cold environments but a worst-case scenario in a hot atmosphere, because the human cooling mechanism is confined to sweating off the heat accumulated in the bloodstream.  Heat is also a blessing when we are sick with microbes and bacteria, because these are killed when internal body temperature rises above normal.

“Human Factors versus Artificial Intelligence”

Article #49 in the Human Factors in Engineering category, written on June 12, 2006

In her book “Choices” the actress Liv Ullman asks her scientist lover: “What do you believe will be the main preoccupation of science in the future?” and he replies: “I have no doubt that it will be the search for the correct definition of error.”

The scientist goes on to state that a living organism has the capacity to make mistakes, to get in touch with chance or hazard, which is an absolutely necessary part of survival (a kind of trial and error).

The meaning of error here is broad and related to the chance of surviving a cataclysm, where the survivors are probably the errors or the monsters in the tail of the “normal group”.

It is the monsters that scientists would be interested in studying in the future, because they fail to fit in the logical processes of our automated life.

A taxonomy of Artificial Intelligence (AI), understood as the duplication of the human faculties of creativity, self-improvement, and language usage, might be necessary to illustrate the progress attained in this field.

There are four basic systems of AI: 1) thinking like a human, 2) thinking rationally, 3) acting like a human, and 4) acting rationally.

The purpose of the first system is to create computing machines with complex, human-like minds, or to automate the activities that we associate with human thinking.

In order to satisfy that goal, adequate and accurate answers need to be provided to the following questions:

1. Where does knowledge come from?

2. How to decide when payoff may be far in the future?

3. How do brains process information?

4. How do humans think and act, and how do animals think and act?

The preferred approaches by scientists in that system were either to model our cognitive processes or to get inside the actual workings of minds.

In 1957, Simon claimed that “There are now machines that think, learn, and create.  In a visible future the range of problems they can handle will be coextensive with the range of the human mind.”

This claim was backed by practical programs such as the ‘Logic Theorist’, a reasoning program that could think non-numerically and prove theorems from the mathematical book ‘Principia Mathematica’.

The ‘Logic Theorist’ was followed by the ‘General Problem Solver’, a program that imitated human problem-solving protocols while manipulating data structures composed of symbols.

The purpose of the second system, thinking rationally, is to create computational models of mental faculties or, alternatively, to compute the faculties of perceiving, reasoning, and acting.  The typical questions to resolve are:

1. How does the mind arise from a physical brain?

2. What can be computed?

3. How do we reason with uncertain information?

4. How does language relate to thought?

The approaches undertaken were to codify the ‘laws of thought’ and to use syllogisms for ‘right thinking’, as in Aristotle.

McCulloch & Pitts claimed in 1943 that “Any computable function can be derived from some network of connected neurons, with logical switches such as AND, OR, and NOT”.
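A minimal Python sketch of that idea: a threshold unit with fixed weights realizing AND, OR, and NOT. This is a toy illustration of the principle, not McCulloch & Pitts' original notation.

```python
# A McCulloch-Pitts style threshold neuron: output 1 if the weighted sum of
# binary inputs reaches the threshold, otherwise 0.

def neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b):  return neuron((a, b), (1, 1), threshold=2)
def OR(a, b):   return neuron((a, b), (1, 1), threshold=1)
def NOT(a):     return neuron((a,),  (-1,),  threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0:", NOT(0), "NOT 1:", NOT(1))
```

Networks of such units can be composed to compute any Boolean function, which is the substance of the 1943 claim.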

This line of thinking was supported by an updated learning rule for modifying connection strengths, and by the construction of the first neural network computer, the SNARC, by Minsky & Edmonds (1951).

The purpose of the third system, acting like a human, is to create machines that perform functions requiring human-like intelligence or, alternatively, to get computers to do what people currently do better.  The relevant questions to resolve are:

1. How does knowledge lead to action?

2. How to decide so as to maximize payoff?

3. How can we build an efficient computer?

The main approach was to emulate the Turing test, in which a program passes if people judging its responses cannot reliably tell whether they come from a machine or a human.

This system was successful when programs for playing checkers or chess learned to play better than their creators, and when Gelernter (1959) constructed the ‘Geometry Theorem Prover’ program.

To that end, Minsky (1963) initiated a series of anti-logical programs called ‘microworlds’ within limited domains, such as SAINT, ANALOGY, STUDENT, and the famous blocks world.

The purpose of the fourth system, acting rationally, is to define the domain of AI as computational intelligence for designing intelligent agents (Poole, 1998), or artifacts that behave intelligently.  The relevant questions are:

1. Can formal rules draw valid conclusions?

2. What are these formal rules?

3. How can artifacts operate under their own control?

The approaches were to create rational agents and programs that operate under autonomous control.

This line of thinking generated the LISP programming language, and then the ‘Advice Taker’ (McCarthy, 1958), in which new axioms could be added to the central principles of knowledge representation and reasoning without reprogramming.

An advisory committee reported in 1966 that “There has been no machine translation of general scientific text, and none is in immediate prospect.”

Since then, most work in AI research has concentrated within the fourth system in developing AI programs.

The fourth view of AI is within human capability and is where progress can be made, but there is a caveat.

If humans rely exclusively on the fourth system, which is the realm of people educated in mathematics and engineering, the danger is that autonomous systems will be developed as if humans behaved logically.

Thus, we might end up with systems that do not match the fundamentally uncertain behavior of humans.

The concept of error as defined at the beginning of this article will not be accounted for, and major calamities might befall humankind.

Note 1: On error taxonomy https://adonis49.wordpress.com/2009/05/26/error-taxonomies-in-human-factors/

Note 2: Artificial Intelligence started being applied around 1988, trying to cope with retiring “experts” who had been decades on the job and knew what works and what doesn’t. The program was tailor-made for a specific job with a series of “What if” questions followed by answers from the expert. I tried my hand at these kinds of programs.
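A minimal sketch of that kind of rule-based program; the pump-maintenance domain and the rules themselves are hypothetical, chosen only to show the "if the expert says X, then advise Y" structure captured from the interviews.

```python
# Toy rule-based expert system in the style of the late-1980s programs the
# note describes: the expert's answers to "what if" questions are stored as
# if-then rules. The maintenance domain and rules are hypothetical examples.

RULES = [
    (lambda f: f["vibration"] == "high" and f["noise"] == "grinding",
     "Likely worn bearing: replace the bearing."),
    (lambda f: f["pressure"] == "low" and f["vibration"] == "normal",
     "Likely clogged intake filter: clean or replace the filter."),
]

def diagnose(findings: dict) -> str:
    """Return the advice of the first rule whose condition matches the findings."""
    for condition, advice in RULES:
        if condition(findings):
            return advice
    return "No rule fired: consult the expert and add a new rule."

print(diagnose({"vibration": "high", "noise": "grinding", "pressure": "normal"}))
print(diagnose({"vibration": "normal", "noise": "none", "pressure": "low"}))
```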

Note 3: Currently, AI relies on big-data sources gathered from all kinds of fields. My impression is that the products are evaluated through testing and not through time-consuming experiments. It would be more efficient to collect facts from peer-reviewed research papers, but this would require full-time professionals to select which papers are scientific and which are pseudo-scientific or merely funded by biased, interested companies.

