Adonis Diaries


Article #12, April 9, 2005

“What are the error taxonomies in Human Factors?”

There is a tendency to separate errors made by humans from those made by machines, as if any man-made equipment, product, or system had not been designed, tested, evaluated, manufactured, distributed, or operated by a human.  My point is that any error committed while using or operating an artificial implement that causes injury is ultimately a human error.

How does Human Factors classify the errors people commit when operating systems, and what are the types of errors, their frequencies, and their consequences for the health and safety of operators and for system performance?  Human Factors professionals have attempted to establish various error taxonomies: some within specific contexts, such as nuclear power plants and chemical installations, covering deficiencies in design and operation that might be committed, and others general in nature, restricted to processes of the mind and the limitations of human capabilities.

One alternative classification of human errors is based on human behavior and the level of comprehension; mainly skill-based, rule-based, or knowledge-based behavioral patterns. For example, Rasmussen (1982) developed a decision flow diagram that identifies 13 types of errors:

1. Two kinds of errors attributable to skill-based behavior, such as acts relevant to manual variability or topographic misorientation;

2. Four major errors related to rule-based behavior, such as stereotype takeover, forgetting isolated acts, mistakes among alternatives, and other slips of memory;

3. Seven types of errors attached to knowledge-based behavior, such as familiar association shortcuts, information not seen or sought, information assumed but not observed, information misinterpreted, and side effects or conditions not adequately considered.
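Rasmussen's three behavioral levels and their named error types can be sketched as a simple lookup structure. This is only an illustrative way of organizing the taxonomy (the original enumerates 13 types, but only those named in the text appear here):

```python
# Sketch of Rasmussen's (1982) error taxonomy as a lookup table.
# Only the error types explicitly named in the text are listed; the
# knowledge-based level has seven types in total, five of which are named.
RASMUSSEN_TAXONOMY = {
    "skill-based": [
        "manual variability",
        "topographic misorientation",
    ],
    "rule-based": [
        "stereotype takeover",
        "forgetting isolated acts",
        "mistakes among alternatives",
        "other slips of memory",
    ],
    "knowledge-based": [
        "familiar association shortcut",
        "information not seen or sought",
        "information assumed but not observed",
        "information misinterpreted",
        "side effects not adequately considered",
    ],
}

def level_of(error_type):
    """Return the behavioral level a given error type belongs to."""
    for level, errors in RASMUSSEN_TAXONOMY.items():
        if error_type in errors:
            return level
    return None
```

A classification query then reduces to a dictionary lookup, e.g. `level_of("stereotype takeover")` returns `"rule-based"`.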

These types of errors are the products of activities performed in routine situations, or when the situation deviates from the normal routine, and they discriminate among the stages and strengths of the controlled routines in the mind that precipitate an error: executing a task, omitting steps, changing the order or sequence of steps, timing errors, or inadequate analysis or decision-making.  With a strong knowledge of the behavior of a system, provided that the mental model is not deficient and the rules are applied consistently, most errors will be concentrated at the level of skill achieved in performing a job.

Another taxonomy relies on the theory of information processing and is, in a way, a literal transcription of the experimental process: observation of the status of a system, choice of hypothesis, testing of the hypothesis, choice of goal, choice of procedure, and execution of the procedure.  Basically, this taxonomy may answer the problems in rule-based and knowledge-based behavior.
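The six stages of this information-processing taxonomy form an ordered sequence, so one illustrative way to use it is to locate the earliest stage at which an operator's process broke down. This is a hypothetical sketch of that idea, not an established algorithm:

```python
# The six stages of the information-processing taxonomy, in order.
STAGES = [
    "observation of system status",
    "choice of hypothesis",
    "testing of hypothesis",
    "choice of goal",
    "choice of procedure",
    "execution of procedure",
]

def first_failed_stage(outcomes):
    """Given a mapping {stage: True/False}, return the earliest stage
    that failed, i.e. where the error would be classified."""
    for stage in STAGES:
        if not outcomes.get(stage, True):
            return stage
    return None  # no failure recorded at any stage
```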

Another alternative taxonomy can be found in the measurement errors considered in statistical research: conceptual, consistent, or random errors. Conceptual errors are committed when a proxy is used instead of the variable of interest, either because of a lack of knowledge of how to measure the latter (e.g., measuring vocabulary ability when mental ability is the object of the research) or because the proxy is less expensive or more convenient.

Consistent errors are systematic errors originating from respondents (whether conscious or not), measuring instruments, research settings, interviewers, raters, and researchers. Consistent errors affect the validity of measures.

Random errors occur as a result of temporary fluctuations in respondents, raters, etc.  Random errors affect the reliability of the measures.
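The distinction between consistent and random errors can be made concrete with a small simulation (a hypothetical sketch; the true value, bias, and noise figures are arbitrary): a consistent error shifts every reading the same way, so the mean is off while the readings agree with each other, whereas random error scatters the readings around the true value, so the mean is close but individual readings disagree.

```python
import random

random.seed(0)
TRUE_VALUE = 100.0  # hypothetical true value of the measured quantity

def measure(n, bias=0.0, noise=0.0):
    """Simulate n readings: a consistent (systematic) bias shifts every
    reading identically; random noise varies from reading to reading."""
    return [TRUE_VALUE + bias + random.gauss(0, noise) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

# Consistent error: the mean is off by the bias (validity suffers),
# but the readings barely scatter (reliability is intact).
biased = measure(1000, bias=5.0, noise=0.1)

# Random error: the mean stays near the true value (validity is intact),
# but individual readings scatter widely (reliability suffers).
noisy = measure(1000, bias=0.0, noise=5.0)
```

Averaging many readings tames random error but does nothing against a consistent one, which is why consistent errors threaten validity while random errors threaten reliability.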

The effects of these measurement errors have different consequences depending on whether they are committed relative to the dependent or the independent variables. It would be interesting to find correspondences among the various error taxonomies, as well as to assign every error either to a conscious, predetermined tendency, along with the real reasons underlying these errors, or to unconscious errors.

It is useful to specify, in the final steps of a taxonomy, whether an error is one of omission or of commission.  I suggest that errors of commission also be fine-tuned to differentiate among errors of sequence, the kind of sequence, and the timing of execution.

There are alternative strategies for reducing human errors: training, selection of appropriate applicants, or redesigning a system to fit the capabilities of end users and accommodate their limitations through preventive designs, exclusion designs, and fail-safe designs.
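Two of these design strategies can be illustrated in code (a minimal, hypothetical heater controller; the names and temperature limit are invented for illustration): an exclusion design makes the erroneous action impossible to express at all, while a fail-safe design ensures that when an error does slip through, the system defaults to a safe state.

```python
from enum import Enum

class Mode(Enum):
    # Exclusion design: only these modes exist, so requesting an
    # invalid mode is impossible to express with this type.
    OFF = 0
    LOW = 1
    HIGH = 2

MAX_SAFE_TEMP = 80.0  # hypothetical safety limit, in degrees

def set_heater(mode, current_temp):
    """Fail-safe design: any fault or unsafe reading forces OFF."""
    try:
        if not isinstance(mode, Mode):
            raise TypeError("mode must be a Mode")  # preventive check
        if current_temp >= MAX_SAFE_TEMP:
            return Mode.OFF  # fail-safe: overheating forces shutdown
        return mode
    except Exception:
        return Mode.OFF  # any error whatsoever defaults to the safe state
```

For example, `set_heater(Mode.HIGH, 90.0)` returns `Mode.OFF` because the reading exceeds the safety limit, and a malformed request such as `set_heater("HIGH", 20.0)` also falls back to `Mode.OFF`.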

“Human Factors versus Artificial Intelligence”

Article #49 in the Human Factors in Engineering category, written on June 12, 2006

In her book “Choices” the actress Liv Ullman asks her scientist lover: “What do you believe will be the main preoccupation of science in the future?” and he replies: “I have no doubt that it will be the search for the correct definition of error.”

The scientist goes on to state that a living organism has the capacity to make mistakes, to come in touch with chance or hazard, which is an absolutely necessary part of survival (a kind of trial and error).

The meaning of error here is broad and related to the chance of surviving a cataclysm, where the survivors are probably the errors or the monsters in the tail of the “normal group”.

It is the monsters that scientists would be interested in studying in the future because they fail to belong in the logic process of our automated life.

A taxonomy of Artificial Intelligence (AI), the duplication of the human faculties of creativity, self-improvement, and language use, might be necessary to illustrate the progress attained in this field.

There are four basic systems of AI: 1) thinking like a human, 2) thinking rationally, 3) acting like a human, or 4) acting rationally.

The purpose of the first system is to create computer machines with complex human minds or to automate the activities that are associated with human thinking.

In order to satisfy that goal, adequate answers need to be provided to the following questions:

1. Where does knowledge come from?

2. How to decide when payoff may be far in the future?

3. How do brains process information?

4. How do humans think and act, and how do animals think and act?

The preferred approaches by scientists in that system were either to model our cognitive processes or to get inside the actual workings of minds.

In 1957, Simon claimed that “There are now machines that think, learn, and create.  In a visible future the range of problems they can handle will be coextensive with the range of the human mind.”

This claim was backed by practical programs such as the reasoning program ‘Logic Theorist’, which could think non-numerically and prove theorems from the mathematics book ‘Principia Mathematica‘.

‘Logic Theorist’ was followed by the program ‘General Problem Solver‘, which imitates human protocols in manipulating data structures composed of symbols.

The second system, thinking rationally, aims to create computational models of mental faculties, or alternatively to compute the faculties of perceiving, reasoning, and acting.  The typical questions to resolve are:

1. How does mind arise from a physical brain?

2. What can be computed?

3. How do we reason with uncertain information?

4. How does language relate to thought?

The approaches undertaken were to codify the ‘laws of thought’ and to use syllogisms for ‘right thinking‘, as in Aristotle.

McCulloch & Pitts claimed in 1943 that “Any computable function can be derived from some network of connected neurons, with logical switches such as (AND, OR, NOT, etc)”.
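The McCulloch–Pitts claim can be demonstrated with a few lines of code: a single threshold unit over binary inputs realizes each logical switch, and composing units yields further computable functions. The weights and thresholds below are the standard textbook settings for these gates:

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts unit: fires (1) when the weighted sum of
    binary inputs reaches the threshold, otherwise stays silent (0)."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# The logical switches, each realized as a single unit:
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)
NOT = lambda a:    mp_neuron([a],    [-1],   0)

# Composing units into a small network yields functions no single
# unit can compute, e.g. XOR:
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))
```

Note that XOR requires a network of several units rather than one neuron, which foreshadows the later debate about the limits of single-layer networks.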

This line of thinking was supported by an updated learning rule for modifying the connection strengths and the manufacture of the first neural network computer, the SNARC by Minsky & Edmonds (1951).

The third system, acting like a human, aims to create machines that perform functions requiring human-like intelligence, or alternatively, to create computers that do what people currently do better.  The relevant questions to resolve are:

1. How does knowledge lead to action?

2. How to decide so as to maximize payoff?

3. How can we build an efficient computer?

The main approach was to pass the Turing test, in which a program counts as intelligent if a human judge, reading its responses, cannot reliably distinguish them from those of a person.

This system was successful when programs for playing checkers or chess learned to play better than their creators, and when Gelernter (1959) constructed the ‘Geometry Theorem Prover’ program.

To that end, Minsky (1963) initiated a series of anti-logical programs called ‘microworlds’ within limited domains such as: SAINT, ANALOGY, STUDENT, and the famous solid block world.

The fourth system, acting rationally, aims to define the domain of AI as computational intelligence for designing intelligent agents (Poole, 1998), or artifact agents behaving intelligently.  The relevant questions are:

1. Can formal rules draw valid conclusions?

2. What are these formal rules?

3. How can artifacts operate under their own control?

The approaches were to create rational agents and programs that operate under autonomous control.

This line of thinking generated the LISP computer language, then the ‘Advice Taker’ (McCarthy, 1958) where new axioms could be added to the central principles of knowledge representation and reasoning without reprogramming.

An advisory committee reported in 1966 that “There has been no machine translation of general scientific text, and none is in immediate prospect.”

Since then, most of the works on AI research were concentrated within the fourth system in developing AI programs.

The fourth view of AI is within human capability to make progress on, but there is a caveat.

If humans rely exclusively on the fourth system, which is the realm of people learned and educated in mathematics and engineering, the danger is that autonomous systems will be developed as if humans behaved logically.

Thus, we might end up with systems that do not match the basically uncertain behavior of humans.

The concept of error as defined at the beginning of this article would not be accounted for, and major calamities might befall humankind.

Note 1: On error taxonomy https://adonis49.wordpress.com/2009/05/26/error-taxonomies-in-human-factors/

Note 2: Artificial Intelligence started to be applied around 1988, trying to cope with retiring “experts” who had been decades on the job and knew what works and what doesn’t. A program would be tailor-made to a specific job, with a series of “What if” questions followed by answers from the expert. I tried my hand at these kinds of programs.

Note 3: Currently, AI relies on vast data sources gathered from all kinds of fields.  My impression is that the products are evaluated through testing and not through time-consuming experiments. It would be more efficient to collect facts from real peer-reviewed research papers, but this would require full-time professionals to select which papers are scientific and which are pseudo-scientific or simply funded by biased, interested companies.

