Adonis Diaries


Human Factors in Engineering; Article 26, November 13, 2005

“Guess what my job is”

It would be interesting to have a talk with the freshly enrolled engineering students from all fields as to the objectives and meaning of design projects.

The talk is intended to orient engineers toward a procedure that can give their design projects the substance needed to become marketable and to reduce the pitfalls of having to redesign.

This design behavior should start at the freshman level, during formal coursework, so that prospective engineers will naturally apply this acquired behavior throughout their engineering careers.

In the talk, the students will have to guess what the Human Factors discipline is from the case studies, exercises and problems that will be discussed.

The engineers will try to answer a few questions that might be implicit, but are never formally explained or learned, because the necessary courses are generally offered outside the engineering curricula.

A sample of the questions might be as follows:

1. What is the primary job of an engineer?

2. What does design mean?  What do you imagine designing looks like?

3. For whom are you designing?  What category of people?

4. Who are your target users? Engineers, consumers, support personnel, operators?

5. What are your primary criteria in designing?  An error-free product?

6. Who commits errors?  Can a machine make errors?

7. How can we categorize errors?  Any exposure to an error taxonomy?

8. Can you foresee errors, near accidents, and accidents?  Take a kitchen range, for example: expose the foreseeable errors and accidents in its design, specifically the idiosyncrasies of its displays and controls.

9. Who is at fault when an error is committed or an accident occurs?

10. Can we practically account for errors without a specific task taxonomy?

11. Do you view yourself as responsible for designing interfaces to your design projects depending on the target users?

12. Would you relinquish your responsibilities for being in the team assigned to design an interface for your design project?

13. What kinds of interfaces are needed for your design to be used efficiently?

14. How do engineers solve problems?  By searching for the applicable formulas? Can you figure out the magnitude of the answer?  Have you memorized the allowable range of answers for the given data and restrictions imposed in a problem, after solving so many exercises?

15. What are the factors or independent variables that may affect your design project?

16. How can we account for the interactions among the factors?

17. Have you memorized the dimensions of your design problem?

18. Have you been exposed to reading research papers? Can you understand, analyze and interpret the research paper data? Can you have an opinion as to the validity of an experiment?

19. Would you accept the results of any peer-reviewed article as facts that may be readily applied to your design projects?

20. Do you expect to be in charge of designing any new product, program, or procedure in your career?

21. Do you view most of your career as a series of supporting responsibilities, such as applying already designed programs and procedures?

22. Are you ready to take elective courses in psychology, sociology, marketing, and business, targeted at learning how to design experiments and at knowing more about the capabilities, limitations, and behavioral trends of target users?

23. Are you planning to go for graduate studies?  Do you know what elective courses might suit you better in your career?

Article #13, April 10, 2005

 “How basic are task taxonomies in Human Factors?”

The follow up question is: how can we conceive practical human error taxonomies before working out taxonomies for the tasks required in a system, its processes or steps in a method? 

If the types of skills required by an operator to perform a set of tasks are not well defined and studied, it might not be useful to apply a complex error taxonomy that does not delineate the applicable domain. For example, how can we allocate functions to either operators or machines, or decide who is better at performing a set of tasks, an automated machine or a trained operator, if we cannot classify human capabilities and limitations against the potential capabilities and limitations of the machine we intend to design?

There is a relationship between task taxonomy and task analysis.  Originally, task analysis methods were conceived to break down a job into work modules and then into elemental tasks to which standard time measurements could be applied, in order to maximize the profit on human effort. The purpose of task analysis is to originate an ordered list of all the tasks that people will perform in a system, with details on information requirements, task times, operator actions, environmental conditions, evaluations, and decisions that must be made.

Consequently, a task analysis should produce estimates of the time and effort required to perform tasks, determine staffing, skill, and training requirements, pinpoint the necessary interfaces between operators and the system, and provide inputs to reviews and specifications.  This process enables a detailed examination of human functions in terms of the abilities, skills, knowledge, and attitudes required for the performance of any function, from inputs to outputs.  When profit is the bottom line, you should also keep in mind that reducing errors is a major criterion beside time saved and direct costs.

It seems implicit that when allocating standard times, the appropriate conditions of work are explicitly defined, the age and gender of the worker acknowledged, the duration and frequency of rest breaks accounted for, the eventuality of overtime work considered, and the ability to cope with boredom and repetitive tasks evaluated, because all these variables affect the standard times for accomplishing a job efficiently, with minimal errors, over the long haul.

Suppose you had to decide between two alternatives: either correcting the standard times for finishing a task, based on experiments accounting for the above factors that affect the efficiency, safety, and health of workers; or allocating a separate expense fund, based on actuarial studies of illness rates, error rates, hospitalization costs, and turnover among workers, if the uncorrected standard times are applied. Which choice would you retain?

A task analysis of a system allows an estimate of the likelihood of a certain error (i.e., the product of the frequency of a task and the probability of occurrence of a certain error) and of how often the error will occur over a given duration, thus enabling a numerical estimate of the acceptability level and of the need for a redesign.
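As a minimal sketch of that likelihood estimate, with all numbers invented for illustration, the arithmetic looks like this:

```python
# Likelihood of an error over a duration, as described above:
# (task frequency) x (probability of error per performance) x (duration).

def error_likelihood(tasks_per_hour, p_error, hours):
    """Expected number of errors committed over the given duration."""
    return tasks_per_hour * p_error * hours

# Invented numbers: an operator performs a task 30 times per hour,
# with a 0.2% chance of error per performance, over a 40-hour week.
expected_errors = error_likelihood(30, 0.002, 40)
print(expected_errors)  # 2.4 expected errors per week

# Compare against an assumed acceptability threshold to flag a redesign.
ACCEPTABLE_ERRORS_PER_WEEK = 1.0
print(expected_errors > ACCEPTABLE_ERRORS_PER_WEEK)  # True: redesign indicated
```

The threshold is the design criterion: once the expected error count exceeds what is acceptable for the system, the numbers themselves argue for a redesign.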

The consequences of lacking a task analysis, combined with practical error taxonomies, when designing a system are far from trivial for operators, end users, and the overall performance of the system, since time is of the essence for delivering a functional product.

The fact that current technology can automate the travel of airplanes from takeoff through cruising to landing, without the need for a pilot, does not guarantee safety or acceptability to airline passengers.

The obvious problem is: who in his right mind would board an airplane without a certified pilot and co-pilot? In Japan, the fast trains have no driver aboard and are controlled before reaching their destinations.  Passengers do take these trains, but would rather be doubly secured by having trained drivers on board, no matter how high the safety record of these automated trains.

Nowadays, most of these function and task allocations are done by computer programs, with the hope that an expert professional is going to take serious time to analyze the printouts and provide judicious human feedback. These computer programs have, fingers crossed, the necessary constraints on safety standards, health standards, serious-error restrictions, and labor requirements, at the very least.

A student presented a version of the “SHEL” model as a standard task taxonomy that would permit sharing of data among different modes of transportation and other industries.  Apparently, this model can serve as an organizational tool for data collection in the investigation of workplaces.  The components of the SHEL model are: 1) Liveware (the human-to-human interface); 2) Hardware (the human-to-machine interface); 3) Software (the human-to-system interface); and 4) Environment (the human-to-environment interface).  The model relates all peripheral elements to the central human liveware and thus focuses on the factors that influence human performance.
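One way to picture the SHEL model as an organizational tool for data collection is a small sketch like the following; the interface labels follow the model's components, but the class and field names are my own assumptions, not any standard:

```python
# A minimal sketch of the SHEL model as a data-collection structure for a
# workplace investigation. Observations are filed under the interface
# between the central human (liveware) and each peripheral element.

from dataclasses import dataclass, field

SHEL_INTERFACES = {
    "L-L": "Liveware (human to human)",
    "L-H": "Hardware (human to machine)",
    "L-S": "Software (human to system)",
    "L-E": "Environment (human to environment)",
}

@dataclass
class WorkplaceInvestigation:
    """Collects observations under each SHEL interface."""
    observations: dict = field(
        default_factory=lambda: {k: [] for k in SHEL_INTERFACES}
    )

    def record(self, interface, note):
        if interface not in SHEL_INTERFACES:
            raise ValueError(f"Unknown SHEL interface: {interface}")
        self.observations[interface].append(note)

inv = WorkplaceInvestigation()
inv.record("L-H", "Control knob layout inconsistent with display order")
inv.record("L-E", "Glare on the display during afternoon shifts")
print(inv.observations["L-H"])
```

Filing every observation under one of the four interfaces is what makes the data shareable across industries: two investigations of very different workplaces still produce reports with the same structure.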

The best way to assimilate the concept of task taxonomy is by examples.  For that purpose, one of the assignments is to study the job of the breadwinner of the family, through questions, observation, and investigation, and to analyze its task taxonomy. Another assignment is a lecture project analyzing the task taxonomy of an industry or system not covered in the course materials.

Are you wondering what methods could be used in Industrial Engineering, Human Factors, or Industrial Psychology for improving designs?  Would you be interested in a working taxonomy of methods in the next article?

“Human Factors versus Artificial Intelligence”

Article #49 in Human Factors in Engineering category, written in June 12, 2006

In her book “Choices” the actress Liv Ullman asks her scientist lover: “What do you believe will be the main preoccupation of science in the future?” and he replies: “I have no doubt that it will be the search for the correct definition of error.”

The scientist goes on to state that a living organism has the capacity to make mistakes, to get in touch with chance or hazard, which is an absolutely necessary part of survival (a kind of trial and error).

The meaning of error here is broad and related to the chance of surviving a cataclysm, where the survivors are probably the errors or the monsters in the tail of the “normal group”.

It is the monsters that scientists would be interested in studying in the future because they fail to belong in the logic process of our automated life.

A taxonomy of Artificial Intelligence (AI), understood as the duplication of the human faculties of creativity, self-improvement, and language usage, might be necessary to illustrate the progress attained in this field.

There are four basic systems of AI: 1) thinking like a human, 2) thinking rationally, 3) acting like a human, or 4) acting rationally.

The purpose of the first system is to create computer machines with complex human minds or to automate the activities that are associated with human thinking.

In order to satisfy that goal adequate answers need to be provided accurately to the following questions:

1. Where does knowledge come from?

2. How to decide when payoff may be far in the future?

3. How do brains process information?

4. How do humans think and act, and how do animals think and act?

The preferred approaches by scientists in that system were either to model our cognitive processes or to get inside the actual workings of minds.

In 1957, Simon claimed that “There are now machines that think, learn, and create.  In a visible future the range of problems they can handle will be coextensive with the range of the human mind.”

This claim was backed by practical programs such as the ‘Logic Theorist’, a reasoning program that could think non-numerically and prove theorems from the mathematical book ‘Principia Mathematica‘.

The ‘Logic Theorist’ was followed by the ‘General Problem Solver‘, a program that imitates human protocols in manipulating data structures composed of symbols.

The second system, thinking rationally, aims to create computational models of mental faculties, or alternatively to compute the faculties of perceiving, reasoning, and acting.  The typical questions to resolve are:

1. How does mind arise from a physical brain?

2. What can be computed?

3. How do we reason with uncertain information?

4. How does language relate to thought?

The approaches undertaken were to codify the ‘laws of thought’ and to use syllogisms for ‘right thinking‘, as in Aristotle.

McCulloch & Pitts claimed in 1943 that “Any computable function can be derived from some network of connected neurons, with logical switches such as (AND, OR, NOT, etc)”.

This line of thinking was supported by an updated learning rule for modifying the connection strengths and the manufacture of the first neural network computer, the SNARC by Minsky & Edmonds (1951).
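The McCulloch & Pitts claim can be illustrated with simple threshold neurons. The sketch below is my own, with hand-chosen weights and thresholds; it shows how the logical switches named in the quote (AND, OR, NOT) fall out of weighted sums compared against a threshold:

```python
# McCulloch-Pitts style threshold neuron: it fires (returns 1) when the
# weighted sum of its binary inputs reaches the threshold, otherwise 0.

def neuron(weights, threshold):
    def fire(*inputs):
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= threshold else 0
    return fire

# Hand-chosen weights and thresholds realize the basic logic switches:
AND = neuron([1, 1], threshold=2)   # fires only when both inputs fire
OR  = neuron([1, 1], threshold=1)   # fires when at least one input fires
NOT = neuron([-1], threshold=0)     # inhibitory weight inverts the input

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b))
print(NOT(0), NOT(1))  # 1 0
```

Since any Boolean function can be built from AND, OR, and NOT, networks of such units can compute any computable function of their binary inputs, which is the substance of the 1943 claim.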

The third system, acting like a human, aims to create machines that perform functions requiring human-like intelligence, or alternatively, to create computers that do what people are currently better at doing.  The relevant questions to resolve are:

1. How does knowledge lead to action?

2. How to decide so as to maximize payoff?

3. How can we build an efficient computer?

The main approach was to pass the Turing test, which is based on a person's inability to distinguish whether responses are generated by a machine or a human.

This system was successful when programs for playing checkers or chess learned to play better than their creators, and when Gelernter (1959) constructed the ‘Geometry Theorem Prover’ program.

To that end, Minsky (1963) initiated a series of anti-logical programs called ‘microworlds’ within limited domains, such as SAINT, ANALOGY, STUDENT, and the famous blocks world.

The fourth system, acting rationally, defines the domain of AI as computational intelligence for designing intelligent agents (Poole, 1998), or artifacts that behave intelligently.  The relevant questions are:

1. Can formal rules draw valid conclusions?

2. What are these formal rules?

3. How can artifacts operate under their own control?

The approaches were to create rational agents and programs that operate under autonomous control.

This line of thinking generated the LISP computer language, then the ‘Advice Taker’ (McCarthy, 1958) where new axioms could be added to the central principles of knowledge representation and reasoning without reprogramming.

An advisory committee reported in 1966 that “There has been no machine translation of general scientific text, and none is in immediate prospect.”

Since then, most of the works on AI research were concentrated within the fourth system in developing AI programs.

The fourth view of AI is within human capability to make progress in, but there is a caveat.

If humans rely exclusively on the fourth system, which is the realm of people learned and educated in mathematics and engineering, the danger is that autonomous systems will be developed, by otherwise normal and learned people, as if humans behave logically.

Thus, we might end up with systems that do not coincide with the basically uncertain behavior of humans.

The concept of error as defined at the beginning of this article will not be accounted for, and some major calamities might befall humankind.

Note 1: On error taxonomy

Note 2: Artificial Intelligence started being applied around 1988, trying to cope with retiring “experts” who had been decades on the job and knew what works and what doesn’t. A program was tailor-made to a specific job through a series of “What if” questions followed by answers from the expert. I tried my hand at these kinds of programs.

Note 3: Currently, AI relies on massive data sources gathered from all kinds of fields.  My impression is that the products are evaluated through testing and Not through time-consuming experiments. It would be more efficient to collect the Facts from real peer-reviewed research papers, but this would require full-time professionals to select which papers are scientific and which are pseudo-scientific or simply funded by biased, interested companies.



