Adonis Diaries


“Human Factors versus Artificial Intelligence”

Article #49 in the Human Factors in Engineering category, written on June 12, 2006

In her book “Choices,” the actress Liv Ullmann asks her scientist lover: “What do you believe will be the main preoccupation of science in the future?” and he replies: “I have no doubt that it will be the search for the correct definition of error.”

The scientist goes on to state that a living organism has the capacity to make mistakes, to come into contact with chance or hazard, which is an absolutely necessary part of survival (a kind of trial and error).

The meaning of error here is broad, related to the chance of surviving a cataclysm, where the survivors are probably the errors or the monsters in the tails of the “normal” group.

It is these monsters that scientists would be interested in studying in the future, because they fail to fit the logical processes of our automated life.

A taxonomy of Artificial Intelligence (AI), understood as the duplication of the human faculties of creativity, self-improvement, and language use, might be necessary to illustrate the progress attained in this field.

There are mainly four basic systems of AI: 1) thinking like a human, 2) thinking rationally, 3) acting like a human, and 4) acting rationally.

The purpose of the first system is to create computing machines with minds, or to automate the activities that we associate with human thinking.

To satisfy that goal, adequate answers need to be provided to the following questions:

1. Where does knowledge come from?

2. How do we decide when the payoff may be far in the future?

3. How do brains process information?

4. How do humans think and act, and how do animals think and act?

The approaches preferred by scientists in that system were either to model our cognitive processes or to get inside the actual workings of the mind.

In 1957, Simon claimed that “There are now machines that think, learn, and create.  In a visible future the range of problems they can handle will be coextensive with the range of the human mind.”

This claim was backed by practical programs such as the ‘Logic Theorist‘, a reasoning program that could think non-numerically and prove theorems from the mathematical work ‘Principia Mathematica‘.

The ‘Logic Theorist‘ was followed by the ‘General Problem Solver‘, a program that imitated human problem-solving protocols while manipulating data structures composed of symbols.

The purpose of the second system, thinking rationally, is to create computational models of mental faculties, or alternatively to computerize the faculties of perceiving, reasoning, and acting. The typical questions to resolve are:

1. How does mind arise from a physical brain?

2. What can be computed?

3. How do we reason with uncertain information?

4. How does language relate to thought?

The approaches undertaken were to codify the ‘laws of thought‘ and to use syllogisms for ‘right thinking‘, as in Aristotle.
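To see what “codifying the laws of thought” amounts to in practice, here is a minimal sketch in Python (my own illustration, not part of the original article); it encodes the classic syllogism “All men are mortal; Socrates is a man; therefore Socrates is mortal” as a rule applied to a fact:

```python
# A reader's sketch of a codified syllogism (illustrative only, not from the article).
# Universal premise: all men are mortal.  Particular premises: man(Socrates), man(Plato).

men = {"Socrates", "Plato"}

def is_man(x):
    return x in men

def is_mortal(x):
    # The universal premise codified as a rule: for all X, man(X) implies mortal(X).
    return is_man(x)

print(is_mortal("Socrates"))   # True: the conclusion follows mechanically from the premises
```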

McCulloch & Pitts claimed in 1943 that “Any computable function can be derived from some network of connected neurons, with logical switches such as AND, OR, and NOT.”

This line of thinking was supported by an updated learning rule for modifying connection strengths, and by the construction of the first neural network computer, the SNARC, by Minsky & Edmonds (1951).
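To make the McCulloch & Pitts claim concrete, here is a minimal sketch (my own illustration) of a threshold neuron and of how the logical switches AND, OR, and NOT fall out of particular choices of weights and thresholds; composing such gates is what allows networks of neurons to compute any Boolean function:

```python
# A reader's sketch of a McCulloch-Pitts threshold unit (illustrative only).
# The unit fires (outputs 1) when the weighted sum of its binary inputs
# reaches the threshold.

def mp_neuron(inputs, weights, threshold):
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def AND(a, b): return mp_neuron([a, b], [1, 1], threshold=2)
def OR(a, b):  return mp_neuron([a, b], [1, 1], threshold=1)
def NOT(a):    return mp_neuron([a], [-1], threshold=0)

# Composing these gates builds up any Boolean function, which is the
# substance of the 1943 claim.
print(AND(1, 1), OR(0, 1), NOT(1))   # -> 1 1 0
```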

The purpose of the third system, acting like a human, is to create machines that perform functions requiring human-like intelligence, or alternatively to make computers do things that, for now, people do better. The relevant questions to resolve are:

1. How does knowledge lead to action?

2. How do we decide so as to maximize payoff?

3. How can we build an efficient computer?

The main approach was to emulate the Turing test, in which a program is credited with human-level intelligence if a person who must decide whether the responses come from a machine or from a human cannot reliably tell them apart.

This system was successful when programs for playing checkers or chess learned to play better than their creators, and when Gelernter (1959) constructed the ‘Geometry Theorem Prover‘ program.

To that end, Minsky (1963) initiated a series of anti-logical programs operating within limited domains called ‘microworlds‘, such as SAINT, ANALOGY, STUDENT, and the famous blocks world.

The purpose of the fourth system, acting rationally, is to define the domain of AI as computational intelligence for designing intelligent agents (Poole, 1998), or as artifacts behaving intelligently. The relevant questions are:

1. Can formal rules draw valid conclusions?

2. What are these formal rules?

3. How can artifacts operate under their own control?

The approaches were to create rational agents and programs that operate under autonomous control.

This line of thinking generated the LISP programming language and then the ‘Advice Taker‘ (McCarthy, 1958), which embodied the central principles of knowledge representation and reasoning and to which new axioms could be added without reprogramming.
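To illustrate the “new axioms without reprogramming” idea, here is a minimal sketch (my own illustration, not McCarthy’s actual Advice Taker): the small interpreter below never changes, and new conclusions appear only because declarative facts and rules are appended to the knowledge base.

```python
# A reader's toy forward-chaining reasoner (illustrative only).
# The interpreter is fixed; behavior changes only when facts or rules are added.

facts = {"at(home)", "need(airport)"}
rules = [
    # (set of premises, conclusion)
    ({"at(home)", "has(car)"}, "can_drive(airport)"),
]

def forward_chain(facts, rules):
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

print(forward_chain(facts, rules))      # nothing new can be concluded yet

# "Advice" is given by adding axioms, not by rewriting the program:
facts.add("has(car)")
rules.append(({"can_drive(airport)"}, "can_reach(airport)"))
print(forward_chain(facts, rules))      # now derives can_drive(...) and can_reach(...)
```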

An advisory committee reported in 1966 that “There has been no machine translation of general scientific text, and none is in immediate prospect.”

Since then, most AI research has concentrated on developing programs within the fourth system.

Making progress in AI along this fourth view is within human capability, but there is a caveat.

If humans rely exclusively on the fourth system, which is the realm of learned and educated people in mathematics and engineering, the danger is that autonomous systems will be developed by “normal” and learned people as if humans behaved logically.

Thus, we might end up with systems that do not match the basically uncertain behavior of humans.

The concept of error as defined at the beginning of this article will not be accounted for, and major calamities might befall humankind.

Note 1: On error taxonomy https://adonis49.wordpress.com/2009/05/26/error-taxonomies-in-human-factors/

Note 2: Artificial Intelligence started being applied around 1988 to cope with retiring “experts” who had spent decades on the job and knew what works and what doesn’t. Each program was tailor-made for a specific job, built from a series of “What if” questions followed by the expert’s answers. I tried my hand at these kinds of programs.
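As an illustration of the spirit of those tailor-made programs (a hypothetical sketch; the rules, names, and domain below are invented, not the actual system the author worked with), each “What if” answer elicited from the retiring expert becomes one if-then rule that can be replayed on new cases:

```python
# Hypothetical sketch of a 1980s-style rule-based expert system (invented example).
# Each rule captures one "What if ...?" answer elicited from a retiring expert.

RULES = [
    (lambda c: c["vibration"] == "high" and c["temperature"] == "high",
     "Shut down the pump and inspect the bearings."),
    (lambda c: c["vibration"] == "high" and c["temperature"] == "normal",
     "Check shaft alignment at the next maintenance window."),
    (lambda c: c["pressure_drop"] == "large",
     "Clean or replace the inlet filter."),
]

def advise(case):
    """Return every piece of captured expert advice whose conditions match the case."""
    matches = [advice for condition, advice in RULES if condition(case)]
    return matches or ["No rule applies; consult a human expert."]

case = {"vibration": "high", "temperature": "high", "pressure_drop": "small"}
print(advise(case))   # -> ['Shut down the pump and inspect the bearings.']
```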

Note 3: Currently, AI relies on massive data sources gathered from all kinds of fields. My impression is that the products are evaluated through testing and not through time-consuming experiments. It would be more efficient to collect the facts from peer-reviewed research papers, but this would require full-time professionals to select which papers are scientific and which are pseudo-scientific or merely funded by biased, interested companies.

