
One thing we know is that life reinforces the hypothesis that the world is infinitely complex and that most of its phenomena will remain incomprehensible, meaning unexplained. For example, no theory of evolution has been able to predict the next phase in evolution, or the route taken to reach it. We don’t know whether laws in biology will exist in the same sense as the laws of physics or of natural phenomena.

For example, is the universe simple or complex, finite or infinite? The mathematician Chaitin answered: “This question will remain without any resolution, simply because we would need an external observer outside our system of reference, preferably non-human, to corroborate our theoretical perception.” (A few of my readers will say “this smacks of philosophy”, since they hate philosophy, or the rational logic deduced from reduced propositions that cannot rationally be proven.)

So many scholars wanted to believe that “God does not play dice” (Einstein), or that chaos lies within the predictive laws of God and nature (Leibniz), or that the universe can be explained by a simple, restricted set of non-redundant axioms and rules (Stephen Hawking).

Modern mathematical theories and physical observations are demonstrating that most phenomena basically behave haphazardly. For example, quantum physics reveals that chance is the fundamental principle in the universe of the very tiny particles: the individual behaviors of small particles in the atomic nucleus are unpredictable. Thus, there is no way of accurately measuring a particle’s position and momentum (speed and direction) simultaneously; all that physics can do is assign probability numbers.
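
For reference, the trade-off alluded to here is Heisenberg’s uncertainty principle, which bounds the product of the uncertainties in position and momentum:

$$\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}$$

where $\hbar$ is the reduced Planck constant: the more precisely one quantity is measured, the less precisely the other can be known.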

Apparently, hazard plays a role even in mathematics. For example, many “true” mathematical statements cannot be demonstrated: they are logically irreducible and incomprehensible. Mathematicians believe that there exists an infinity of “twin” prime numbers (pairs of primes that differ by 2, such as 11 and 13), but this belief has never been proven mathematically. Thus, many mathematicians would suggest adding such true but non-demonstrable “propositions” to the basic set of axioms. Axioms are the bare minimum set of “given propositions” that we think we know to be true, but that reason is unable to approach adequately using logical processes.
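
As a minimal illustration (my own sketch, not from the original text), a few lines of Python can list twin-prime pairs up to any bound, even though no proof exists that the supply ever runs out:

```python
# List twin-prime pairs (primes that differ by 2) below a bound.
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

twins = [(p, p + 2) for p in range(2, 100) if is_prime(p) and is_prime(p + 2)]
print(twins)  # (3, 5), (5, 7), (11, 13), (17, 19), (29, 31), ...
```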

Einstein said: “The eternally incomprehensible thing about nature is that it is comprehensible”, meaning that we always think we can extend an explanation to a phenomenon without being able to prove its working behaviors. Einstein wrote that to comprehend means to rationally explain, by compressing the facts into a few basic axioms so that our mind can understand them, even if we are never sure how the phenomenon behaves.

For example, Plato said that the universe is comprehensible simply because it looks structured: the beauty of geometric constructs, the regularity of tones in string instruments, the steady movement of the planets… Steven Weinberg admits that “if we manage to explain the universal phenomena of nature, it will not be feasible by just simple laws.”

Many facts can be comprehended when they are explained by a restricted set of theoretical affirmations. This is called Occam’s razor: “The best theory or explanation is the simplest.” The mathematician Hermann Weyl explained: “We first need to confirm that nature is regulated by simple mathematical laws. Then, the fundamental relationships become simpler the further we fine-tune the elements, and the explanation of facts becomes more exact.”

So what is a theory? Computer science extends another perspective for defining theory: “A theory is a computer program designed to account for observed facts by computation. Thus, the program is designed to predict observations. If we say that we comprehend a phenomenon, then we should be able to program its behavior. The smaller the program (the more elegant), the better the theory is comprehended.”
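
As a toy illustration of this view (my own sketch, with made-up data), consider two programs that both reproduce the same observations; on this account, the shorter program is the better theory:

```python
# The observed "facts": the even numbers below 20.
observations = [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

# Theory 1: a raw lookup table -- about as long as the data itself.
theory_table = "[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]"

# Theory 2: a generating rule -- much shorter, hence "more elegant".
theory_rule = "[2 * n for n in range(10)]"

# Both programs account for the observations, but the compressed
# one counts as the better explanation.
assert eval(theory_table) == eval(theory_rule) == observations
print(len(theory_table), len(theory_rule))  # 35 vs 26 characters
```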

When we say “I can explain”, we mean “I have compressed a complex phenomenon into simple programs that I can comprehend”, that the human mind can comprehend. Basically, explaining and comprehending are of an anthropic nature, within the dimension of human mental capabilities.

The mathematician John von Neumann wrote: “Theoretical physics mainly categorizes phenomena and tries to find links among the categories; it does not explain phenomena.”

In 1931, the mathematician Kurt Gödel adopted a mental operation consisting of indexing lists of all kinds of assertions. His formal mathematical method demonstrated that there are true propositions that cannot be demonstrated, called “logically incomplete problems”. The significance of Gödel’s theorem is that it is impossible to account for the elementary arithmetic operations (addition or multiplication) by deducing all their results from a few basic axioms. In any given set of logical rules, except for the simplest, there will always be statements that are undecidable, meaning that they can be neither proven nor disproven, due to the inevitable self-referential nature of any logical system.

The theorem indicates that there is no grand mathematical system capable of proving or disproving all statements. An undecidable statement can be thought of as a mathematical form of a statement like “What I just said is a lie”: the statement makes reference to the language being used to describe it, so it cannot be known whether the statement is true or not. However, an undecidable statement does not need to be explicitly self-referential to be undecidable. The main conclusion of Gödel’s incompleteness theorems is that any consistent logical system rich enough to express arithmetic will contain statements that can be neither proven nor disproven; therefore, all such systems must be “incomplete.”
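
For reference, the standard formal rendering of such a sentence is a statement $G$ that asserts its own unprovability:

$$G \;\leftrightarrow\; \neg\,\mathrm{Prov}(\ulcorner G \urcorner)$$

where $\mathrm{Prov}$ is the system’s provability predicate and $\ulcorner G \urcorner$ is the Gödel number encoding $G$; if the system is consistent, it can neither prove nor refute $G$.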

The philosophical implications of these theorems are widespread. They suggest that in physics a “theory of everything” may be impossible, as no set of rules can explain every possible event or outcome. They also indicate that, logically, “provable” is a weaker concept than “true”. Such a concept is unsettling for scientists because it means there will always be things that, despite being true, cannot be proven to be true. Since the theorems also apply to computers, they mean that our own minds are incomplete and that there are some things we can never know, including whether our own reasoning is consistent (i.e., free of contradictions).

The second of Gödel’s incompleteness theorems states that no consistent system can prove its own consistency, meaning that no sane mind can prove its own sanity. Also, since the same theorem states that any system able to prove its own consistency must be inconsistent, any mind that believes it can prove its own sanity is, therefore, insane.

Alan Turing gave a deeper twist to Gödel’s results. In 1936, Turing indexed lists of programs designed to compute real numbers between 0 and 1 (think of probability numbers). Turing demonstrated mathematically that no infallible computational procedure (algorithm) exists that permits deciding whether a mathematical theorem is true or false. In a sense, there can be no algorithm able to decide whether a computer program will ever stop. Consequently, no computer program can predict whether another program will ever stop computing. All that can be done is to allocate a probability number that the program might stop. Thus, you can play around with all kinds of axioms, but no set of them can deduce that a program will end. Turing thereby proved the existence of non-computable numbers.
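
Turing’s diagonal argument can be compressed into a few lines of Python (a sketch of mine, not Turing’s notation); the oracle `halts` below is the assumption that the argument then destroys:

```python
def halts(program, data):
    """Hypothetical oracle: returns True iff program(data) eventually stops.
    Turing proved that no such always-correct function can exist."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:   # oracle says it halts -> loop forever
            pass
    else:
        return        # oracle says it loops -> halt at once

# Feeding `paradox` to itself is contradictory: if halts(paradox, paradox)
# returned True, then paradox(paradox) would loop forever; if it returned
# False, paradox(paradox) would halt. Either way the oracle is wrong,
# so it cannot exist.
```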

Note 1: Chaitin considered the set of all possible programs; he played dice for each bit in the program (0 or 1, true or false) and allocated a probability number that each program might end. The probability that a random program will end in a finite number of steps is called Omega. The succession of bits comprising Omega is haphazard, and thus no simple set of axioms can deduce the exact number. While Omega is defined mathematically, the succession of its digits has absolutely no structure. For example, we can write an algorithm to compute Pi, but never one to compute Omega.
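
For reference, Chaitin’s halting probability is usually defined over a prefix-free universal machine as

$$\Omega \;=\; \sum_{p \ \text{halts}} 2^{-|p|}$$

where the sum runs over all self-delimiting programs $p$ that halt and $|p|$ is the length of $p$ in bits; flipping a fair coin for each bit is exactly the dice-playing described above.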

Note 2: Bertrand Russell (1872–1970) tried to rediscover the founding blocks of mathematics, “the royal highway to truth”. He was disappointed and wrote: “Mathematics is infected with unproven postulates and infested with circular definitions. The beauty and the terror of mathematics is that a proof must be found, even if it proves that a theory cannot be proven.”

Note 3: The French mathematician Poincaré won a prize for supposedly having discovered chaos. The article had already been officially published when Poincaré realized that he had made a serious error that disproved his original contention. Poincaré had to pay for all the published copies and for their destruction. A single copy was saved and later found at the Mittag-Leffler Institute in Stockholm.

Efficiency has limits within cultural bias (Dec. 10, 2009)

Sciences that have progressed this far have relied on mathematicians: many mathematical theories have proven efficacious in predicting, classifying, and explaining phenomena.

In general, fields of science that failed to interest mathematicians stagnated or were shelved for long periods, with perhaps the exception of psychology.

People wonder how a set of abstract symbols, linked by precise game rules (called a formal language), ends up predicting and explaining “reality” in many cases.

Biology has recently received a new invigorating shot: a few mathematicians got interested in working, for example, on the patterns of butterfly wings and mammalian fur using partial differential equations, but nothing of real value is yet expected to further interest in biology.

Economics, mainly for market equilibrium, applied methods adapted from dynamical systems, game theory, and topology. Computer science is catching some interest as well.

“Significant mathematics”, or those theories that offer classes of invariants relative to operations, transformations, and relationships, almost always find applications in the real world: they generate new methods and tools, such as group theory and the theory of functions of a complex variable.

For example, knot theory was connected to many applied domains because of its rich manipulation of “mathematical objects” (such as numbers, functions, or structures) that remain invariant when the knot is deformed.

What is the main activity of a modern mathematician?

First of all, they systematically organize classes of “mathematical objects” that are equivalent under transformations: for example, surfaces up to homeomorphism (“plastic” deformation), or invariants under deterministic transformations.

There are several philosophical groups among mathematicians:

1. The Pythagorean mathematicians hold that natural numbers are the foundation of material reality, which is represented in geometric figures and forms. Their modern counterparts affirm that real physical structure (particles, fields, space-time…) is identically mathematical. Math is the expression of reality, and its symbolic language describes reality.

2. The “empirical mathematicians” construct models of empirical (experimental) results. They know in advance that their theories are linked to real phenomena.

3. The “Platonist mathematicians” conceive the universe of their ideas and concepts as independent of the world of phenomena. At best, the sensed world is but a pale reflection of their ideas. Their ideas were not invented but are just as real, though not directly sensed or perceived. Thus, an a priori harmony between the sensed world and their world of ideas is their guiding rule in discovering significant theories.

4. There is a newer group of mathematicians who are not worried about getting “dirty” by experimenting (analytic methods), crunching numbers, adapting to new tools such as the computer, and performing surgery on geometric forms.

This new brand of mathematicians does not care to be limited within the “Greek” cultural bias of doing mathematics: they are ready to try the Babylonian and Egyptian cultural ways of doing math, by computation, pacing lands, and experimenting with various branches of mathematics (for example, Perelman, who proved the Poincaré conjecture with “unorthodox” techniques, and Gromov, who gave geometry a new life and believes the computer to be a great tool for theories that do not involve probability).

Explaining phenomena leads to generalization (reducing a diversity of phenomena, even in disparate fields of science, to a few fundamental principles). Mathematics extends new concepts and strategies for resolving difficult problems that require the collaboration of various branches of the discipline.

For example, the theory elaborated by Hermann Weyl in 1918 to unify gravity and electromagnetism led to gauge theory (a cornerstone of quantum field theory), though the initial theory failed to predict experimental results.

String theory and non-commutative geometry generated new horizons even before they were verified against empirical results.

Axioms and propositions used in different branches of mathematics can be combined to develop new concepts of sets, numbers, or spaces.

Historically, mathematics was never “empirically neutral”: theories required significant work of translation and adaptation so that formal descriptions of phenomena could be validated.

Thus, mathematical formalism was acquired bit by bit from the empirical world. For example, the theory of general relativity was effective because it relied on the formal description of invariant tensor calculus, combined with a fundamental equation related to Poisson’s equation in classical potential theory.
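
For reference, the classical equation alluded to here is Poisson’s equation for the gravitational potential,

$$\nabla^2 \Phi \;=\; 4\pi G \rho$$

which Einstein’s field equations must reproduce in the weak-field, slow-motion limit; this correspondence is what anchored the new formalism to the old empirical one.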

The same process of adaptation was applied to quantum mechanics, which relied on the algebra of operators, combined with Hilbert’s theory of spaces, and then on atomic spectra.

In order to comprehend the efficiency of mathematics, it is important to master the production of mental representations, such as ideas, concepts, images, analogies, and metaphors, that are susceptible of yielding rich invariants.

Thus, the discovery of the empirical world is done both ways:

First, the learning processes of the senses and

Second, the acquisition processes of mathematical modeling.

Mathematical activities are extensions of our powers of perception, written in a symbolic formal language.

The natural sciences grabbed the interest of mathematicians because they managed to extract invariants from natural phenomena.

So far, mathematicians have been wary of looking into the invariants of the far more complex human and social sciences. Maybe if they tried to find analogies of invariants between the natural and human worlds, a great first incentive would enrich new theories applicable to fields of vast variability.

It appears that “significant mathematics” basically decodes how the brain perceives invariants in what the senses transmit to it as signals of the “real world”. For example, Stanislas Dehaene opened the way to comprehending how elementary mathematical capacities are generated from the neuronal substrate.

I conjecture that, since individual experiences are what generate intuitive concepts, analogies, and various perspectives for viewing and solving problems, most of the useful mathematical theories were essentially founded on visual and auditory perception.

New breakthroughs in significant theories will emerge when mathematicians start mining the brain’s processing of the other senses (there are far more than the usual six senses). Obviously, the interested mathematician must have cultivated his senses and experimented with them as hobbies, in order to work on decoding their valuable processes and to understand how they construct this seemingly “coherent world”.

A wide range of interesting discoveries about human capabilities could be uncovered if mathematicians dared to direct scientists and experimenters toward the additional varieties of parameters and variables, in cognitive processes and brain perception, that their theories predict.

Mathematicians have to expand their horizons: the cultural bias toward what is Greek has its limits. It is time to make serious attempts at number crunching, complex computations, complex sets of equations, and adapting to newer available tools.

Note: I am inclined to associate algebra with deductive processes and generalization on the macro-level, while viewing analytic solutions as being in the realm of inferring by manipulating and controlling possibilities (singularities); it is a sort of experimenting with rare events.

