Adonis Diaries

Posts Tagged ‘Artificial Intelligence’

Does it matter to Debate Machine Consciousness?

“I think therefore I am.”

“What about thinking? Here I make my discovery: thought exists; it alone cannot be separated from me.

I am; I exist – this is certain. But for how long?

For as long as I am thinking; for perhaps it could also come to pass that if I were to cease all thinking I would then utterly cease to exist.

At this time I admit nothing that is not necessarily true.

I am therefore precisely nothing but a thinking thing; that is a mind, or intellect, or understanding, or reason – words of whose meanings I was previously ignorant.

Yet I am a true thing and am truly existing; but what kind of thing? I have said it already: a thinking thing.” – René Descartes 

In 1637, when he published Discourse on the Method, René Descartes unleashed a philosophical breakthrough that later became a fundamental principle upon which much of modern philosophy now stands.

Nearly 400 years later, if a machine says these five powerful words, “I think therefore I am,” does the statement still hold true?

If so, who then is this “I” that is doing the thinking?

In a recent talk, Ray Kurzweil showed the complexity of measuring machine consciousness: “We can’t just ask an entity, ‘Are you conscious?’ because we can ask entities in video games today, and they’ll say, ‘Yes, I’m conscious and I’m angry at you.’

But we don’t believe them because they don’t have the subtle cues that we associate with really having that subjective state. My prediction that computers will pass the Turing test and be indistinguishable from humans by 2029 is that they really will have those convincing cues.”

If artificial intelligence becomes indistinguishable from human intelligence, how then will we determine which entities are, or are not, conscious—specifically, when consciousness is not quantifiable?

Though the word consciousness has many commonly held definitions, this question can be answered quite differently when filtered through the many existing philosophical and religious frameworks.

Two particularly conflicting viewpoints are the common Eastern and Western notions of what exactly consciousness is, and how it comes to exist.

At the heart of many Eastern philosophies is the belief that consciousness is our fundamental reality; it is what brings the physical world into existence.

By contrast, the Western notion of consciousness holds that it arises only at a certain level of development.

Looking at these two opposing belief systems, we can see that asking “What and who is conscious?” can elicit drastically different responses.

“Fundamentally, there’s no scientific experiment that doesn’t have philosophical assumptions about the nature of consciousness,” Kurzweil says.

We’d like to have an objective scientific understanding of consciousness, but such a view remains elusive.

“Some scientists say, ‘Well, it’s just an illusion. We shouldn’t waste time with it,’” Kurzweil says. “But that’s not my view because, as I say, morality is based on consciousness.”

Why does all this matter?

Because as technological evolution begins intersecting with our biological evolution as a species, the lines between “human” and “non-human” entities will blur more than humanity has ever encountered, and a new era of identity, with its surrounding ethics and philosophy, will take center stage.

What happens if a non-human conscious entity travels into another region of the world where its consciousness is not believed to be real?

Or more broadly, how will we treat intelligent machines ethically as their intelligence approaches our own?

If morality is based on consciousness, does a machine become an “I” if it has one?

(If morality is a set of behaviors disseminated by the powers that be, and communities are coaxed to follow suit, then an artificially intelligent consciousness is political by nature.)

Note: It is not only a matter of morality: it is that lapse of time, of uncertainty, that we need in order to determine whether our act is good or bad. Give a machine the illusion of needing some time to decide and you can be fooled.

singularityhub.com

Ban on artificially intelligent (AI) and autonomous weapons: Killer robots?

Over 1,000 high-profile artificial intelligence experts and leading researchers have signed an open letter warning of a “military artificial intelligence arms race” and calling for a ban on “offensive autonomous weapons”.

The letter, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, was signed by Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and professor Stephen Hawking along with 1,000 AI and robotics researchers.

The letter states: “AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

The authors argue that AI can be used to make the battlefield a safer place for military personnel, but that offensive weapons that operate on their own would lower the threshold of going to battle and result in greater loss of human life.

Should one military power start developing systems capable of selecting targets and operating autonomously without direct human control, it would start an arms race similar to the one for the atom bomb, the authors argue.

Unlike nuclear weapons, however, AI requires no specific hard-to-create materials and will be difficult to monitor.

“The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” said the authors.

Toby Walsh, professor of AI at the University of New South Wales, said:

“We need to make a decision today that will shape our future and determine whether we follow a path of good. We support the call by a number of different humanitarian organisations for a UN ban on offensive autonomous weapons, similar to the recent ban on blinding lasers.”

Musk and Hawking have warned that AI is “our biggest existential threat” and that the development of full AI could “spell the end of the human race”.

But others, including Wozniak, have recently changed their minds on AI, with the Apple co-founder saying that robots would be good for humans, making humans like the “family pet and taken care of all the time”.

At a UN conference in Geneva in April discussing the future of weaponry, including so-called “killer robots”, the UK opposed a ban on the development of autonomous weapons, despite calls from various pressure groups, including the Campaign to Stop Killer Robots.

The Guardian view on robots as weapons: the human factor

Andrew Bossone shared this link

No killer robots please

More than 1,000 experts and leading robotics researchers sign open letter warning of military artificial intelligence arms race
theguardian.com|By Samuel Gibbs

 

It’s No Myth:

Robots and Artificial Intelligence Will Erase Jobs in Nearly Every Industry

With the unemployment rate falling to 5.3 percent, the lowest in seven years, policy makers are heaving a sigh of relief.  (Every time, these short-term sighs of relief serve to fool the people)

Indeed, with the technology boom in progress, there is a lot to be optimistic about.

Manufacturing will be returning to U.S. shores with robots doing the job of Chinese workers;

American carmakers will be mass-producing self-driving electric vehicles;

technology companies will develop medical devices that greatly improve health and longevity;

we will have unlimited clean energy and 3D print our daily needs.

The cost of all of these things will plummet and make it possible to provide for the basic needs of every human being.

(No sweat hoping. And what of the people who need a job?)

I am talking about technology advances that are happening now, which will bear fruit in the 2020s. (How about in 50 years if no major calamities hit us before then?)

But policy makers will have a big new problem to deal with: the disappearance of human jobs.

Not only will there be fewer jobs for people doing manual work, the jobs of knowledge workers will also be replaced by computers.

(So even educated people will have no jobs?)

Almost every industry and profession will be impacted and this will create a new set of social problems — because most people can’t adapt to such dramatic change. (Why should they?)

If we can develop the economic structures necessary to distribute the prosperity we are creating (here we go again), most people will no longer have to work to sustain themselves. They will be free to pursue other creative endeavors (assuming that the opportunities are there?)

The problem is that without jobs, they will not have the dignity, social engagement, and sense of fulfillment that comes from work.

The life, liberty and pursuit of happiness that the constitution entitles us to won’t come through labor; it will have to come through other means. (How about we start thinking of these other means?)

It is imperative that we understand the changes that are happening and find ways to cushion the impacts.

The technology elite who are leading this revolution will reassure you that there is nothing to worry about because we will create new jobs just as we did in previous centuries when the economy transitioned from agrarian to industrial to knowledge-based. (kids were working in the mining industries, in tunnels deep inside the bowels of the earth)

Tech mogul Marc Andreessen has called the notion of a jobless future a “Luddite fallacy,” referring to past fears that machines would take human jobs away. Those fears turned out to be unfounded because we created newer and better jobs and were much better off. (Please expand on these new jobs and their cumulative trauma disorders)

True, we are living better lives. But what is missing from these arguments is the timeframe over which the transitions occurred.

The industrial revolution unfolded over centuries. Today’s technology revolutions are happening within years.

We will surely create a few intellectually-challenging jobs, but we won’t be able to retrain the workers who lose today’s jobs.

The working people will experience the same unemployment and despair that their forefathers did. It is they whom we need to worry about. (Which means 6 billion people?)

The first large wave of unemployment will be caused by self-driving cars. (if the technology is here, who in his right mind will give up control over the car?)

These will provide tremendous benefit by eliminating traffic accidents and congestion, making commuting time more productive, and reducing energy usage. But they will eliminate the jobs of millions of taxi and truck drivers and delivery people.

Fully-automated robotic cars are no longer in the realm of science fiction; you can see Google’s cars on the streets of Mountain View, Calif. There are also self-driving trucks on our highways (they should be banned) and self-driving tractors on farms.

Uber just hired away dozens of engineers from Carnegie Mellon University to build its own robotic cars. It will surely start replacing its human drivers as soon as its technology is ready — later in this decade (And give away the profit it is amassing by hiring drivers?).

As Uber CEO Travis Kalanick reportedly said in an interview, “The reason Uber could be expensive is you’re paying for the other dude in the car. When there is no other dude in the car, the cost of taking an Uber anywhere is cheaper. Even on a road trip. (Assuming the clients will prefer Not having this dude on the wheel)”

The dude in the driver’s seat will go away.

Manufacturing will be the next industry to be transformed.

Robots have, for many years, been able to perform surgery, milk cows, do military reconnaissance and combat, and assemble goods. But they weren’t dexterous enough to do the type of work that humans do in installing circuit boards.

The latest generation of industrial robots by ABB of Switzerland and Rethink Robotics of Boston can, however, do this. ABB’s robot, YuMi, can even thread a needle. It costs only $40,000.

China, fearing the demise of its industry, is setting up fully-automated robotic factories in the hope that by becoming more price-competitive, it can continue to be the manufacturing capital of the world.

But its advantage only holds up as long as the supply chains are in China and shipping raw materials and finished goods over the oceans remains cost-effective.

Don’t forget that our robots are as productive as theirs are; they too don’t join labor unions (yet) and will work around the clock without complaining.

Supply chains will surely shift and the trickle of returning manufacturing will become a flood.

But there will be few jobs for humans once the new, local factories are built.

With advances in artificial intelligence, any job that requires the analysis of information can be done better by computers. This includes the jobs of physicians, lawyers, accountants, and stock brokers.

We will still need some humans to interact with the ones who prefer human contact, but the grunt work will disappear. The machines will need very few humans to help them.

This jobless future will surely create social problems — but it may be an opportunity for humanity to uplift itself. (That is the biggest lie that has been perpetrated this century)

Why do we need to work 40, 50, or 60 hours a week, after all? Just as we were better off leaving the long and hard agrarian and factory jobs behind, we may be better off without the mindless work at the office.

What if we could be working 10 or 15 hours per week from anywhere we want and have the remaining time for leisure, social work, or attainment of knowledge?

Yes, there will be a booming tourism and recreation industry and new jobs will be created in these — for some people. (Assuming the money is there for this entertainment)

There are as many things to be excited about as to fear.

If we are smart enough to develop technologies that solve the problems of disease, hunger, energy, and education, we can — and surely will — develop solutions to our social problems.  (Then why is famine still harvesting millions of people?)

But we need to start by understanding where we are headed and prepare for the changes.

We need to get beyond the claims of a Luddite fallacy — to a discussion about the new future.


Vivek Wadhwa is a fellow at Rock Center for Corporate Governance at Stanford University, director of research at Center for Entrepreneurship and Research Commercialization at Duke, and distinguished fellow at Singularity University.

His past appointments include Harvard Law School, University of California Berkeley, and Emory University. Follow him on Twitter @wadhwa.

Though robots and artificial intelligence will erase jobs and bring social challenges, they may also provide an opportunity for humanity to uplift itself.
singularityhub.com

 

Caution: Artificial Intelligence is a Frankenstein

In the late 1980s, artificial intelligence programs relied on practicing experts in practical fields to extract the “how to, and how to go about it when a problem hits the system,” using a series of “What if” questions. These programs were designed in anticipation of many experts going into retirement, and of the need to train newcomers at the least cost while hiring the minimum number of new employees.

Artificial intelligence has progressed and branched into many fields, and this time around it is the professionals in labs who are designing the sophisticated software.

An open letter calling for caution to ensure intelligent machines do not run beyond our control has been signed by a large and growing number of people, including some of the leading figures in artificial intelligence.

“There is now a broad consensus that (AI) research is progressing steadily, and that its impact on society is likely to increase,” the letter said.

“The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable,” it added.

“Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”

How to handle the prospect of autonomous weapons that might kill indiscriminately, the liabilities of automatically driven cars, and the prospect of losing control of AI systems so that they no longer align with human wishes were among the concerns raised in the letter that signees said deserve further research.

Scientists urge artificial intelligence safety focus

Jan 12, 2015

Roboy, a humanoid robot developed at the University of Zurich, at the 2014 CeBIT technology trade fair on March 9, 2014 in Hanover, Germany

Scientists and Engineers Warn Of The Dangers Of Artificial Intelligence

January 13, 2015 | by Stephen Luntz

Fears of our creations turning on us stretch back at least as far as Frankenstein, and films such as The Terminator gave us a whole new language to discuss what would happen when robots stopped taking orders.

However, as computers beat (most of) us at Jeopardy and self-driving cars appear on our roads, we may be getting closer to the point where we will have to tackle these issues.

In December, Stephen Hawking kicked off a renewed debate on the topic.

As someone whose capacity to communicate depends on advanced computer technology, Hawking can hardly be dismissed as a Luddite, and his thoughts tend to attract attention.

The letter was initiated by the Future of Life Institute, a volunteer organization that describes itself as “working to mitigate existential risks facing humanity.” The letter notes:

“As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research.

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable.

Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”

The authors add that “our AI systems must do what we want them to do,” and have set out research priorities they believe will help “maximize the societal benefit of AI.”

Anyone can sign, and at the time of this writing well over a thousand people have done so. While many did not indicate an affiliation, names such as Elon Musk and Hawking himself are easily recognized.

Many of the other names on the list are leading researchers in IT or philosophy, including the IBM team behind the Watson supercomputer.

So much intellectual and financial heft may make their prospects good for conducting research in the areas proposed. Musk has said he invests in companies researching AI in order to keep an eye on them.

Musk worries that even if most researchers behave responsibly, in the absence of international regulation, a single rogue nation or corporation could produce self-replicating machines whose priorities might be very different from humanity’s; and once industries become established, they become resistant to control.

Can a Robot emulate human emotions? That should not be the question

A robot programmed with an artificial intelligence that can learn how to love and express emotions would be feasible, and highly welcome.

A child robot, David, could acquire and follow the various stages of children’s emotional development, all the way to adulthood.

The question is why scientists should invest time and energy creating robots that would exacerbate the current calamities people experience and witness in the consequences and trials of human emotions and love.

Have we not gotten enough of the negative jealousy that generates serious pain, frustration, beating, castration, killing…?

It is becoming evident that parents will no longer enjoy adequate quality time and opportunities to care full-time for nurturing their kids.

A kid-nurturing robot at home would be the best invention for the stability and healthy emotional development of isolated kids in the future.

If robots are to convey emotions and feelings, they had better extend proper nurturing examples that kids at home may emulate…

Robots must learn to listen to the kids, ask questions, circumvent human shortcomings in failing to communicate, and overcome the tendency of kids to build negative fictitious myths and role-played empathy projected onto relationships…

Steven Spielberg’s movie “AI” investigated the limits of man and machine confronted with these ineluctable problems:

1. The child’s separation from family members, particularly the mother’s early emotional attachment… The moment we discover that our mother is not perfect and our father is a coward…

2. The moment it dawns on the child that we are not unique, perfect, or really loved… as we wished we were…

3. The moment we realize that we are no longer the center of the universe and that the community is too busy to care for our future…

4. The moment we accept that we are “all alone” and have to fend for our own health, safety, and mental sanity…

5. The moment we feel that we were left bare and unprepared to face the desolate world around us…

Should the kid robot replace the myth of the “Blue Fairy”? This fairy is supposed to:

1. Heal the torn parts in the separation with family members…

2. Render possible what we came to learn as irreversible, irreparable, and almost unfeasible…?

3. Convince us that there is always a person out there who will love us and be a true friend for life…

4. Bring our way this person who suffered and felt wounded as we are…

5. Keep at bay those cannibals, ever ready to sacrifice man and animal under the pretense of “celebrating life”…

We do need such a robot to nurture babies into adulthood for empathy, communication, expressing gratitude…

A child robot with unconditional devotion, soft-spoken, cultured, patient, and willing to listen to our lucubrations…

The happy ending that teaches us to grasp and grab on the fleeting moments of rich happiness, to taste the powerful instants of tenderness…

Freed at last from illusion, myths and these comfortable peaceful world views we thought we had acquired in childhood…

We do live on the assumption of recovering what we had lost, learning that what we lost “Never existed” in the first place…

At least, a compassionate kid robot would extend, now and then, at critical difficult moments, a glimpse of our childhood innocent belief system, of a world of goodness, sensibility, and wonder…

Little robot David should learn how and when to inject a healthy dose of emotional adrenaline to keep us sane, and ready to face the real world with more courage, more determination to disseminate what is good in us, the compassion needed to sustain and maintain our hope in a better future…

Note: This post was inspired by an article in the monthly Lebanese magazine Sante/Beaute #21. The article was not signed, but the source may be www.shaomi blog.net

Noam Chomsky on “Occupy Wall Street protests”
 
I have reviewed and published a dozen articles on Noam Chomsky’s books and his social, educational, and political positions. It should be no surprise to my readers that I disseminate Noam Chomsky’s latest political stand, on the “Occupy Wall Street protests”. Danny Garza, who has been on the ground of the Occupy Wall Street movement since day one, received this email from Chomsky.

Chomsky mailed: “Anyone with eyes open, knows that the gangsterism of Wall Street — financial institutions generally — has caused severe damage to the people of the United States (and the world). And should also know that it has been doing so increasingly for over 30 years, as their power in the economy has radically increased, and with it their political power.

(These behaviors) have set in motion a vicious cycle that has concentrated immense wealth, and with it political power, in a tiny sector of the population, a fraction of 1%, while the rest of the (citizens) have increasingly become what is sometimes called “a precariat”, seeking to survive in a precarious existence (a term mostly used by mass demonstrators in Spain and Portugal). They also carry out these ugly activities with almost complete impunity: not only too big to fail, but also “too big to jail”.

The courageous and honorable protests underway in Wall Street should serve to bring this calamity to public attention, and to lead to dedicated efforts to overcome it and set the society on a more healthy course.”

Warren Buffett’s letter published in The New York Times a month ago might have strengthened the convictions of many US citizens about the state of affairs and catalyzed them into starting somewhere:  https://adonis49.wordpress.com/2011/09/19/warren-e-buffett-tax-us-hard-stop-coddling-with-the-ultra-rich/

Note 2: Chomsky is quoted as saying on wikiquote.com: “The best way to restrict democracy is to move the decision-making from the public to unaccountable institutions: kings and princes, priestly castes, military juntas, party dictatorships, or modern corporations.” He describes the U.S. political system as a very marginal affair, made up of two political parties but essentially of the same ideology, “the Business Party”, run by a group of intellectuals who consist of a herd of independent thinkers.

He continued: “Unfortunately, you can’t vote the rascals out, because you never voted them in, in the first place.” (Government in the Future, Poetry Center of New York, February 16, 1970)

In June of 2011, Chomsky was awarded the Sydney Peace Prize in honor of promoting human rights, unfailing courage, and critical analysis of power as an American linguist. This is the only International Peace Prize awarded in Australia, promoting peace with justice.

Chomsky was also inducted into the IEEE Intelligent Systems’ AI Hall of Fame for his significant contributions to the field of artificial intelligence and intelligent systems.

Read more: http://www.digitaljournal.com/article/311959#ixzz1ZobwgJUG

Research on the brain or the mind: How is it done?

I attended a session of TEDx talks in Awkar (Lebanon).  The meeting started around 10 pm and ended at 1:30 am, and we watched several TED talks on brain research and language.  The discussion and the friendly company inspired this article.

Since the Italian Galvani’s experiments on the reactions of frogs to electrical impulses in the 18th century, the study of brain functions has basically relied on the binary (on/off) activities of neurons and nerves.

Currently, experiments are done using non-intrusive tools and techniques such as light (photo-voltaic) energy impulses.  The pores of particular axons in networks of neurons and synapses in insects are activated by the light; the insect is thus programmed to behave as the lights go on and off.

Research is focusing on selecting specialized networks of neurons that can be activated and programmed so that particular functions of the brain are localized and controlled.  This strategy says: “Let us investigate sets of neuron networks with definite functions.  As more networks are identified, extrapolating procedures might shed better light on how the brain functions.”

It seems that this research strategy is adopted frequently among teams of neuroscientists.

Basically, although the brain does not function as current computers do (advanced computers are being tested that work on living organisms, such as bacteria programmed with artificial intelligence rules), the brain and nervous system are activated in binary modes, as a computer is, by surges of energy impulses.  Hormones (chemical compounds) in the body activate and deactivate neurons for particular functions in the brain and the body.
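This binary, hormone-gated framing can be sketched in a few lines of Python. This is a toy illustration, not a biological model: the threshold, weights, and the single on/off “hormone” flag are arbitrary assumptions made for the example.

```python
# Toy sketch of the on/off framing above: a neuron "fires" in binary
# fashion when its weighted input sum crosses a threshold, and a gating
# "hormone" signal can switch the whole response on or off.

def neuron_fires(inputs, weights, threshold=1.0, hormone_active=True):
    """Return True if the weighted input sum reaches the threshold,
    but only while the gating 'hormone' signal is active."""
    if not hormone_active:
        return False
    activation = sum(x * w for x, w in zip(inputs, weights))
    return activation >= threshold

# With the hormone "on", strong input fires the neuron...
print(neuron_fires([1, 1], [0.6, 0.6]))                        # True
# ...and the very same input is silenced when the hormone is "off".
print(neuron_fires([1, 1], [0.6, 0.6], hormone_active=False))  # False
```

The point of the sketch is only the two-level control: the input pattern decides the binary firing, while the hormonal signal decides whether firing is possible at all.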

I would like to suggest a complementary strategy for neuron research based on investigating pairs of hormones as a guiding program.

The idea is to map the particular pairs of hormones, among the hundreds of them, that are specialized in firing and cancelling out stimuli for activating certain tasks.

The next step is to construct a taxonomy of all the tasks and functions of the body, and then to regroup the tasks that share the same network of neurons activated by a particular pair of hormones.

The set of tasks for a pair of hormones does not necessarily engage a direct function: the tasks may be accessory and complementary to a function, such as controlling, maintaining, deciding, motor action, feedback criticism, acting, learning…

The variety of hormones corresponds to the different external senses, internal senses, and special nervous structures and molecular cells in the body and the brain.    The number of hormones is countable, but the number of combinations of pairs of (on/off) hormones is vast. I suppose that a hormone might play a valid role in several tasks, while its opposing hormone might be different for other sets of tasks.
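The mapping-and-regrouping step proposed above can be sketched as a simple data structure. The hormone names and task labels below are hypothetical placeholders chosen for illustration, not real physiological data.

```python
# Sketch of the proposed strategy: annotate each task with an assumed
# (activating, deactivating) hormone pair, then regroup the tasks that
# share the same pair -- the step that would hint at a shared neuron
# network.  All names here are hypothetical.
from collections import defaultdict

task_to_pair = {
    "raise_heart_rate": ("hormone_A", "hormone_B"),
    "flee_response":    ("hormone_A", "hormone_B"),
    "induce_sleep":     ("hormone_C", "hormone_D"),
}

pair_to_tasks = defaultdict(list)
for task, pair in task_to_pair.items():
    pair_to_tasks[pair].append(task)

for pair, tasks in sorted(pair_to_tasks.items()):
    print(pair, "->", sorted(tasks))
```

With real data, the taxonomy would be far larger, and a task could appear under several pairs, matching the conjecture that one hormone plays a role in several tasks with different opposing hormones.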

I have the strong impression that research on animals and insects is not solely based on moral grounds or ethical standards.  The practical premise is that animals are far more “rational” in their “well-behaved” habits than mankind.  Thus, experiments on mostly male insects (even female insects have more complex behaviors and body instability) are more adequate for logical designs.

The variability (in types and numbers) in experimenting with particular animal species is vastly less systematic than experimenting with mankind:  for one thing, we are unable to communicate effectively with animal species, and so we have excuses to hide our design shortcomings under the carpet.

I think there is a high positive correlation between longevity in the animal kingdom and level of “intelligence”.

Species that live long must have flexible nervous systems that rejuvenate, instead of the mostly early hard-wired nervous systems of short-lived species.

Consequently, the brains of long-lived species are constantly “shaking”, meaning cogitating and thinking when faced with new conditions and environments.

Mankind observed the short-lived species (with mostly hard-wired nervous systems) and applied control mechanisms to societies based on those “well-behaving” animals as models for the control and organization of communities of mankind.

It is no surprise that control mechanisms on human societies have failed so far in the long term:  man is endowed with a brain that shakes constantly, rejuvenates most of his nervous cells, and submits only momentarily to a control mechanism, though long enough to subdue a community for many years.

Note:  You may read my article on bacteria running supercomputers on https://adonis49.wordpress.com/2010/09/19/bacteria-running-supercomputers-how/

Who is towing sciences? (Dec. 6, 2009)

In the previous two centuries, it was mathematics that was towing the sciences, and especially physics in its theoretical aspects. Actually, most theories in the sciences were founded on abstract mathematical theorems that had been demonstrated many decades earlier by mathematicians.

It appears that this century is witnessing a different trend: sciences are offering opportunities for mathematicians to expand their fields of interests, away from the internal problems of solving conjectures (axioms or hypotheses) that were enunciated a century ago.

Cédric Villani, professor of mathematics at the Institut Henri Poincaré in Paris, thinks that physics still remains the main engine for mathematicians in opening new fields of study.  For example, the equations of fluid mechanics are not yet resolved (those related to Navier-Stokes and Euler); nor are compressed fluids, Bose-Einstein condensation, or rarefied gas environments. We cannot even explain why water boils.

There is also the study of how the borders separating two phases of equilibrium behave in chaotic, random, and unstable physical systems.  The mathematician Wendelin Werner (Fields Medal) has been interested in that problem, and I will publish a special post on how he resolved these phenomena.

There is an encouraging tendency among a few mathematicians to deal with newly emerging fields in the sciences.  The most promising venue is computer science, or “informatique”. In computer science, the problems of verification (for example, the theory of verifying proofs) are capturing interest: mathematical tools for validating theorems in the realm of logic, or for exhuming errors, are challenging.
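Proof verification of this kind can be illustrated with a proof assistant such as Lean, where the machine accepts a statement only if the supplied proof term actually checks. A trivial sketch, assuming Lean 4 and its core library:

```lean
-- The checker accepts this declaration only if the proof term really
-- establishes the stated proposition; commutativity of natural-number
-- addition is already proved in the core library as Nat.add_comm.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

Swapping in a wrong proof term (say, `Nat.add_assoc`) would make the checker reject the declaration, which is exactly the “exhuming errors” role described above.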

Biology is an exciting field, but it hasn’t captured the interest of mathematicians: the big illusion that mathematicians would approach biology faded away simply because fields related to the living world are too variable in complexity to attract mathematicians. The number of “variables” is “absurdly” large and does not lend itself to the simple, clean-cut laws that would prove the world is well structured and “mathematically” ordered.

This was the case in the 70s for cognitive science and artificial intelligence: scientists in those two fields hoped that mathematicians would get interested and drive them to innovative results, but nothing much happened.

The good trend is a kind of social reorganization within the mathematical community for deeper cooperative undertakings in solving problems.  The web has in a way revolutionized cooperation; it opened up a great highway for sharing ideas instantly and collaborating among several researchers.

For example, Terry Tao and Tim Gowers propose problems on their blogs (the Polymath project), and the names of contributors are then disseminated after a problem comes to fruition.

Still, individual initiatives are the norm; for example, Tao and Gowers ended up solving the problems most of the time.  Mikhail Gromov (Abel Prize) has given geometry a new lifeline in mathematics.

It appears that “significant mathematics” basically decoded how the brain perceives “invariants” in what the senses transmit to it.  I conjecture that individual experiences are what generate the intuitive concepts, analogies, and various perspectives for viewing a problem. Most mathematical theories were essentially founded on stored visual and auditory perceptions.

