Adonis Diaries

Archive for the ‘Mathematics’ Category

An exercise: taxonomy of methods

Posted on: June 10, 2009

Article #14 in Human Factors

I am going to let you try your hand at classifying methods by providing a list of various methods that could be used in industrial engineering, human factors, ergonomics, and industrial psychology.

The first list of methods is organized in the sequence used to analyze part of a system or a mission;

The second list is thrown in without much order, though not deliberately randomized; otherwise this would not be an excellent exercise.

First, let us agree that a method is a procedure, or a step-by-step process, that our forerunners of geniuses and scholars have tested, found good, agreed on by consensus, and offered for you to use for the benefit of progress and science.

Many of you will still try hard to find shortcuts to anything, including methods, on the petty argument that the best criterion for discriminating among clever people is who wastes time on methods and who is a nerd.

Actually, the main reason I don’t try to teach many new methods in this course (Human Factors in Engineering) is that students might run smack into real occupational stress, which they are not immune to, especially since methods in human factors are complex and time consuming.

Here is this famous list of a few methods; you are to decide which ones are still in the conceptual phase and which have been “operationalized”.

The first list contains the following methods:

Operational analysis, activity analysis, critical incidents, function flow, decision/action, action/information analyses, functional allocation, task, fault tree, failure modes and effects analyses, timeline, link analyses, simulation, controlled experimentation,  operational sequence analysis, and workload assessment.

The second list consists of methods that human factors professionals are trained to use if need be, such as:

Verbal protocol, neural network, utility theory, preference judgments, psycho-physical methods, operational research, prototyping, information theory, cost/benefit methods, various statistical modeling packages, and expert systems.

Just wait, let me resume.

There are those that are intrinsic to artificial intelligence methodology such as:

Fuzzy logic, robotics, discrimination nets, pattern matching, knowledge representation, frames, schemata, semantic network, relational databases, searching methods, zero-sum games theory, logical reasoning methods, probabilistic reasoning, learning methods, natural language understanding, image formation and acquisition, connectedness, cellular logic, problem solving techniques, means-end analysis, geometric reasoning system, algebraic reasoning system.

If your education is multidisciplinary you may catalog the above methods according to specialty disciplines such as:

Artificial intelligence, robotics, econometrics, marketing, human factors, industrial engineering, other engineering majors, psychology or mathematics.

The most logical grouping is along the purpose, input, process/procedure, and output/product of the method. Otherwise, it would be impossible to define and understand any method.

Methods could be used to analyze systems, provide heuristic data about human performance, make predictions, generate subjective data, discover the cause and effects of the main factors, or evaluate the human-machine performance of products or systems.

The inputs could be qualitative or quantitative, such as declarative, categorical, or numerical data, generated from structured observations, records, interviews, questionnaires, computer simulations, or the outputs of prior methods.

The outputs could be point data, behavioral trends, graphical in nature, context specific, generic, or reduction in alternatives.

The process could be a creative graphical or pictorial model; a logical hierarchy or network alternative; or operational, empirical, informal, or systematic.

You may also group these methods according to their mathematical branches such as algebraic, probabilistic, or geometric.

You may group them by their deterministic, statistical-sampling, or probabilistic character.

You may differentiate the methods as belonging to categorical, ordinal, discrete or continuous measurements.

You may wish to investigate the methods as parametric or non-parametric, distribution-free or assuming normally distributed populations.

You may separate them by their representation forms, such as verbal, graphical, pictorial, or tabular.

You may discriminate them on heuristic, observational, or experimental scientific values.

You may bundle these methods on qualitative or quantitative values.

You may as well separate them on their historical values or modern techniques based on newer technologies.

You may sort them by their state of the art: ancient methods whose validity new information and new paradigms have refuted, versus recently developed ones.

You may define the methods as those digitally or analytically amenable for solving problems.

You may choose to draw several lists of those methods that are economically sound, esoteric, or just plainly fuzzy sounding.

You may opt to differentiate these methods between those requiring a level of mathematical reasoning that is beyond your capability and those that can be comprehended through persistent effort.

You could as well sort them according to which ones fit nicely into courses you have already taken, though you failed to recollect that they were indeed methods worth acquiring for your career.

You may use any of these taxonomies to answer an optional exam question with no guarantees that you might get a substantial grade.

It would be interesting to collect statistics on how often these methods are being used, by whom, for what rationale, by which line of business, and by which universities.

It would be interesting to translate these methods into Arabic, Chinese, Japanese, Hindi, or Russian.

You’re Not Dumb.

Are you doing your due diligence to acquire comprehensive knowledge of cultures, civilization, nature, environment, communication…?

Are you prepared with the subject matter before you compete in school studies?

This may surprise you, but Sal Khan used to skip classes at MIT. (Very normal behaviour if you never joined team sports or served in the military)

Khan is perhaps the best-known teacher in the world today. (entrepreneur.com | By Kim Lachance Shandrow)

Lectures were too long and boring: “I found it much more valuable to learn the material at my own time and pace”

“I learned a lot more going into the computer lab or the science lab or the circuits lab, fiddling with things and playing and getting my hands dirty.” (That’s called training your experimental mind in education methods)


“Whoever you are, wherever you are. You only have to know one thing: you can learn anything.”

Khan Academy Founder: No, You’re Not Dumb. Anyone Can Learn Anything.

That same renegade spirit of independence and innovation, of learning on your own terms and on your own time, is still the heart and soul of Khan Academy, the “revolutionary, controversial” online learning platform that this 38-year-old math whiz engineer singlehandedly founded 10 years ago.

What began as a handful of tutoring videos that the former hedge fund analyst uploaded to YouTube, to help his cousins with their algebra homework, mushroomed into a massive digital classroom for the world.

To date, the free, non-profit learning hub has delivered more than 580 million of Khan’s straightforward video lessons on demand, with students completing around 4 million companion exercises on any given day.

The Academy is in the midst of a growth spurt offline as well, with more than 1 million registered teachers around the globe incorporating the supplemental teaching tool into their classrooms.

We recently caught up with Khan, who discussed how his own education shaped his passion project, his belief that anyone can learn anything and what’s next for Khan Academy, online and off.

How did you develop a passion for education? Who inspired you?

Education has helped me a lot. My father’s side of the family was very active in education.

My parents separated when I was two and then my father passed away, so I never really knew that side of the family. But when I got to know my father’s side, I found them intensely academic.

My mother’s side of the family, they’re more the artists. We have a lot of dancers and singers, which doesn’t fit the stereotype that they’re all engineers and all super invested in math.

I went to a fairly normal, middle-of-the-road public school in a suburb of New Orleans, but it gave me huge opportunities. I had a lot of friends there who were just as smart as I am.

They seemed to learn things just as fast, but they were hitting walls in algebra class and chemistry class.

That’s when I started questioning the notion of mastery-based learning. It wasn’t completely obvious to me then, but I just knew something was off.

You often say that anyone can learn anything. Why do you think that?

If you’re doing well in school, you can say one of two things: “Oh, well, I have the DNA for doing it.” Or you can say: “No, my brain was able to tackle it. I had the right mindset.” I saw those ideas in action early in high school.

Also, I tutored others as part of this math honors society I was in. I noticed that if you tutored people the right way, engaged with them the right way, they would improve.

I saw C and D students all of the sudden do very, very well and become some of the best math students in the state.

Then I went to college at MIT and saw a lot of people struggle there, too, mainly because they weren’t adequately prepared. It was the same thing.

It was clear to me that it wasn’t intelligence at play, it was much more preparation. The people who did well were the people who saw the material for the third time, had a lot of rigor and didn’t have any gaps in their knowledge.

The people who really struggled were the folks who weren’t familiar with the material and didn’t have a super solid grasp. It has nothing to do with some type of innate intelligence.

How are you taking Khan Academy out from behind the Internet and into the real world?

We piloted a program called LearnStorm in the Bay Area (of San Francisco, California) last year. We’re expanding it to three to five other areas this spring.

We hope it will function nationwide by 2017. It goes beyond the core skill work we do on Khan Academy, tying it into monthly challenges that are intended to be done in a physical environment, in your math class with your teacher.

LearnStorm came from the idea that we can create these great experiences online that are aligned with standards, that are really good for students, and that correlate with success metrics, but you need the students to engage with them.

On our own, we can create a lot of neat game mechanics and all sorts of things on the site, but nothing beats having physical people who are part of your life, especially your teachers, your school and your peers, involved in your learning.

More recently, we worked with Disney Pixar to bridge the disconnect between what students learn about math and science at school and tackling creative challenges in the real world with an initiative called Pixar in a Box.

Our relationship with Pixar makes it very clear that math, science, creativity and storytelling aren’t separate things. They can all happen together.

Why the recent pivot to a growing list of local, offline projects when you originally set out to be a digital classroom for the world?

This isn’t the first time we’ve branched out offline. From day one, I immediately reached out to teachers to see if they’d want to use Khan Academy and to get their feedback on our features. In 2010, we started with the Los Altos school district here in Northern California.

Plus, there’s a whole teacher resources section on Khan Academy, so we’ve always had this dimension.

What’s different now isn’t us working with a handful of classrooms in a very high-touch way. It’s us being able to work with many more teachers and, frankly, they’re able to do a lot of the heavy lifting around mindset, metacognition, and getting students into it, and we provide the tools.

When we say that our vision statement is a free, world-class education for anyone anywhere, it doesn’t mean that it’s just going to all happen through our software, through our content.

As an organization, we view it as part of our mission to up how we interface with all of the other incredible stakeholders in this ecosystem, especially teachers and schools, to figure out how we educate students together, not just all from one site.

What will the classroom of the future look like and how will Khan Academy play a role?

You won’t need lectures in class any more. Those can happen on students’ own time. Using exercises, students can progress at their own pace, which is how the Khan Academy software works.

Instead, in-class time can be spent having peer-to-peer Socratic dialogues, case-based discussions, programming and project-based learning.

Why can’t teachers co-teach and mentor each other?

Why separate students by perceived ability or age?

Can’t you benefit from older students mentoring younger students? When classrooms are not one pace, when it’s all not lectured-based, it opens up all sorts of possibilities.

What’s the next big tech innovation in education, even bigger than the Internet?

Virtual reality, though my gut says it’s going to be about 10 years before we see major potential here. It’s very early right now. I can imagine that in about a decade, when you come to Khan Academy, you’ll literally feel like you’re in a virtual place of learning and in a community.

You’ll see people walking around in a virtual world. Who knows? I don’t know if that’s in 10 or 20 years, but I think that’s going to happen.

Aside from virtual reality integration, what else is on the horizon for Khan Academy?

We’re going to be available in all of the world’s major languages on all of the major platforms, whether it’s a cheap smartphone or an Oculus Rift. The more the better. We’re working on translating all of our resources into more than 36 languages, with thousands of volunteers helping us subtitle videos.

Are any new subjects in the works? Topics outside of the traditional academic realm, like, say, yoga and meditation perhaps?

No, nothing like that at the moment, although I do love yoga. We already have a lot of material in physics and chemistry and biology, but we want to really nail those core academic subjects.

Expect to see a lot from us in history and civics over the next year, along with interesting things around grammar, writing and programming.

What advice do you have for entrepreneurs who hope to be as astronomically successful as you?

I cringe at the term “astronomically successful,” because it sure doesn’t feel like I am. As for advice, though, I think every entrepreneur should know what they’re getting into, that there are moments of extreme stress and pain that aren’t so obvious sometimes when you read about startups in the press.

Still, all entrepreneurs go through it. You need to be prepared for it and know that it’s normal when you’re in the midst of it.

The transcript that follows has been edited for clarity and brevity.


Incomplete: Simplify (Einstein, Godel, Turing, Chaitin…)

One thing we know is that life reinforces the hypothesis that the world is infinitely complex and most of its phenomena will remain incomprehensible, meaning unexplained.

For example, no theory of evolution has been able to predict the next phase in evolution, or the route taken to reach it. Hence our difficulty in discovering how living organisms adapt to the environment to survive over the longer term.

We don’t know if laws in biology exist in the same sense as laws of physics or other natural phenomena.

For example, is the universe simple or complex, finite or infinite?

The mathematician Chaitin answered: “This question will remain without any resolution, simply because we need an external observer outside our system of reference, preferably non-human, to corroborate our theoretical perception.”

(A few of my readers will say: “This smacks of philosophy”, and they hate philosophy, or the rational logic deduced from reduced propositions that cannot rationally be proven.)

So many scholars wanted to believe that “God does not play dice” (Einstein) or that chaos is within the predictive laws of God and nature (Leibniz), or that the universe can be explained by simple, restricted set of axioms, non-redundant rules (Stephen Hawking).

Modern mathematical theories and physical observations are demonstrating that most phenomena are basically behaving haphazardly.

For example, quantum physics reveals that hazard is the fundamental principle in the universe of the very tiny particles: the individual behaviors of small particles in the atomic nucleus are unpredictable. Thus, there is no way of accurately measuring the speed, location, and direction of a particle simultaneously; all that physics can do is assign probability numbers.

Apparently, hazard plays a role even in mathematics.

For example, many mathematically “true” statements cannot be demonstrated; they are logically irreducible and incomprehensible.

Mathematicians believe that there exists an infinity of “twin” primes (pairs of primes that differ by two, such as 11 and 13), but this conjecture has not been proven mathematically.

Thus, many mathematicians would suggest adding these true but non-demonstrable “propositions” to the basic set of axioms.

Axioms are the bare minimum set of “given propositions” that we think we know to be true, but that reason cannot adequately reach using logical processes alone.

Einstein said: “What is amazing is that the eternally incomprehensible in nature is comprehensible”; meaning that we always think we can extend an explanation to a phenomenon without being able to prove its working behaviors.

Einstein wrote that to comprehend means to rationally explain by compressing the basic axioms so that our mind can understand the facts; even if we are never sure how the phenomenon behaves.

For example, Plato said that the universe is comprehensible simply because it looks structured by the beauty of geometric constructs, the regularity of tonality in string instruments, and the steady movement of the planets…

Steven Weinberg admits that “If we manage to explain the universal phenomenon of nature it will not be feasible by just simple laws.” (I agree with Weinberg in that statement. Consequently, comprehension will be limited to the few scientists who can handle and visualize complex equations)

Many facts can be comprehended when they are explained by a restricted set of theoretical affirmations. This is called Occam’s Razor: “The best theory or explanation is the simplest.”

The mathematician Hermann Weyl explained: “We first need to confirm that nature is regulated by simple mathematical laws. Then the fundamental relationships become simpler the further we fine-tune the elements, and the explanation of facts becomes more exact.”

So what is a theory?

Informatics offers another perspective for defining a theory: “A theory is a computer program designed to account for observed facts by computation. Thus, the program is designed to predict observations. If we say that we comprehend a phenomenon, then we should be able to program its behavior. The smaller (more elegant) the program, the better the theory is comprehended.”

When we say “I can explain”, we mean that “I have compressed a complex phenomenon into simple programs that I can comprehend”, that the human mind can comprehend.

Basically, explaining and comprehending is of an anthropic nature, within the dimension of human mental capabilities.

The mathematician John von Neumann wrote: “Theoretical physics mainly categorizes phenomena and tries to find links among the categories; it does not explain phenomena.”

In 1931, the mathematician Kurt Gödel adopted a mental operation consisting of indexing lists of all kinds of assertions.

His formal mathematical method demonstrated that there are true propositions that cannot be demonstrated, called “logically incomplete” problems.

The significance of Gödel’s theorem is that it is impossible to account for the elementary arithmetic operations (addition or multiplication) by deducing all their results from a few basic axioms. In any given set of logical rules, except the most simple, there will always be statements that are undecidable, meaning that they can neither be proven nor disproven, due to the inevitable self-referential nature of any logical system.

The theorem indicates that there is no grand mathematical system capable of proving or disproving all statements.

An undecidable statement can be thought of as a mathematical form of a statement like “What I just said is a lie”: since the statement makes reference to the language being used to describe it, it cannot be known whether the statement is true or not.

However, an undecidable statement does not need to be explicitly self-referential to be undecidable. The main conclusion of Gödel’s incompleteness theorems is that all logical systems will have statements that cannot be proven or disproven; therefore, all logical systems must be “incomplete.”

The philosophical implications of these theorems are widespread.

These theorems suggest that in physics, a “theory of everything” may be impossible, as no set of rules can explain every possible event or outcome. They also indicate that, logically, “provable” is a weaker concept than “true”.

Such a concept is unsettling for scientists because it means there will always be things that, despite being true, cannot be proven to be true. Since these theorems also apply to computers, they also mean that our own minds are incomplete and that there are some ideas we can never know, including whether our own minds are consistent (i.e., whether our reasoning contains no contradictions).

The second of Gödel’s incompleteness theorems states that no consistent system can prove its own consistency, meaning that no sane mind can prove its own sanity.

Also, since that same theorem states that any system able to prove its own consistency must be inconsistent, any mind that believes it can prove its own sanity is, therefore, insane.

Alan Turing gave Gödel’s results a deeper twist.

In 1936, Turing indexed lists of programs designed to compute real numbers between zero and 1 (think of probabilities as real numbers). Turing demonstrated mathematically that no infallible computational procedure (algorithm) exists that permits deciding whether a mathematical theorem is true or false.

In a sense, there can be no algorithm able to know whether a computer program will ever stop.

Consequently, no computer program can predict whether another program will ever stop computing. All that can be done is to allocate a probability that the program might stop. Thus, you can play around with all kinds of axioms, but no set of them can deduce that a program will end. Turing thereby proved the existence of non-computable numbers.
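To make Turing’s diagonal argument concrete, here is a minimal Python sketch; `halts` is a hypothetical oracle (my naming, for illustration only), and the whole point is that no real implementation of it can exist:

def make_paradox(halts):
    # Given a claimed halting oracle halts(program, input) -> bool,
    # build a program that defeats it.
    def paradox(program):
        if halts(program, program):   # oracle says program(program) halts...
            while True:               # ...so do the opposite: loop forever.
                pass
        return                        # ...otherwise, halt immediately.
    return paradox

# Any candidate oracle is defeated by its own paradox program:
candidate = lambda program, data: True   # a (necessarily wrong) stub oracle
paradox = make_paradox(candidate)
# `candidate` claims paradox(paradox) halts, yet by construction it loops
# forever; flipping the stub's answer yields the opposite contradiction.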

Note 1: Chaitin considered the set of all possible programs; he played dice for each bit in the program (0 or 1, true or false) and allocated a probability number to each program that it might end. The probability that a program will end in a finite number of steps is called Omega. The succession of digits comprising Omega is haphazard, and thus no simple set of axioms can deduce the exact number. So while Omega is defined mathematically, the succession of its digits has absolutely no structure. For example, we can write an algorithm to compute Pi, but never one for Omega.
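In symbols (a standard textbook formulation, added here for illustration rather than taken from Chaitin’s own wording), the halting probability for a prefix-free universal machine U is:

\displaystyle \Omega = \sum_{p \,:\, U(p) \text{ halts}} 2^{-|p|}, \qquad 0 < \Omega < 1,

where |p| is the length of program p in bits, so each halting program of length n contributes 2^{-n} to the total.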

Note 2: Bertrand Russell (1872-1970) tried to rediscover the founding blocks of mathematics, “the royal highway to truth”. He was disappointed and wrote: “Mathematics is infected with unproven postulates and infested with cyclic definitions. The beauty and the terror of mathematics is that a proof must be found, even if it proves that a theory cannot be proven.”

Note 3: The French mathematician Poincaré received a prize for supposedly having discovered chaos. The article had already been officially published when Poincaré realized that he had made a serious error disproving his original contention. Poincaré had to pay for recalling and destroying the published copies. A single surviving copy was found at the Mittag-Leffler Institute in Stockholm.

A few chaotic glitches in sciences and philosophy?

Note: Re-edit of “Ironing out a few chaotic glitches” (Dec. 5, 2009)

This Covid-19 pandemic has forced me to repost this old article.

Philosophers have been babbling for thousands of years about whether the universe is chaotic or structured enough that rational and logical thinking can untangle its laws and comprehend nature’s behaviors and phenomena.

Plato wrote that the world is comprehensible.  The world looked like a structured work of art built on mathematical logical precision. Why?

Plato was fond of symmetry, geometry, numbers, and he was impressed by the ordered tonality of musical cord instruments.

Leibniz in the 18th century explained: “In whatever manner God created the universe, it must be of the most regular and ordered structure.”

Leibniz claimed that “God selected the simplest hypotheses that generated the richest varieties of phenomena.”

A strong impetus that the universe is comprehensible started with the “positivist philosophers and scientists” of the 20th century who were convinced that the laws of nature can be discovered by rational mind.

Einstein followed suit and wrote “God does not play dice.  To rationally comprehend a phenomenon we must reduce, by a logical process, the propositions (or axioms) to apparently known evidence that reason cannot touch.”

Einstein’s pronouncement, “The eternally incomprehensible thing about the universe is its comprehensibility”, can be interpreted in many ways.

The first interpretation is “what is most incomprehensible in the universe is that it can be comprehensible but we must refrain from revoking its sacral complexity and uncertainty”.

The second interpretation is: if we still think that the universe is not comprehensible, then maybe it is so; as much as we want to think that we may understand it, the universe will remain incomprehensible (and we should not prematurely declare the “end of science”).

The mathematician Hermann Weyl developed the notion: “The assertion that nature is regulated by strict laws is void, unless we affirm that it is regulated by simple mathematical laws. The more we delve into the reduction process toward the bare fundamental propositions, the more facts are explained with exactitude.”

It is this philosophy of an ordered and symmetrical world that drove Mendeleyev to classify the chemical elements; Murray Gell-Mann used “group theory” to predict the existence of quarks.

A few scientists went even further; they claimed that the universe evolved in such a way as to permit the emergence of the rational thinking man.

Scientists enunciated many principles such as:

“The principle of least time” that Fermat used to deduce the laws of refraction and reflection of light;

Richard Feynman discoursed on the “principle of least actions”;

We have the “principle of least energy consumed”, the “principle of computational equivalence”, the “principle of entropy” or the level of uncertainty in a chaotic environment.

Stephen Hawking popularized the idea of a “Theory of Everything” (TOE), a theory based on a few simple, non-redundant rules that govern the universe.

Stephen Wolfram thinks that the TOE can be found by a thorough, systematic computer search: the universe’s complexity is finite, and the most seemingly complex phenomena (for example, cognitive functions) emerge from simple rules.

Before we offer the opposite view, that the universe is intrinsically chaotic, let us define what a theory is.

Gregory Chaitin explained that “a theory is a computer program designed to account for observed facts by computation”.  (Warning to all mathematicians!  If you want your theory to be published by peer reviewers then you might have to attach an “elegant” or the shortest computer program in bits that describes your theory)

Kurt Gödel and Alan Turing demonstrated what is called “incompleteness” in mathematics, the ultimate uncertainty of mathematical foundations. There are innumerable “true” propositions or conjectures that can never be demonstrated.

For example, it is impossible to account for the results of elementary arithmetic such as addition or multiplication by the deductive processes of its basic axioms.  Thus, many more axioms and unresolved conjectures have to be added in order to explain correctly many mathematical results.

Turing demonstrated mathematically that there is no algorithm that can “know” whether a program will ever stop or not. The consequence in mathematics is this: no set of axioms will ever permit deducing whether a program will ever stop. Actually, there exist many numbers that cannot be computed. There are mathematical facts that are logically irreducible and incomprehensible.

Quantum mechanics proclaimed that, on the micro level, the universe is chaotic: it is impossible to simultaneously determine a particle’s location, direction, and velocity. We compute probabilities of occurrences.

John von Neumann wrote: “Theoretical physics does not explain natural phenomena: it classifies phenomena and tries to link or relate the classes.”

Acquiring knowledge was intuitively understood as a tool for improving human dignity by increasing quality of life, thus erasing many of the dangerous superstitions that bogged down the spiritual and moral life of man.

Ironically, the trend took on a negative life of its own in the last century. The subconscious goal of learning became to frustrate the fanatic religiosity that proclaimed God the sole creator and controller of our life, its quality, and its destiny.

With our gained power in knowledge we may thus destroy our survival by our own volition: We can commit earth suicide regardless of what God wishes.

So far, we have been extremely successful, beyond all expectations. We can destroy all living creatures and plants by activating a single H-bomb, or simply by desisting from finding resolutions to the predicaments of climate change.

I have a few impressions.

First, what mathematicians and scientists are doing is not discovering the truth or the real processes, but condensing complexity into simple propositions, so that an individual may think he is able to comprehend the complexities of the world.

Second, nature is complex; man is more complex; social interactions are far more complex.

No mathematical equations or simple laws will ever help an individual comprehend the thousands of interactions among thousands of variables.

Third, we need to focus on rare events. It has been shown that rare events (for example, occurrences at the tails of probability distributions) are the most catastrophic, simply because very few researchers are interested in investigating them: scientists are cozy with the well-structured behaviors that describe collective phenomena.

My fourth impression is that I am a genius without realizing it. Unfortunately, Kurt Gödel is the prime killjoy; he would have mocked me on the grounds that he mathematically demonstrated that any sentence I write is a lie. How would I dare write anything?

Pairing math and music in integrated teaching method?

And Most of us will love doing math?

Like: “If a student can clap out a beat based on a time signature, well, aren’t they adding and subtracting fractions based on music notation? We have to think differently.”

Jazz composer Herbie Hancock studied electrical engineering at Grinnell College before starting his jazz career full-time.

He says there is an intrinsic link between playing music and building things, one that he thinks should be exploited in classrooms across the country, where there has been a renewed emphasis on science, technology, engineering and math (STEM) education.

Hancock joined a group of educators and researchers Tuesday at the U.S. Education Department’s headquarters to discuss how music can be better integrated into lessons on math, engineering and even computer science, ahead of International Jazz Day this weekend.

Education Secretary John B. King Jr. said that an emphasis on math and reading — along with standardized testing — has had the unfortunate side effect of squeezing arts education out of the nation’s classrooms, a trend he thinks is misguided.

“English and math are necessary but not sufficient for students’ long-term success,” King said, noting that under the Every Student Succeeds Act, the new federal education law, schools have new flexibility to use federal funding for arts education.


Jazz composer Herbie Hancock addresses a group at the U.S. Department of Education on April 26, 2016.

Hancock is the chairman of the Thelonious Monk Institute of Jazz, which has developed MathScienceMusic.org, a website that offers teachers resources and apps to use music as a vehicle to teach other academic lessons.

One app, Groove Pizza, allows users to draw lines and shapes onto a circle. The circle then rotates and each shape and line generates its own distinct sound.

It’s a discreet way for children to learn about rhythm and proportions. With enough shapes and lines, children can create elaborate beats on the app, all in the context of a “pizza” — another way to make learning math and music palatable to kids.

Another app — Scratch Jazz — allows children to use the basic coding platform Scratch to create their own music.

“A lot of what we focus on is lowering the barriers to creative expression,” said Alex Ruthmann, a professor of music education at New York University who helped develop the Groove Pizza app.

Other researchers discussed their experiments with music and rhythm to teach fractions and proportionality, a challenging concept for young students to grasp when it is taught in the abstract.

Susan Courey, a professor of special education at San Francisco State University, developed a fractions lesson that has students tap out a beat.

“It goes across language barriers, cultures and achievement barriers and offers the opportunity to engage a very diverse set of students,” Courey said.

In a small study, students who received the music lesson scored 50 percent higher on a fraction test than those who learned with the standard curriculum. “They should be taught together,” Courey said.

Hancock thinks that the arts may offer a better vehicle to teach math and science to some students. But he also sees value in touching students’ hearts through music — teaching them empathy, creative expression and the value of working together and keeping an open mind.

“Learning about and adopting the ethics inherent in jazz can make positive changes in our world, a world that now more than ever needs more creativity and innovation and less anger and hostility to help solve the challenges that we have to help deal with every single day,” Hancock said.


Math Blog and how to write math equations using LaTeX $latex…$

WordPress.com supports LaTeX, a document markup language for the TeX typesetting system, which is used widely in academia as a way to format mathematical formulas and equations.

LaTeX makes it easier for math and computer science bloggers and other academics in our community to publish their work and write about topics they care about.

If you’re a math blogger who writes out equations you’ve worked on, you’ve probably used LaTeX before. If you’re just starting out (or simply curious to see how it all works), we’ve gathered a few examples of great math and computing blogs on WordPress.com that will inspire you.

In general, to display formulas and equations, you place LaTeX code in between $latex and $, like this:

$latex YOUR LATEX CODE HERE$

So for example, inserting this when you’re creating a post . . .

$latex i\hbar\frac{\partial}{\partial t}\left|\Psi(t)\right>=H\left|\Psi(t)\right>$

. . . will display this on your site:

i\hbar\frac{\partial}{\partial t}\left|\Psi(t)\right>=H\left|\Psi(t)\right>

Nifty, huh? Learning LaTeX is like learning a new language, and the bloggers below show just how much you can do. And if you’re not a math whiz, don’t worry! You’re not expected to understand the snippets below, but we hope they show what’s possible.

Gödel’s Lost Letter and P=NP

Suppose Alice gives Bob two boxes labelled respectively {X} and {Y}. Box {X} contains some positive integer {x}, and as you might guess, box {Y} contains some positive integer {y}. Bob cannot open either box to see what integer it holds. Bob can shake the boxes, or hold them up to a bright light, but there is no way he can discover what they contain.

This blog, on P=NP and other questions in the theory of computing, presents the work of Dick Lipton at Georgia Tech and Ken Regan at the University at Buffalo. One of their main goals is to pull back the curtain so readers can understand how research works and who is behind it.

From the recent post “Move the Cheese” to an older piece on “Navigating Cities and Understanding Proofs,” they present problems and sketch solutions, and publish thorough and thoughtful discussions that not only talk about interesting open problems, but offer context and history.

You can see LaTeX in action in the example above, from the recent post “Euclid Strikes Back.”

Math ∩ Programming

Note that we will have another method to determine the necessary coefficients later, so we can effectively ignore how these coefficients change. Next, we note the following elementary identities from complex analysis:

\displaystyle \cos(2 \pi k t) = \frac{e^{2 \pi i k t} + e^{-2 \pi i k t}}{2}
\displaystyle \sin(2 \pi k t) = \frac{e^{2 \pi i k t} - e^{-2 \pi i k t}}{2i}

Jeremy Kun, a mathematics PhD student at the University of Illinois at Chicago, explores deeper mathematical ideas and interesting solutions to programming problems. Math ∩ Programming is both a blog and portfolio, and well-organized: you can use the left-side menu to navigate Jeremy’s sections, from Primers to the Proof Gallery. The site is also clean and well-presented — can you believe he uses the Confit theme, which was originally created for restaurant sites?

The snippet above illustrates more you can do with LaTeX, taken from “The Fourier Series — A Primer.”

Terence Tao

Definition 1 (Multiple dense divisibility) Let {y \geq 1}. For each natural number {k \geq 0}, we define a notion of {k}-tuply {y}-dense divisibility recursively as follows:

  • Every natural number {n} is {0}-tuply {y}-densely divisible.
  • If {k \geq 1} and {n} is a natural number, we say that {n} is {k}-tuply {y}-densely divisible if, whenever {i,j \geq 0} are natural numbers with {i+j=k-1}, and {1 \leq R \leq n}, one can find a factorisation {n = qr} with {y^{-1} R \leq r \leq R} such that {q} is {i}-tuply {y}-densely divisible and {r} is {j}-tuply {y}-densely divisible.

We let {{\mathcal D}^{(k)}_y} denote the set of {k}-tuply {y}-densely divisible numbers. We abbreviate “{1}-tuply densely divisible” as “densely divisible”, “{2}-tuply densely divisible” as “doubly densely divisible”, and so forth; we also abbreviate {{\mathcal D}^{(1)}_y} as {{\mathcal D}_y}.

Mathematician, UCLA faculty member, and Fields Medal recipient Terence Tao uses his WordPress.com site to present research updates and lecture notes, discuss open problems, and talk about math-related topics.

He uses the Tarski theme with a modified CSS (to do things such as boxed theorems).

As stated on his About page, he uses Luca Trevisan’s LaTeX to WordPress converter to write his more mathematically intensive posts. Above, you’ll see an example of how he uses LaTeX on his blog, excerpted from the post “An improved Type I estimate.”

Terence also has a blog category for non-technical posts, aimed at a more general audience, and offers helpful advice on mathematical careers.

Using LaTeX

From  “Euclid Strikes Back,” Gödel’s Lost Letter.

You can read a brief primer on using LaTeX on our Support site and search related forum discussions to see if a WordPress.com user has asked your question.

If you’re dipping in for the first time, we encourage you to check out the resources on our Support site for help and detailed documentation.

We look forward to your posts showing off your math wizardry!


Which machine learning algorithm should I use?

Note: Re-edit of May 11, 2018

Note: in the early 1990s, I took graduate classes in Artificial Intelligence (AI) and neural networks. The concepts are the same, though upgraded with new algorithms and automation.

I recall a book with a table (like the Mendeleev table in chemistry) that contained the terms, mental processes, and mathematical concepts behind the ideas.

There are several lists of methods, depending on the field of study you are more concerned with.

One list consists of methods that human factors professionals are trained to use if need be, such as:

Verbal protocol, neural network, utility theory, preference judgments, psycho-physical methods, operational research, prototyping, information theory, cost/benefit methods, various statistical modeling packages, and expert systems.

There are those that are intrinsic to artificial intelligence methodology such as:

Fuzzy logic, robotics, discrimination nets, pattern matching, knowledge representation, frames, schemata, semantic network, relational databases, searching methods, zero-sum games theory, logical reasoning methods, probabilistic reasoning, learning methods, natural language understanding, image formation and acquisition, connectedness, cellular logic, problem solving techniques, means-end analysis, geometric reasoning system, algebraic reasoning system.


This resource is designed primarily for beginner to intermediate data scientists or analysts who are interested in identifying and applying machine learning algorithms to address the problems of their interest.

A typical question asked by a beginner, when facing a wide variety of machine learning algorithms, is “which algorithm should I use?” The answer to the question varies depending on many factors, including:

  • The size, quality, and nature of data.
  • The available computational time.
  • The urgency of the task.
  • What you want to do with the data.

Even an experienced data scientist cannot tell which algorithm will perform the best before trying different algorithms.

We are not advocating a one and done approach, but we do hope to provide some guidance on which algorithms to try first depending on some clear factors.

The machine learning algorithm cheat sheet

Flow chart shows which algorithms to use when

The machine learning algorithm cheat sheet helps you to choose from a variety of machine learning algorithms to find the appropriate algorithm for your specific problems.

This article walks you through the process of how to use the sheet.

Since the cheat sheet is designed for beginner data scientists and analysts, we will make some simplified assumptions when talking about the algorithms.

The algorithms recommended here result from compiled feedback and tips from several data scientists and machine learning experts and developers.

There are several issues on which we have not reached an agreement and for these issues we try to highlight the commonality and reconcile the difference.

Additional algorithms will be added later as our library grows to encompass a more complete set of available methods.

How to use the cheat sheet

Read the path and algorithm labels on the chart as “If <path label> then use <algorithm>.” For example:

  • If you want to perform dimension reduction then use principal component analysis.
  • If you need a numeric prediction quickly, use decision trees or linear regression.
  • If you need a hierarchical result, use hierarchical clustering.

Sometimes more than one branch will apply, and other times none of them will be a perfect match.

It’s important to remember these paths are intended to be rule-of-thumb recommendations, so some of the recommendations are not exact.

Several data scientists I talked with said that the only sure way to find the very best algorithm is to try all of them.

(Is that a process to find an algorithm that matches your world view on an issue? Or an answer that satisfies your boss?)

Types of machine learning algorithms

This section provides an overview of the most popular types of machine learning. If you’re familiar with these categories and want to move on to discussing specific algorithms, you can skip this section and go to “When to use specific algorithms” below.

Supervised learning

Supervised learning algorithms make predictions based on a set of examples.

For example, historical sales can be used to estimate future prices. With supervised learning, you have an input variable that consists of labeled training data and a desired output variable.

You use an algorithm to analyze the training data to learn the function that maps the input to the output. This inferred function maps new, unknown examples by generalizing from the training data to anticipate results in unseen situations.

  • Classification: When the data are being used to predict a categorical variable, supervised learning is also called classification. This is the case when assigning a label or indicator, either “dog” or “cat”, to an image. When there are only two labels, this is called binary classification. When there are more than two categories, the problem is called multi-class classification (see the sketch after this list).
  • Regression: When predicting continuous values, the problems become a regression problem.
  • Forecasting: This is the process of making predictions about the future based on past and present data. It is most commonly used to analyze trends. A common example might be estimating next year’s sales based on the sales of the current year and previous years.
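As a minimal sketch of the classification/regression distinction above (using scikit-learn, which this article does not prescribe, and toy data invented for illustration):

# Hypothetical illustration: the same labeled-data idea drives both tasks.
from sklearn.linear_model import LinearRegression, LogisticRegression

X = [[1], [2], [3], [4], [5]]            # one input feature per example

# Classification: predict a categorical label (0 or 1).
y_class = [0, 0, 0, 1, 1]
clf = LogisticRegression().fit(X, y_class)
print(clf.predict([[2.5]]))              # -> a class label, e.g. [0]

# Regression: predict a continuous value.
y_reg = [1.1, 1.9, 3.2, 3.9, 5.1]
reg = LinearRegression().fit(X, y_reg)
print(reg.predict([[2.5]]))              # -> a number, close to 2.5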

Semi-supervised learning

The challenge with supervised learning is that labeling data can be expensive and time consuming. If labels are limited, you can use unlabeled examples to enhance supervised learning. Because the machine is not fully supervised in this case, we say the machine is semi-supervised. With semi-supervised learning, you use unlabeled examples with a small amount of labeled data to improve the learning accuracy.
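A small sketch of that idea with scikit-learn’s label-propagation tools (one possible realization, not mentioned in the article); by that library’s convention, unlabeled examples are marked with -1:

from sklearn.semi_supervised import LabelSpreading

# Six examples, only two of them labeled; -1 marks the unlabeled ones.
X = [[1.0], [1.2], [0.9], [5.0], [5.2], [4.9]]
y = [0, -1, -1, 1, -1, -1]

model = LabelSpreading().fit(X, y)
print(model.transduction_)   # labels inferred for all six examples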

Unsupervised learning

When performing unsupervised learning, the machine is presented with totally unlabeled data. It is asked to discover the intrinsic patterns that underlie the data, such as a clustering structure, a low-dimensional manifold, or a sparse tree or graph.

  • Clustering: Grouping a set of data examples so that examples in one group (or one cluster) are more similar (according to some criteria) than those in other groups. This is often used to segment the whole dataset into several groups. Analysis can be performed in each group to help users to find intrinsic patterns.
  • Dimension reduction: Reducing the number of variables under consideration. In many applications, the raw data have very high dimensional features and some features are redundant or irrelevant to the task. Reducing the dimensionality helps to find the true, latent relationship.

 Reinforcement learning

Reinforcement learning analyzes and optimizes the behavior of an agent based on feedback from the environment. Machines try different scenarios to discover which actions yield the greatest reward, rather than being told which actions to take. Trial and error and delayed rewards distinguish reinforcement learning from other techniques.

Considerations when choosing an algorithm

When choosing an algorithm, always take these aspects into account: accuracy, training time and ease of use. Many users put the accuracy first, while beginners tend to focus on algorithms they know best.

When presented with a dataset, the first thing to consider is how to obtain results, no matter what those results might look like. Beginners tend to choose algorithms that are easy to implement and can obtain results quickly. This works fine, as long as it is just the first step in the process. Once you obtain some results and become familiar with the data, you may spend more time using more sophisticated algorithms to strengthen your understanding of the data, hence further improving the results.

Even in this stage, the best algorithms might not be the methods that have achieved the highest reported accuracy, as an algorithm usually requires careful tuning and extensive training to obtain its best achievable performance.

When to use specific algorithms

Looking more closely at individual algorithms can help you understand what they provide and how they are used. These descriptions provide more details and give additional tips for when to use specific algorithms, in alignment with the cheat sheet.

Linear regression and Logistic regression    

Linear regression is an approach for modeling the relationship between a continuous dependent variable $y$ and one or more predictors $X$. The relationship between $y$ and $X$ can be linearly modeled as $y = \beta^T X + \epsilon$. Given the training examples $\{x_i, y_i\}_{i=1}^{N}$, the parameter vector $\beta$ can be learnt.

If the dependent variable is not continuous but categorical, linear regression can be transformed to logistic regression using a logit link function. Logistic regression is a simple, fast yet powerful classification algorithm.

Here we discuss the binary case, where the dependent variable $y$ only takes binary values $\{y_i \in \{-1, 1\}\}_{i=1}^{N}$ (this can be easily extended to multi-class classification problems).

In logistic regression we use a different hypothesis class: we try to predict the probability that a given example belongs to the “1” class versus the probability that it belongs to the “-1” class. Specifically, we try to learn a function of the form $p(y_i = 1 \mid x_i) = \sigma(\beta^T x_i)$ and $p(y_i = -1 \mid x_i) = 1 - \sigma(\beta^T x_i)$.

Here $\sigma(x) = \frac{1}{1 + \exp(-x)}$ is the sigmoid function. Given the training examples $\{x_i, y_i\}_{i=1}^{N}$, the parameter vector $\beta$ can be learnt by maximizing the log-likelihood of $\beta$ given the data set.
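As a concrete (if simplified) sketch of fitting $\beta$, here is scikit-learn’s implementation applied to toy data; the library and the data are my choices for illustration, not part of the original article:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy binary data: x below 0 tends to class -1, above 0 to class +1.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = np.where(X[:, 0] + 0.3 * rng.normal(size=200) > 0, 1, -1)

# fit() maximizes the (regularized) log-likelihood of beta given the data.
clf = LogisticRegression().fit(X, y)
print(clf.coef_, clf.intercept_)     # the learned beta
print(clf.predict_proba([[0.5]]))    # sigmoid(beta^T x), per class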

Linear SVM and kernel SVM

A support vector machine (SVM) training algorithm finds the classifier represented by the normal vector $w$ and bias $b$ of a hyperplane. This hyperplane (boundary) separates different classes by as wide a margin as possible. The problem can be converted into a constrained optimization problem:

$$\min_{w} \|w\| \quad \text{subject to} \quad y_i (w^T X_i - b) \geq 1, \quad i = 1, \ldots, n.$$

Linear and kernel SVM charts

When the classes are not linearly separable, a kernel trick can be used to map a non-linearly separable space into a higher dimension linearly separable space.

When most dependent variables are numeric, logistic regression and SVM should be the first try for classification. These models are easy to implement, their parameters are easy to tune, and their performance is also pretty good. So these models are appropriate for beginners.
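A short sketch contrasting the linear and kernel cases (scikit-learn; the dataset and parameters are arbitrary choices for illustration):

from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Concentric circles: no straight line can separate the two classes.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear_svm = SVC(kernel="linear").fit(X, y)
kernel_svm = SVC(kernel="rbf").fit(X, y)   # kernel trick: implicit higher dim

print(linear_svm.score(X, y))   # poor, near 0.5: a line cannot split circles
print(kernel_svm.score(X, y))   # near 1.0 on this toy data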

Trees and ensemble trees

A decision tree prediction model.

Decision trees, random forest and gradient boosting are all algorithms based on decision trees.

There are many variants of decision trees, but they all do the same thing – subdivide the feature space into regions with mostly the same label. Decision trees are easy to understand and implement.

However, they tend to overfit the data when we exhaust the branches and go very deep with the trees. Random forest and gradient boosting are two popular ways to use tree algorithms to achieve good accuracy while overcoming the over-fitting problem.
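A brief sketch of the three variants side by side (scikit-learn, toy data; the parameter values are arbitrary):

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A single deep tree tends to overfit; the ensembles average that out.
for model in (DecisionTreeClassifier(random_state=0),
              RandomForestClassifier(n_estimators=100, random_state=0),
              GradientBoostingClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, model.score(X_te, y_te))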

Neural networks and deep learning

Neural networks flourished in the mid-1980s due to their parallel and distributed processing ability.

Research in this field was impeded by the ineffectiveness of the back-propagation training algorithm that is widely used to optimize the parameters of neural networks. Support vector machines (SVM) and other simpler models, which can be easily trained by solving convex optimization problems, gradually replaced neural networks in machine learning.

In recent years, new and improved training techniques such as unsupervised pre-training and layer-wise greedy training have led to a resurgence of interest in neural networks.

Increasingly powerful computational capabilities, such as graphical processing unit (GPU) and massively parallel processing (MPP), have also spurred the revived adoption of neural networks. The resurgent research in neural networks has given rise to the invention of models with thousands of layers.

A neural network

Shallow neural networks have evolved into deep learning neural networks.

Deep neural networks have been very successful for supervised learning.  When used for speech and image recognition, deep learning performs as well as, or even better than, humans.

Applied to unsupervised learning tasks, such as feature extraction, deep learning also extracts features from raw images or speech with much less human intervention.

A neural network consists of three parts: input layer, hidden layers and output layer. 

The training samples define the input and output layers. When the output layer is a categorical variable, then the neural network is a way to address classification problems. When the output layer is a continuous variable, then the network can be used to do regression.

When the output layer is the same as the input layer, the network can be used to extract intrinsic features.

The number of hidden layers defines the model complexity and modeling capacity.
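A compact sketch of those three parts in code (scikit-learn’s MLPClassifier; the layer sizes are arbitrary choices for illustration):

from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Input layer: the 20 features of X. Hidden layers: 64 then 32 units.
# Output layer: a categorical variable, so this is classification.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                    random_state=0).fit(X, y)
print(net.score(X, y))
# For a continuous output layer (regression), MLPRegressor is the analogue.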

Deep Learning: What it is and why it matters

k-means/k-modes, GMM (Gaussian mixture model) clustering

K-means/k-modes and GMM (Gaussian mixture model) clustering aim to partition n observations into k clusters. K-means defines a hard assignment: each sample is associated with one and only one cluster. GMM, however, defines a soft assignment: each sample has a probability of being associated with each cluster. Both algorithms are simple and fast enough for clustering when the number of clusters k is given.
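A short sketch of the hard/soft distinction (scikit-learn; the data and k are invented for the example):

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# K-means: hard assignment, one cluster id per sample.
print(KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)[:5])

# GMM: soft assignment, one probability per (sample, cluster) pair.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
print(gmm.predict_proba(X)[:2])   # each row sums to 1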

DBSCAN


A DBSCAN illustration

When the number of clusters k is not given, DBSCAN (density-based spatial clustering) can be used by connecting samples through density diffusion.
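A sketch (scikit-learn; eps and min_samples are density knobs that must be tuned to your data):

from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# No k required: clusters grow by connecting density-reachable samples.
labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)
print(set(labels))   # cluster ids; -1, if present, marks noise points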

Hierarchical clustering

Hierarchical partitions can be visualized using a tree structure (a dendrogram). Hierarchical clustering does not need the number of clusters as an input, and its partitions can be viewed at different levels of granularity (i.e., clusters can be refined or coarsened) by cutting the tree at different depths.
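A minimal sketch using SciPy (one common toolchain for dendrograms, not prescribed by the article):

from scipy.cluster.hierarchy import dendrogram, fcluster, linkage
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=50, centers=3, random_state=0)

Z = linkage(X, method="ward")          # build the full merge tree once
tree = dendrogram(Z, no_plot=True)     # dendrogram data (plot it if you like)

# Cut the same tree at different depths to coarsen or refine the partition.
print(fcluster(Z, t=2, criterion="maxclust")[:10])   # 2 clusters
print(fcluster(Z, t=5, criterion="maxclust")[:10])   # 5 clusters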

PCA, SVD and LDA

We generally do not want to feed a large number of features directly into a machine learning algorithm since some features may be irrelevant or the “intrinsic” dimensionality may be smaller than the number of features. Principal component analysis (PCA), singular value decomposition (SVD), and latent Dirichlet allocation (LDA) all can be used to perform dimension reduction.

PCA is an unsupervised clustering method which maps the original data space into a lower dimensional space while preserving as much information as possible. The PCA basically finds a subspace that most preserves the data variance, with the subspace defined by the dominant eigenvectors of the data’s covariance matrix.

The SVD is related to PCA in the sense that the SVD of the centered data matrix (features versus samples) provides the dominant left singular vectors that define the same subspace found by PCA. However, SVD is a more versatile technique, as it can also do things that PCA cannot.

For example, the SVD of a user-versus-movie matrix is able to extract the user profiles and movie profiles which can be used in a recommendation system. In addition, SVD is also widely used as a topic modeling tool, known as latent semantic analysis, in natural language processing (NLP).
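
A minimal sketch of the user-versus-movie idea with NumPy; the tiny ratings matrix and the rank k are illustrative assumptions:

```python
# Rank-2 SVD of a 4-user x 4-movie ratings matrix: rows of user_profiles
# describe users, columns of movie_profiles describe movies, and their
# product reconstructs approximate ratings.
import numpy as np

R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
user_profiles = U[:, :k] * s[:k]
movie_profiles = Vt[:k, :]
print((user_profiles @ movie_profiles).round(1))   # approximate ratings
```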

A related technique in NLP is latent Dirichlet allocation (LDA). LDA is a probabilistic topic model that decomposes documents into topics in much the way a Gaussian mixture model (GMM) decomposes continuous data into Gaussian densities. Unlike the GMM, however, LDA models discrete data (words in documents), and it constrains the topics to be a priori distributed according to a Dirichlet distribution.
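
A minimal LDA sketch with scikit-learn; the four-document toy corpus and the choice of two topics are illustrative assumptions:

```python
# LDA works on discrete word counts, so the documents are vectorized
# with CountVectorizer first; transform() returns per-document topic
# mixtures.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["the cat sat on the mat",
        "dogs and cats are pets",
        "stocks fell as markets slid",
        "investors sold shares and stocks"]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
print(lda.transform(counts).round(2))   # topic mixture per document
```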

Conclusions

This workflow is easy to follow. The takeaway messages when trying to solve a new problem are:

  • Define the problem. What problems do you want to solve?
  • Start simple. Be familiar with the data and the baseline results.
  • Then try something more complicated.
Dr. Hui Li is a Principal Staff Scientist of Data Science Technologies at SAS. Her current work focuses on Deep Learning, Cognitive Computing and SAS recommendation systems in SAS Viya. She received her PhD degree and Master's degree in Electrical and Computer Engineering from Duke University.

Before joining SAS, she worked at Duke University as a research scientist and at Signal Innovation Group, Inc. as a research engineer. Her research interests include machine learning for big, heterogeneous data, collaborative filtering recommendations, Bayesian statistical modeling, and reinforcement learning.

 

Tidbits and notes posted on FB and Twitter. Part 193

Note: I take notes on books I read, comment on events, and edit sentences to fit my style. I pay attention to researched documentaries and serious links I receive. The page is long and growing like crazy, and the sections I post contain month-old events that are worth refreshing your memory.

The ancient Greeks came up with a system called the Sieve of Eratosthenes for easily determining which numbers are prime. It works by simply eliminating the multiples of each prime number. Any numbers left over will be prime. (The ancient Greeks couldn’t do this in gifs, though.)
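
The sieve is simple enough to fit in a few lines of Python; here is a minimal version of the procedure just described:

```python
def sieve(n):
    """Return all primes up to n by crossing out multiples of each prime."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

print(sieve(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```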

"Primes seem to me to be these un-arbitrary, unique, fated things. It cannot be coincidence that the mythical numbers of storytelling, like 3, 7, and 13, are all primes. The lower-end primes have incredible resonance in fiction and art." — Robin Sloan

Robin Sloan wrote the bestselling mystery Mr. Penumbra’s 24-Hour Bookstore, in which every number was a prime (except 24 of course).

Usage of Prime Numbers:

1) Getting into Gear

Before primes were used to encrypt information, their only true practical use was at the auto-body shop. The gears in a car (and in every other machine) work most reliably when the teeth are arranged by prime numbers. When gears have 13, 17, or 23 teeth, every gear combination gets used, which helps to evenly distribute dirt, oil, and overall wear and tear.
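
The underlying arithmetic is that coprime tooth counts maximize the cycle before the same tooth pairs meet again. A small sketch (the tooth counts are illustrative assumptions):

```python
# With coprime tooth counts (e.g., a prime 13 against 20), every
# tooth-pair combination occurs once per lcm(a, b) rotations; with a
# shared factor, most pairings never occur and wear concentrates.
from math import lcm

def pairings(a_teeth, b_teeth):
    seen = set()
    for step in range(lcm(a_teeth, b_teeth)):
        seen.add((step % a_teeth, step % b_teeth))
    return len(seen)

print(pairings(13, 20), 13 * 20)   # 260 of 260 pairings used
print(pairings(12, 20), 12 * 20)   # only 60 of 240 pairings used
```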

2) Talking with Aliens

In his sci-fi novel Contact, Carl Sagan suggested that humans could communicate with aliens through prime numbers. This wasn't a new idea. In the summer of 1960, the National Radio Astronomy Observatory scanned for intelligent extraterrestrial messages encoded in prime numbers. Years later, the astronomer Frank Drake proposed that humans could communicate with aliens by transmitting "semiprimes," that is, products of two prime numbers, into space.

3) Making Nature’s Music

The French modernist composer Olivier Messiaen wrote music containing transcribed birdsong and prime numbers, which helped create unusual and unpredictable rhythms, note durations, and time signatures. Messiaen, a Roman Catholic, said that musical prime numbers represented the indivisibility of God. His Liturgie de cristal is a grand example of Messiaen putting prime numbers into practice.

A "prime-numbered life cycle had the most successful survival strategy" in nature, since cycles of boom and bust in resources are consistent and predictable.

An emirp is a prime number that, when its decimal digits are reversed, results in a different prime. Think 13, 17, 31, 37, 71, 73, 79 …. According to Wikipedia, the largest known emirp is 10^10006+941992101×10^4999+1.
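
A minimal emirp check in Python, using simple trial division (sufficient at this toy scale):

```python
def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def is_emirp(n):
    # Prime whose decimal reversal is a *different* prime (so 11 fails).
    r = int(str(n)[::-1])
    return is_prime(n) and is_prime(r) and r != n

print([n for n in range(2, 100) if is_emirp(n)])
# [13, 17, 31, 37, 71, 73, 79, 97]
```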

Mersenne prime numbers, named after the 17th-century French monk Marin Mersenne, are a special breed: they are prime numbers that are one less than a power of two (that is, of the form 2^p − 1).

Jonathan Pace is one of the volunteers participating in the Great Internet Mersenne Prime Search (GIMPS). The prime he discovered (notated as 2^77,232,917 − 1) contains 23,249,425 digits, nearly a million digits longer than the previous record holder.

Since 1996, GIMPS volunteers have discovered 16 new Mersenne primes. "There are tens of thousands of computers involved in the search. On average, they are finding less than one a year." (Pace was awarded $3,000 for 14 years of work.)
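
The classical primality test behind searches like GIMPS is the Lucas-Lehmer test, which is specific to Mersenne numbers. A minimal sketch:

```python
# Lucas-Lehmer: M_p = 2**p - 1 is prime iff s_(p-2) == 0 (mod M_p),
# where s_0 = 4 and s_(i+1) = s_i**2 - 2.
def lucas_lehmer(p):
    if p == 2:
        return True
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in [2, 3, 5, 7, 11, 13, 17, 19] if lucas_lehmer(p)])
# [2, 3, 5, 7, 13, 17, 19] -- 2**11 - 1 = 2047 = 23 * 89 is composite
```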

The mathematician G. H. Hardy wrote that he avoided "practical" mathematics: it was dull and too often exploited for military gain. Yet his discoveries in prime numbers turned out to be useful: they have aided the fields of genetics research, quantum physics, and thermodynamics. Today, his research on the distribution of prime numbers is the bedrock of our current understanding of how primes operate.

Le Chuiche (the church sexton?)

He has now joined the opposition: Al Moustakbal will not ally with Hezbollah in the elections. That does not mean no alliance with those allied with the Resistance (Moukawamat). We are all resistance.

3,700-year-old Babylonian tablet rewrites the history of maths – and shows the Greeks did not develop trigonometry

A 3,700-year-old clay tablet has shown that the Babylonians developed trigonometry 1,500 years before the Greeks and were using a sophisticated method of mathematics that could change how we calculate today.

The tablet, known as Plimpton 322, was discovered in the early 1900s in southern Iraq by the American archaeologist and diplomat Edgar Banks, who was the inspiration for Indiana Jones.

The true meaning of the tablet has eluded experts until now, but new research by the University of New South Wales, Australia, has shown it is the world’s oldest and most accurate trigonometric table, which was probably used by ancient architects to construct temples, palaces and canals.

 

[Image: The tablet is broken and probably had more rows, experts believe. Credit: UNSW]

[Image: Dr Daniel Mansfield with the 3,700-year-old trigonometric table]

Unlike today's trigonometry, however, Babylonian mathematics used a base-60, or sexagesimal, system rather than the base-10 system used today. Because 60 has far more divisors (it divides evenly by 3, for instance, which 10 does not), the experts studying the tablet found that the calculations are far more accurate.
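
The divisibility advantage can be made precise: a fraction 1/n has a finite, exact expansion in base b exactly when every prime factor of n divides b. A small sketch of that standard fact:

```python
# Count how many of 1/2 ... 1/60 terminate exactly in each base.
from math import gcd

def terminates(n, base):
    # Strip from n every factor it shares with the base; 1/n has a
    # finite expansion iff nothing else remains.
    g = gcd(n, base)
    while g > 1:
        while n % g == 0:
            n //= g
        g = gcd(n, base)
    return n == 1

print(sum(terminates(n, 10) for n in range(2, 61)))   # 11 in base 10
print(sum(terminates(n, 60) for n in range(2, 61)))   # 25 in base 60
```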

“Our research reveals that Plimpton 322 describes the shapes of right-angle triangles using a novel kind of trigonometry based on ratios, Not angles and circles,” said Dr Daniel Mansfield of the School of Mathematics and Statistics in the UNSW Faculty of Science.

“It is a fascinating mathematical work that demonstrates undoubted genius. The tablet not only contains the world’s oldest trigonometric table; it is also the only completely accurate trigonometric table, because of the very different Babylonian approach to arithmetic and geometry.

“This means it has great relevance for our modern world. Babylonian mathematics may have been out of fashion for more than 3000 years, but it has possible practical applications in surveying, computer graphics and education.

“This is a rare example of the ancient world teaching us something new.”

The Greek astronomer Hipparchus, who lived around 120BC, has long been regarded as the father of trigonometry, with his ‘table of chords’ on a circle considered the oldest trigonometric table.

A trigonometric table allows a user to determine two unknown ratios of a right-angled triangle using just one known ratio. But the tablet is far older than Hipparchus, demonstrating that the Babylonians were already well advanced in complex mathematics far earlier.
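
To see what a table of ratios buys you: given one ratio of a right-angled triangle, Pythagoras' theorem fixes the others. A minimal sketch (the function name is illustrative):

```python
# From opposite/hypotenuse alone, recover the other two ratios via
# a^2 + b^2 = c^2.
from math import sqrt

def ratios_from_opp_over_hyp(r):
    adj_over_hyp = sqrt(1 - r * r)
    return r, adj_over_hyp, r / adj_over_hyp   # opp/hyp, adj/hyp, opp/adj

print(ratios_from_opp_over_hyp(3 / 5))   # the 3-4-5 triangle: 0.6, 0.8, 0.75
```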

Babylon, which was in modern-day Iraq, was once one of the most advanced cultures in the world.

The tablet, which is thought to have come from the ancient Sumerian city of Larsa, has been dated to between 1822 and 1762 BC. It is now in the Rare Book and Manuscript Library at Columbia University in New York.

“Plimpton 322 predates Hipparchus by more than 1000 years,” says Dr Wildberger.

“It opens up new possibilities not just for modern mathematics research, but also for mathematics education. With Plimpton 322 we see a simpler, more accurate trigonometry that has clear advantages over our own.

“A treasure-trove of Babylonian tablets exists, but only a fraction of them have been studied yet. The mathematical world is only waking up to the fact that this ancient but very sophisticated mathematical culture has much to teach us.”

The 15 rows on the tablet describe a sequence of 15 right-angle triangles, which are steadily decreasing in inclination.

The left-hand edge of the tablet is broken but the researchers believe there were originally six columns and that the tablet was meant to be completed with 38 rows.

“Plimpton 322 was a powerful tool that could have been used for surveying fields or making architectural calculations to build palaces, temples or step pyramids,” added Dr Mansfield.

The new study is published in Historia Mathematica, the official journal of the International Commission on the History of Mathematics.

How can we learn to love statistics? Interactive graphs?

Think you’re good at guessing stats? Guess again. Whether we consider ourselves math people or not, our ability to understand and work with numbers is terribly limited, says data visualization expert Alan Smith. Smith explores the mismatch between what we know and what we think we know.

Alan Smith, data visualisation editor, uses interactive graphics and statistics to breathe new life into how data is presented.
Filmed April 2016

Back in 2003, the UK government carried out a survey. And it was a survey that measured levels of numeracy in the population.

And they were shocked to find out that for every 100 working-age adults in the country, 47 of them lacked Level 1 numeracy skills. Now, Level 1 numeracy skills: that's a low-end GCSE score. It's the ability to deal with fractions, percentages and decimals.

This figure prompted a lot of hand-wringing in Whitehall. Policies were changed, investments were made, and then they ran the survey again in 2011. So can you guess what happened to this number? It went up to 49.

0:57 And in fact, when I reported this figure in the FT, one of our readers joked and said, "This figure is only shocking to 51 percent of the population."

But I preferred the reaction of a schoolchild when I presented this information at a school, who raised their hand and asked, "How do we know that the person who made that number isn't one of the 49 percent either?"

1:20 (Laughter)

So clearly, there’s a numeracy issue, because these are important skills for life, and a lot of the changes that we want to introduce in this century involve us becoming more comfortable with numbers. (Can’t learn numeracy without using a pen and pencil?)

It's not just an English problem. The OECD this year released some figures looking at numeracy in young people, and leading the way, the USA: nearly 40 percent of young people in the US have low numeracy. Now, England is there too, but there are seven OECD countries with figures above 20 percent. That is a problem, because it doesn't have to be that way. If you look at the far end of this graph, you can see the Netherlands and Korea are in single figures. So there's definitely a numeracy problem that we want to address. (It is the method used to learn numeracy.)

As useful as studies like these are, I think we risk herding people inadvertently into one of two categories: that there are two kinds of people, those who are comfortable with numbers and can do numbers, and those who can't.

And what I’m trying to talk about here today is to say that I believe that is a false dichotomy. It’s not an immutable pairing. I think you don’t have to have tremendously high levels of numeracy to be inspired by numbers, and that should be the starting point to the journey ahead.

one of the ways in which we can begin that journey, for me, is looking at statistics. Now, I am the first to acknowledge that statistics has got somewhat of an image problem.

2:52 (Laughter)

It’s the part of mathematics that even mathematicians don’t particularly like, because whereas the rest of maths is all about precision and certainty, statistics is almost the reverse of that.

But actually, I was a late convert to the world of statistics myself. If you’d asked my undergraduate professors what two subjects would I be least likely to excel in after university, they’d have told you statistics and computer programming, and yet here I am, about to show you some statistical graphics that I programmed. (You think you comprehended probability and statistics, but you forget them if Not practiced)

What inspired that change in me? What made me think that statistics was actually an interesting thing? It's really because statistics are about us.

If you look at the etymology of the word statistics, it’s the science of dealing with data about the state or the community that we live in. So statistics are about us as a group, not us as individuals. And I think as social animals, we share this fascination about how we, as individuals, relate to our groups, to our peers. And statistics in this way are at their most powerful when they surprise us.

There have been some really wonderful surveys carried out by Ipsos MORI in the last few years. They did a survey of over 1,000 adults in the UK and asked: for every 100 people in England and Wales, how many of them are Muslim? Now, the average answer from this survey, which was supposed to be representative of the total population, was 24. That's what people thought. British people think 24 out of every 100 people in the country are Muslim. Official figures reveal that figure to be about five. So there's this big variation between what we think, our perception, and the reality as given by statistics. And I think that's interesting. What could possibly be causing that misperception?

I was so thrilled with this study, I started to take questions out in presentations. I was referring to it. Now, I did a presentation at St. Paul’s School for Girls in Hammersmith, and I had an audience rather like this, except it was comprised entirely of sixth-form girls.

And I said, “Girls, how many teenage girls do you think the British public think get pregnant every year?” And the girls were apoplectic when I said the British public think that 15 out of every 100 teenage girls get pregnant in the year. And they had every right to be angry, because in fact, I’d have to have closer to 200 dots before I could color one in, in terms of what the official figures tell us.

And rather like numeracy, this is not just an English problem. Ipsos MORI expanded the survey in recent years to go across the world. And so, they asked Saudi Arabians, for every 100 adults in your country, how many of them are overweight or obese? And the average answer from the Saudis was just over a quarter. That’s what they thought. Just over a quarter of adults are overweight or obese. The official figures show, actually, it’s nearer to three-quarters.

5:56 (Laughter)

5:57 So again, a big variation.

I love this one: they asked the Japanese, for every 100 Japanese people, how many of them live in rural areas? The average was about a 50-50 split, just over halfway. They thought 56 out of every 100 Japanese people lived in rural areas. The official figure is seven.

So extraordinary variations, and surprising to some, but not surprising to people who have read the work of Daniel Kahneman, for example, the Nobel-winning economist. He and his colleague, Amos Tversky, spent years researching this disjoint between what people perceive and the reality, the fact that people are actually pretty poor intuitive statisticians. (I read many of their research papers in the late 80’s)

And there are many reasons for this. Individual experiences, certainly, can influence our perceptions, but so, too, can things like the media reporting things by exception, rather than what’s normal. Kahneman had a nice way of referring to that. He said, “We can be blind to the obvious” — so we’ve got the numbers wrong — “but we can be blind to our blindness about it.” And that has enormous repercussions for decision making.

At the statistics office, while this was all going on, I thought this was really interesting. I said, this is clearly a global problem, but maybe geography is the issue here.

These were questions that were all about how well you know your country. So in this case, it's how well do you know 64 million people? Not very well, it turns out. I can't do that. So I had an idea, which was to take this same sort of approach but in a very local sense. Is this a local thing? If we reframe the questions and ask, how well do you know your local area, would your answers be any more accurate?

I devised a quiz: How well do you know your area? It’s a simple Web app. You put in a post code and then it will ask you questions based on census data for your local area. And I was very conscious in designing this. I wanted to make it open to the widest possible range of people, not just the 49 percent who can get the numbers.

I wanted everyone to engage with it. So for the design of the quiz, I was inspired by the isotypes of Otto Neurath from the 1920s and ’30s. Now, these are methods for representing numbers using repeating icons. And the numbers are there, but they sit in the background. So it’s a great way of representing quantity without resorting to using terms like “percentage,” “fractions” and “ratios.”

So here’s the quiz. The layout of the quiz is, you have your repeating icons on the left-hand side there, and a map showing you the area we’re asking you questions about on the right-hand side. There are 7 questions. Each question, there’s a possible answer between zero and a hundred, and at the end of the quiz, you get an overall score between zero and a hundred.

And so because this is TEDxExeter, I thought we would have a quick look at the quiz for the first few questions about Exeter. And so the first question is: For every 100 people, how many are aged under 16? Now, I don't know Exeter very well at all, so I had a guess at this, but it gives you an idea of how this quiz works. You drag the slider to highlight your icons, and then just click "Submit" to answer, and we animate away the difference between your answer and reality. And it turns out I was a pretty terrible guesser: five.

How about the next question? This is asking about what the average age is, so the age at which half the population are younger and half the population are older. (This is the definition of the median) And I thought 35 — that sounds middle-aged to me.

9:35 (Laughter)

9:39 Actually, in Exeter, it’s incredibly young, and I had underestimated the impact of the university in this area. The questions get harder as you go through. So this one’s now asking about homeownership: For every 100 households, how many are owned with a mortgage or loan? And I hedged my bets here, because I didn’t want to be more than 50 out on the answer.

These questions get harder, because when you're in an area, when you're in a community, there are clues to whether a population is old or young; just by looking around the area, you can see it. Something like homeownership is much more difficult to see, so we revert to our own heuristics, our own biases about how many people we think own their own homes.

The truth is, when we published this quiz, the census data it's based on was already a few years old. We've had online applications that let you put in a post code and get statistics back for years. So in some senses, this was all a little bit old and not necessarily new. But I was interested to see what reaction we might get by gamifying the data the way we have, by using animation and playing on the fact that people have their own preconceptions.

It turns out, the reaction was more than I could have hoped for. It was a long-held ambition of mine to bring down a statistics website due to public demand.

11:06 (Laughter)

This URL contains the words “statistics,” “gov” and “UK,” which are three of people’s least favorite words in a URL. And the amazing thing about this was that the website came down at quarter to 10 at night, because people were actually engaging with this data of their own free will, using their own personal time.

I was very interested to see that we got something like a quarter of a million people playing the quiz within the space of 48 hours of launching it. And it sparked an enormous discussion online, on social media, which was largely dominated by people having fun with their misconceptions, which is something that I couldn’t have hoped for any better, in some respects. I also liked the fact that people started sending it to politicians. How well do you know the area you claim to represent? (All candidates to public office must go through such quizzes in their locality and the nation)

Then, just to finish, going back to the two kinds of people, I thought it would be really interesting to see how people who are good with numbers would do on this quiz. The national statistician of England and Wales, John Pullinger, you would expect to be pretty good. He got 44 for his own area.

12:16 (Laughter)

Jeremy Paxman — admittedly, after a glass of wine — 36. Even worse. It just shows you that the numbers can inspire us all. They can surprise us all.

12:31 So very often, we talk about statistics as being the science of uncertainty. My parting thought for today is: actually, statistics is the science of us. And that’s why we should be fascinated by numbers. 

