Adonis Diaries

Posts Tagged ‘umwelt’


Let’s stop pretending we know everything

We often fail to notice things that we are not expecting. Lisa Randall, Physicist

Can we please stop pretending we have the answers, or that we are on a knowledge home stretch where the main issues are settled with only scraps to be tidied up?

Dionne Lew posted

The reality is –

  1. We hardly know anything
  2. What we think we know changes constantly, often in astounding ways
  3. The best method we have for discovering facts, scientific method, is limited
  4. Science is not reality, but provides models of reality
  5. Science is robust, not unequivocal; it can produce wrong answers that are useful and seemingly right answers that are wrong
  6. What is real varies between systems, people and within ourselves
  7. We cannot even conceive of what is yet to be asked, making imagination as important as science for progress.

To claim to know for certain, in particular about issues that do not yield to testing, is unscientific and given history, likely unwise.

  1. We hardly know anything

Once we’d never heard of dark energy and dark matter; now we know they make up 95% of the cosmos, meaning less than 5% is made of what we once considered ‘normal’ matter.

(Dr Lisa Randall suggests that since dark matter does not interact with us it should have been called transparent.)

That’s a phenomenal revision of known versus unknown if you consider that dark energy was discovered fewer than 18 years ago.

Extrapolate broadly.

I am not saying we don’t know things, we do –

  1. If you boil an egg, the protein denatures, it goes hard
  2. Light travels at 299,792,458 meters per second
  3. Immunisation works
  4. Warfarin thins the blood
  5. Methanol can be fatal to an adult.

We know a lot about a lot of things.

But there is far more that we don’t know, and this should compel us to keep an open mind.

2. What we think we know changes constantly, often in astounding ways

Take the atom.

At school I learned atoms were ‘elementary particles’, the smallest stuff you could get. Elementary meant they could not be broken down more.

These days we think of an atom more like the mother-ship of a zoo of sub-particles, stuff so minute that some can’t be described as having size.

We found the atom was made of subatomic particles: the electron, the proton and the neutron, and that protons and neutrons could be divided further into quarks, and so on. Part of that zoo, the Higgs boson, decays almost as quickly as it forms and is so small that it’s not really described as having size, but instead as a ‘resonance’.

The once-elementary atom has become a giant.

Just this year we discovered a new subatomic particle.

But even in areas we consider well established, like botany and biology, we are revising what we know as more information, including from other fields, becomes available.

For example, we have known for a very long time that –

  • Enzymes are catalysts that speed up chemical reactions in the cells many times over
  • Plants soak up the sun (photosynthesis) and use it to grow
  • Birds migrate in winter.

That may be right, but we have reached these conclusions through partial insight.

  • Sometimes we know that something happens, but not why.
  • Or why something happens, but not how.
  • Or until there are good-enough tools we have no idea that it is happening at all.

Then we learn more.

In a fascinating talk, quantum physicist Jim Al-Khalili cites emerging (admittedly speculative) research suggesting that quantum effects may play a role in the macro world, a field known as quantum biology.

With respect to the above examples –

  • Enzymes may work fast because subatomic particles move from one part of a molecule to another by quantum tunnelling (Klinman)
  • The stored energy in a plant’s cells travels along multiple pathways simultaneously (quantum coherence) to find the fastest, most efficient path for the photons to reach the reaction centres where light energy is converted into chemical energy (Hildner)
  • In the eye of European robins, electrons are spatially separated but affect each other through entanglement and may change spin depending on the earth’s magnetic field; this could mean birds actually see the earth’s magnetic field as they fly (Benjamin).

You don’t have to understand this to understand its implications.

These examples should help us hold views more lightly, in particular when it comes to bigger, more esoteric questions that do not lend themselves to investigation.

They might induce a sense of astonishment, like –

  • Gee, I wonder if there could be even more to this, or
  • This isn’t necessarily the end of the road, or
  • Could there be something else underneath that again?

Instead, one more person is slapped back into place by someone who claims to know ‘the truth’ and have ‘the answer’ and who believes they are right, right, right and (more to the point) everyone else should agree.

3. Science is not reality, it’s a model of reality

Where does this overconfidence (or fear, or insecurity) about being right come from?

Belief is complex and is influenced by character, gender, race, upbringing, cultural environment and the political system to name only a few.

The unequivocal ‘I Know’ can come from faith, including the faith we have that science is the truth.

A scientific experiment, rather, produces data that supports or refutes a specific hypothesis. That hypothesis may be nestled within a more encompassing theory; the same idea applies.

An experiment can produce excellent or questionable data and anything in between depending on how it is set up and run.

There’s good science and bad science. Good science pokes holes in the bad and that is one of the many areas in which scientific rigour comes into its own.

For example, a systematic review of six self-controlled case series, two ecological studies, one case-crossover trial, five time-series trials, 17 case-control studies, 27 cohort studies and five randomised controlled trials covering over 15 million children is undeniably more robust than the now-discredited study that used a few subjects, false data and no statistics to claim vaccines caused autism.

In the context of what I am saying here, my point is that we may learn more in the future about vaccines and the diseases they protect us against; new ways of dealing with these illnesses may emerge; we may be able to tailor medications or their delivery systems to better suit individuals; we might discover that what we thought caused a symptom masked a deeper underlying factor. The door is not closed.

But eggs boil, E = mc², the liver needs vitamin K.

What I am saying is that science should not be confused with a fixed and immovable reality; it does not claim to be one, but that is often how it is understood, particularly by lay people.

Science is not reality, it’s an approach; scientific method is a process and one with limitations.

Now, in my experience, such is the hegemony of science (where I come from) that to acknowledge a limitation in the methodology, or even to clarify what it is and what it is not, can raise hackles.

  • Oh, so you don’t believe in science?
  • No, that is not what I said. I said scientific methodology has limitations, which scientists acknowledge.

We get all cross-eyed and froth-mouthed when someone pokes at a sacred cow.

Like when I wrote that rationality was a myth we should not aspire to and someone replied asking if I thought we should do away with encyclopaedias. No, I replied, facts are good, the better the facts, the better. (A point I had made in the post.)

4. The best method we have for discovering facts, scientific method, is itself limited

Saying there are limits to scientific methodology is neither anti-science nor unscientific nor does it undermine the respect I have for the rigours of science. It’s just a fact.

But somewhere along the line we’ve learned to associate scientific with ‘right’ and unscientific with ‘wrong’, whereas saying something is unscientific simply means it cannot be tested in ways that could refute it.

This can happen amongst other things because –

  1. We don’t have the right tools (Democritus speculated about atoms, but it took thousands of years before microscopes enabled us to see them; Einstein predicted gravitational waves but did not think we could build tools to detect them)
  2. There are too many options to test (according to Brian Greene there are 10^500 candidate shapes to test in string theory, a theory about the fundamental structure of the universe; abandoning string theory may not mean it’s wrong, only that it’s untestable)
  3. We use models that don’t adequately translate into humans; scientists like Mattison and Raza say billions are wasted in drug development because mouse models give false leads (many disagree); reconstructing human cells on a chip might offer a better option
  4. The hypothesis-model-test approach may be obsolete, data combined with applied mathematics may replace the need for semantic or causal analysis in some areas
  5. We can’t agree on the definition of what we want to discuss, the existence or not of God being a classic example
  6. We don’t know what question to ask to disprove our thesis; again, what hypothesis would you test for a God that wasn’t inane or anthropomorphic? No lightning strike after someone blasphemes? Hardly. That bad things happen? Humans do a perfectly good job of that on their own. You might say who needs God when we have the laws of physics (in which case, prove it), but others might say the laws of physics are God (in which case, prove it). That’s why atheism is unscientific: not because it’s right or wrong but because it can’t be tested and refuted.

Of course a critical limitation to scientific enquiry is that we can only test a question we ask.

5. Science is robust but not unequivocal; it produces wrong answers that are useful and seemingly right answers that are wrong

Robust is not a synonym for truth

Science has never claimed to offer unequivocal truths, only robust testing, but scientific findings are often presented as truth.

Our faith stems in part from a belief that science self-corrects by exposing false insights over time, but there’s an alarming degree to which it does not.

Wrong (or partial) answers can be useful

Newton’s laws describe how the world works but break down at the subatomic level; nevertheless, they are useful. We’ve used them to build bridges, planes and rockets.

Mendelian genetics told us genes were recessive or dominant and that traits were inherited. This helped us understand why we have brown or blue eyes and how to breed different-coloured flowers, amongst other things.

Now we know genes and proteins interact, and epigenetics has shown that environment can influence heritable traits, something we once thought impossible.

The environment modifies genes through chemical tags that attach to DNA and switch genes on and off. Some (albeit controversial) research into trauma suggests these tags are passed on, meaning the effects of trauma could be intergenerational. Potentially too those of love?

Seemingly right answers can be wrong

But even when all the ducks are lined up, when –

  • There’s something to test
  • Within testable parameters
  • Using powerful tools

We can still come up with the wrong answer.

For example, our best evidence tells us the universe’s expansion is accelerating (the scientists who discovered this were trying to work out the rate at which the expansion was slowing, a great example of scientific method in action, because their data refuted the hypothesis and the theory of accelerating expansion replaced it).

Accelerating expansion means galaxies are moving away from each other at increasing speed.

Brian Greene says eventually galaxies will be so far apart that light will not be able to travel fast enough between them to be seen. In that future if we looked out we’d see nothing and may assume nothing else existed.

The obvious question is whether things exist now that we don’t see in the same way?

We don’t need cosmological hypotheticals to tell us that this is the case.

Even within the prosaic routines of daily life, most things are hidden, because we can only detect what lies within our biological limits (although we build tools that go beyond them).

6. What is real varies between systems, people and within the self

Same system, different signal

I have written before about what neuroscientist David Eagleman calls the Umwelt, that an organism’s reality is defined by what it senses.

In other words –

  1. I can’t hear a dog whistle, but a dog can
  2. I can’t see ultraviolet light, but a bee can.

My reality is different from that of the dog and the bee.

However these worlds intersect. I blow the whistle and my dog comes back. An undetectable-to-me signal has tangible impact.

Are there other unknowns influencing behaviour? I imagine many.

Consider that there are billions of dark matter particles going through you right now.

You don’t see or hear or feel them because dark matter does not interact with us in any way we can detect. But it may. We don’t know yet and there aren’t the tools.

Same system, same signal

There’s no need to refer to interspecies examples to confirm that realities differ based on what is perceived.

What I hear varies from what you hear, depending on our health, our environment (if you don’t protect your ears on the workshop floor, say) and our age.

A simple hearing test shows some tones become inaudible as we age; the reality for a 5-year-old and a 50-year-old doing the same test is different, and both are right.

I think this should help us move from –

  • I heard it
  • No you didn’t.

To –

  • I heard it
  • I didn’t, but that doesn’t mean you didn’t.

We should not ask others to base their reality on our constraints.

7. We cannot even conceive of what is yet to be asked, making imagination as important as science for progress.

While the amount of data we have would stretch from here to the outer rims of the galaxy, we hardly know anything.

I think then that in relation to the big questions we should be more open and less certain that we are right.

Being open minded is not the same as relativism.

It’s not okay to rape a wo/man because your culture says so or to bomb a bus because other people don’t like your God.

Being open minded does not mean ‘anything goes’.

Nor is a lay view on epigenetics as informed as that of a person with a PhD on the effect of mutations in genes affecting homologous recombination on restriction enzyme-mediated and illegitimate recombination in Saccharomyces cerevisiae; expert opinion is expert for a reason.

Instead I am suggesting that we should be as wary of unquestioningly deferring to scientific authority as we were once compelled to do with religion.

Ironically, some of the harshest critics of old-style religious dogma share a similar mindset, that is, to claim to know the undisputed truth and scorn those who disagree. I understand their frustration.

Ignorance has driven some of the worst behaviour on earth, but so has intelligence. We must find a way to challenge dogma without becoming dogmatists.

Arrogance is arrogance, whether driven by religion, or science, social superiority, greed or ‘just because’.

Don’t dismiss those who ponder the unknown when history suggests most things are unknown.

The world is not divided into scientific fact on the one hand and a mashup of biased anecdote on the other. Things lie underneath and in between.

We cannot even conceive of what is yet to be asked.

We are constantly beginning again, off a higher knowledge base. This makes imagination as important as science for progress.

@dionnelew, I blog at theunderneathness.




Do we need more senses? Could we create a few more?

We are built out of very small stuff, and we are embedded in a very large cosmos, and the fact is that we are not very good at understanding reality at either of those scales, and that’s because our brains haven’t evolved to understand the world at that scale.

0:31 Instead, we’re trapped on this very thin slice of perception right in the middle.

But it gets strange, because even at that slice of reality that we call home, we’re not seeing most of the action that’s going on.

So take the colors of our world. These are light waves: electromagnetic radiation that bounces off objects and hits specialized receptors in the back of our eyes. But we’re not seeing all the waves out there. In fact, what we see is less than a ten-trillionth of what’s out there.

So you have radio waves and microwaves and X-rays and gamma rays passing through your body right now and you’re completely unaware of it, because you don’t come with the proper biological receptors for picking it up. There are thousands of cell phone conversations passing through you right now, and you’re utterly blind to it.

1:27 It’s not that these things are inherently unseeable.

Snakes include some infrared in their reality, and honeybees include ultraviolet in their view of the world, and of course we build machines in the dashboards of our cars to pick up on signals in the radio frequency range, and we built machines in hospitals to pick up on the X-ray range.

But you can’t sense any of those by yourself, at least not yet, because you don’t come equipped with the proper sensors.

1:58 Now, what this means is that our experience of reality is constrained by our biology, and that goes against the common sense notion that our eyes and our ears and our fingertips are just picking up the objective reality that’s out there. Instead, our brains are sampling just a little bit of the world.

2:21 Across the animal kingdom, different animals pick up on different parts of reality.

So in the blind and deaf world of the tick, the important signals are temperature and butyric acid;

in the world of the black ghost knifefish, its sensory world is lavishly colored by electrical fields; and

for the echolocating bat, its reality is constructed out of air compression waves. That’s the slice of their ecosystem that they can pick up on, and we have a word for this in science.

It’s called the umwelt, which is the German word for the surrounding world.

Presumably, every animal assumes that its umwelt is the entire objective reality out there, because why would you ever stop to imagine that there’s something beyond what you can sense? Instead, what we all do is accept reality as it’s presented to us.

3:18 Let’s do a consciousness-raiser on this.

Imagine that you are a bloodhound dog. Your whole world is about smelling. You’ve got a long snout that has 200 million scent receptors in it, and you have wet nostrils that attract and trap scent molecules, and your nostrils even have slits so you can take big nosefuls of air.

Everything is about smell for you. So one day, you stop in your tracks with a revelation. You look at your human owner and you think, “What is it like to have the pitiful, impoverished nose of a human? (Laughter) What is it like when you take a feeble little noseful of air? How can you not know that there’s a cat 100 yards away, or that your neighbor was on this very spot 6 hours ago?” 

4:09 So because we’re humans, we’ve never experienced that world of smell, so we don’t miss it, because we are firmly settled into our umwelt.

But the question is, do we have to be stuck there? So as a neuroscientist, I’m interested in the way that technology might expand our umwelt, and how that’s going to change the experience of being human.

4:37 We already know that we can marry our technology to our biology, because there are hundreds of thousands of people walking around with artificial hearing and artificial vision.

So the way this works is, you take a microphone and you digitize the signal, and you put an electrode strip directly into the inner ear.

Or, with the retinal implant, you take a camera and you digitize the signal, and then you plug an electrode grid directly into the optic nerve.

And as recently as 15 years ago, there were a lot of scientists who thought these technologies wouldn’t work. Why? It’s because these technologies speak the language of Silicon Valley, and it’s not exactly the same dialect as our natural biological sense organs. But the fact is that it works; the brain figures out how to use the signals just fine.

5:30 How do we understand that?  Here’s the big secret: Your brain is not hearing or seeing any of this.

Your brain is locked in a vault of silence and darkness inside your skull. All it ever sees are electrochemical signals that come in along different data cables, and this is all it has to work with, and nothing more. Now, amazingly, the brain is really good at taking in these signals and extracting patterns and assigning meaning, so that it takes this inner cosmos and puts together a story of this, your subjective world.

6:15 But here’s the key point: Your brain doesn’t know, and it doesn’t care, where it gets the data from. ( A very simplistic conjecture?)

Whatever information comes in, it just figures out what to do with it. And this is a very efficient kind of machine. It’s essentially a general purpose computing device, and it just takes in everything and figures out what it’s going to do with it, and that, I think, frees up Mother Nature to tinker around with different sorts of input channels.

6:48 So I call this the P.H. model of evolution, and I don’t want to get too technical here, but P.H. stands for Potato Head, and I use this name to emphasize that all these sensors that we know and love, like our eyes and our ears and our fingertips, these are merely peripheral plug-and-play devices: You stick them in, and you’re good to go.

The brain figures out what to do with the data that comes in. And when you look across the animal kingdom, you find lots of peripheral devices. So snakes have heat pits with which to detect infrared, and the ghost knifefish has electroreceptors, and the star-nosed mole has this appendage with 22 fingers on it with which it feels around and constructs a 3D model of the world, and many birds have magnetite so they can orient to the magnetic field of the planet. So what this means is that nature doesn’t have to continually redesign the brain. Instead, with the principles of brain operation established, all nature has to worry about is designing new peripherals.

8:00 What this means is this: The lesson that surfaces is that there’s nothing really special or fundamental about the biology that we come to the table with.

It’s just what we have inherited from a complex road of evolution. But it’s not what we have to stick with, and our best proof of principle of this comes from what’s called sensory substitution. And that refers to feeding information into the brain via unusual sensory channels, and the brain just figures out what to do with it.

8:34 Now, that might sound speculative, but the first paper demonstrating this was published in the journal Nature in 1969.

A scientist named Paul Bach-y-Rita put blind people in a modified dental chair, and he set up a video feed, and he put something in front of the camera, and then you would feel that poked into your back with a grid of solenoids. So if you wiggle a coffee cup in front of the camera, you’re feeling that in your back, and amazingly, blind people got pretty good at being able to determine what was in front of the camera just by feeling it in the small of their back.
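At its core, the Bach-y-Rita setup is a resolution-reducing map from camera pixels to a coarse grid of tactile actuators. Here is a minimal, hypothetical sketch of that mapping; the function name, grid sizes and 0-255 brightness scale are illustrative assumptions, not details of the original apparatus.

```python
# Hypothetical sketch: reduce a camera frame to a coarse grid of
# tactile intensities, in the spirit of Bach-y-Rita's chair.

def frame_to_tactile(frame, rows, cols):
    """Average a 2D brightness frame (values 0-255) into a rows x cols
    grid of actuator drive levels in the range 0.0-1.0."""
    h, w = len(frame), len(frame[0])
    bh, bw = h // rows, w // cols   # pixels per tactile cell
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            block = [frame[y][x]
                     for y in range(r * bh, (r + 1) * bh)
                     for x in range(c * bw, (c + 1) * bw)]
            row.append(sum(block) / len(block) / 255.0)
        grid.append(row)
    return grid

# A bright square in the top-left of an otherwise dark 4x4 frame
frame = [[255, 255, 0, 0],
         [255, 255, 0, 0],
         [0,   0,   0, 0],
         [0,   0,   0, 0]]
print(frame_to_tactile(frame, 2, 2))
```

Block-averaging is the simplest possible reduction; a real device would also have to cope with frame rate, contrast and actuator dynamics.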

Now, there have been many modern incarnations of this. The sonic glasses take a video feed right in front of you and turn that into a sonic landscape, so as things move around, and get closer and farther, it sounds like “Bzz, bzz, bzz.” It sounds like a cacophony, but after several weeks, blind people start getting pretty good at understanding what’s in front of them just based on what they’re hearing.

And it doesn’t have to be through the ears: this system uses an electrotactile grid on the forehead, so whatever’s in front of the video feed, you’re feeling it on your forehead.

Why the forehead? Because you’re not using it for much else.

9:50 The most modern incarnation is called the brainport, and this is a little electrogrid that sits on your tongue, and the video feed gets turned into these little electrotactile signals, and blind people get so good at using this that they can throw a ball into a basket, or they can navigate complex obstacle courses. They can come to see through their tongue.

Now, that sounds completely insane, right? But remember, all vision is electrochemical signals coursing around in your brain.

Your brain doesn’t know where the signals come from. It just figures out what to do with them.

10:33 So my interest in my lab is sensory substitution for the deaf, and this is a project I’ve undertaken with a graduate student in my lab, Scott Novich, who is spearheading this for his thesis.

And here is what we wanted to do: we wanted to make it so that sound from the world gets converted in some way so that a deaf person can understand what is being said.

And we wanted to do this, given the power and ubiquity of portable computing, we wanted to make sure that this would run on cell phones and tablets, and also we wanted to make this a wearable, something that you could wear under your clothing.

So here’s the concept. So as I’m speaking, my sound is getting captured by the tablet, and then it’s getting mapped onto a vest that’s covered in vibratory motors, just like the motors in your cell phone.

So as I’m speaking, the sound is getting translated to a pattern of vibration on the vest. Now, this is not just conceptual: this tablet is transmitting Bluetooth, and I’m wearing the vest right now. So as I’m speaking — (Applause) — the sound is getting translated into dynamic patterns of vibration. I’m feeling the sonic world around me.
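The mapping described here amounts to a frequency decomposition: split each short audio frame into bands and drive one motor per band. The following is a hedged sketch of that idea, not Novich and Eagleman's actual algorithm; the naive DFT, band count and normalisation are illustrative assumptions.

```python
# Toy sketch of a sound-to-vest mapping: one vibration motor per
# frequency band of each short audio frame.
import math

def frame_to_motors(samples, n_motors):
    """Naive DFT: return one 0.0-1.0 vibration level per motor,
    each motor covering one slice of the spectrum."""
    n = len(samples)
    mags = []
    for k in range(1, n // 2):          # skip DC, go up to Nyquist
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    band = len(mags) // n_motors        # spectrum bins per motor
    levels = [sum(mags[m * band:(m + 1) * band]) for m in range(n_motors)]
    peak = max(levels) or 1.0
    return [round(v / peak, 2) for v in levels]

# A pure low-frequency tone should light up only the first motor
tone = [math.sin(2 * math.pi * 2 * i / 64) for i in range(64)]
levels = frame_to_motors(tone, 4)
print(levels)
```

A real implementation would use an FFT and perceptually spaced bands, but the principle (spectrum in, vibration pattern out) is the same.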

So, we’ve been testing this with deaf people now, and it turns out that after just a little bit of time, people can start feeling, they can start understanding the language of the vest.

12:13 So this is Jonathan. He’s 37 years old. He has a master’s degree. He was born profoundly deaf, which means that there’s a part of his umwelt that’s unavailable to him. So we had Jonathan train with the vest for four days, two hours a day, and here he is on the fifth day.

 Scott Novich: You.

David Eagleman: So Scott says a word, Jonathan feels it on the vest, and he writes it on the board.

SN: Where. Where.

 DE: Jonathan is able to translate this complicated pattern of vibrations into an understanding of what’s being said.

SN: Touch. Touch.

DE: Now, he’s not doing this — (Applause) — Jonathan is not doing this consciously, because the patterns are too complicated, but his brain is starting to unlock the pattern that allows it to figure out what the data mean, and our expectation is that, after wearing this for about 3 months, he will have a direct perceptual experience of hearing in the same way that when a blind person passes a finger over braille, the meaning comes directly off the page without any conscious intervention at all.

Now, this technology has the potential to be a game-changer, because the only other solution for deafness is a cochlear implant, and that requires an invasive surgery. And this can be built for 40 times cheaper than a cochlear implant, which opens up this technology globally, even for the poorest countries.

13:59 Now, we’ve been very encouraged by our results with sensory substitution, but what we’ve been thinking a lot about is sensory addition.

How could we use a technology like this to add a completely new kind of sense, to expand the human umwelt?

For example, could we feed real-time data from the Internet directly into somebody’s brain, and can they develop a direct perceptual experience? (With no filtering process? Will the brain not succumb to an overload of data?)

14:26 So here’s an experiment we’re doing in the lab.

A subject is feeling a real-time streaming feed from the Net of data for 5 seconds. Then, two buttons appear, and he has to make a choice. He doesn’t know what’s going on. He makes a choice, and he gets feedback after one second.

Now, here’s the thing: The subject has no idea what all the patterns mean, but we’re seeing if he gets better at figuring out which button to press.

He doesn’t know that what we’re feeding is real-time data from the stock market, and he’s making buy and sell decisions. (Laughter)

And the feedback is telling him whether he did the right thing or not. What we’re seeing is whether we can expand the human umwelt so that he comes to have, after several weeks, a direct perceptual experience of the economic movements of the planet.

So we’ll report on that later to see how well this goes. (Laughter)

15:21 Here’s another thing we’re doing: During the talks this morning, we’ve been automatically scraping Twitter for the TED2015 hashtag, and we’ve been doing an automated sentiment analysis, which means, are people using positive words or negative words or neutral?

And while this has been going on, I have been feeling this, and so I am plugged in to the aggregate emotion of thousands of people in real time, and that’s a new kind of human experience, because now I can know how everyone’s doing and how much you’re loving this. (Laughter)  It’s a bigger experience than a human can normally have.
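Sentiment analysis of a hashtag stream can be as simple as counting positive and negative words per tweet and averaging. This toy sketch shows the shape of such a pipeline; the word lists, sample tweets and function names are invented stand-ins, not the system used on stage.

```python
# Toy word-list sentiment pipeline for a stream of tweets.

POSITIVE = {"love", "great", "amazing", "wonderful", "good"}
NEGATIVE = {"boring", "bad", "awful", "terrible", "hate"}

def tweet_sentiment(text):
    """Return +1 (positive), -1 (negative) or 0 (neutral) for one tweet."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return (score > 0) - (score < 0)

def aggregate(tweets):
    """Mean sentiment in [-1, 1] across a batch of tweets: a single
    signal that a vibration pattern could then encode."""
    return sum(tweet_sentiment(t) for t in tweets) / len(tweets)

tweets = ["Loving this talk, amazing demo!",
          "TED2015 is great",
          "This session is boring"]
print(aggregate(tweets))
```

The resulting number is exactly the kind of one-dimensional, slowly varying signal that lends itself to being felt rather than read.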

16:10 We’re also expanding the umwelt of pilots.

So in this case, the vest is streaming nine different measures from this quadcopter (pitch, yaw, roll, orientation and heading, among others), and that improves this pilot’s ability to fly it.

It’s essentially like he’s extending his skin up there, far away.

16:31 And that’s just the beginning. What we’re envisioning is taking a modern cockpit full of gauges and instead of trying to read the whole thing, you feel it.

We live in a world of information now, and there is a difference between accessing big data and experiencing it. (Awesome. We can view nine interactions or trends but have a hard time comprehending the global meaning.)

16:53 So I think there’s really no end to the possibilities on the horizon for human expansion.

Just imagine an astronaut being able to feel the overall health of the International Space Station, or, for that matter, having you feel the invisible states of your own health, like your blood sugar and the state of your microbiome, or having 360-degree vision or seeing in infrared or ultraviolet.

17:22 So the key is this: As we move into the future, we’re going to increasingly be able to choose our own peripheral devices.

We no longer have to wait for Mother Nature’s sensory gifts on her timescales, but instead, like any good parent, she’s given us the tools that we need to go out and define our own trajectory.

So the question now is, how do you want to go out and experience your universe?


17:53 (Applause)

Chris Anderson: Can you feel it?

DE: Yeah.  Actually, this was the first time I felt applause on the vest. It’s nice. It’s like a massage. (Laughter)

CA: Twitter’s going crazy. Twitter’s going mad. So that stock market experiment. This could be the first experiment that secures its funding forevermore, right, if successful?

DE: Well, that’s right, I wouldn’t have to write to NIH anymore.

CA: Well look, just to be skeptical for a minute, I mean, this is amazing, but isn’t most of the evidence so far that sensory substitution works, not necessarily that sensory addition works? I mean, isn’t it possible that the blind person can see through their tongue because the visual cortex is still there, ready to process, and that that is needed as part of it?

DE: That’s a great question. We actually have no idea what the theoretical limits are of what kind of data the brain can take in.

The general story, though, is that it’s extraordinarily flexible. So when a person goes blind, what we used to call their visual cortex gets taken over by other things, by touch, by hearing, by vocabulary.

So what that tells us is that the cortex is kind of a one-trick pony. It just runs certain kinds of computations on things.

And when we look around at things like braille, for example, people are getting information through bumps on their fingers. So I don’t think we have any reason to think there’s a theoretical limit that we know the edge of.

CA: If this checks out, you’re going to be deluged.

There are so many possible applications for this. Are you ready for this?

What are you most excited about, the direction it might go?

DE: I mean, I think there’s a lot of applications here. In terms of beyond sensory substitution, the things I started mentioning about astronauts on the space station, they spend a lot of their time monitoring things, and they could instead just get what’s going on, because what this is really good for is multidimensional data.

The key is this: Our visual systems are good at detecting blobs and edges, but they’re really bad at what our world has become, which is screens with lots and lots of data.

We have to crawl that with our attentional systems. So this is a way of just feeling the state of something, just like the way you know the state of your body as you’re standing around.

So I think heavy machinery, safety, feeling the state of a factory, of your equipment, that’s one place it’ll go right away.

Patsy Z and TEDxSKE shared this link on FB
As humans, we can perceive less than a ten-trillionth of all light waves. “Our experience of reality,” says neuroscientist David Eagleman, “is constrained by our biology.”
He wants to change that. His research into our brain processes has led…|By David Eagleman



