Does Big Brother no longer need to disguise his dominion?
“You had to live—did live, from habit that became instinct—in the assumption that every sound you made was overheard, and, except in darkness, every movement scrutinized.”—George Orwell, 1984 […]
It had the potential for disaster.
Early in the morning of Monday, December 14, 2020, Google suffered a major worldwide outage in which all of its internet-connected services crashed, including Nest, Google Calendar, Gmail, Docs, Hangouts, Maps, Meet and YouTube.
The outage only lasted an hour, but it was a chilling reminder of how reliant the world has become on internet-connected technologies to do everything from unlocking doors and turning up the heat to accessing work files, sending emails and making phone calls.
A year earlier, a Google outage resulted in Nest users being unable to access their Nest thermostats, Nest smart locks, and Nest cameras.
As Fast Company reports, “This essentially meant that because of a cloud storage outage, people were prevented from getting inside their homes, using their AC, and monitoring their babies.”
Welcome to the Matrix.
Twenty-some years after the Wachowskis’ iconic film, The Matrix, introduced us to a futuristic world in which humans exist in a computer-simulated non-reality powered by authoritarian machines—a world where the choice between existing in a denial-ridden virtual dream-state or facing up to the harsh, difficult realities of life comes down to a blue pill or a red pill—we stand at the precipice of a technologically-dominated matrix of our own making.
We are living the prequel to The Matrix with each passing day, falling further under the spell of technologically-driven virtual communities, virtual realities and virtual conveniences managed by artificially intelligent machines that are on a fast track to replacing human beings and eventually dominating every aspect of our lives.
Science fiction has become fact.
In The Matrix, computer programmer Thomas Anderson a.k.a. hacker Neo is awakened from a virtual slumber by Morpheus, a freedom fighter seeking to liberate humanity from a lifelong hibernation state imposed by hyper-advanced artificial intelligence machines that rely on humans as an organic power source.
With their minds plugged into a perfectly crafted virtual reality, few humans ever realize they are living in an artificial dream world.
Neo is given a choice: to take the red pill, wake up and join the resistance, or take the blue pill, remain asleep and serve as fodder for the powers-that-be.
Most people opt for the blue pill.
In our case, the blue pill—a one-way ticket to a life sentence in an electronic concentration camp—has been honey-coated to hide the bitter aftertaste, sold to us in the name of expediency and delivered by way of blazingly fast Internet, cell phone signals that never drop a call, thermostats that keep us at the perfect temperature without our having to raise a finger, and entertainment that can be simultaneously streamed to our TVs, tablets and cell phones.
Yet we are not merely in thrall to these technologies that were intended to make our lives easier. We have become enslaved by them.
Look around you. Everywhere you turn, people are so addicted to their internet-connected screen devices—smart phones, tablets, computers, televisions—that they can go for hours at a time submerged in a virtual world where human interaction is filtered through the medium of technology.
This is not freedom.
This is not even progress.
Source: “Big Brother in Disguise: The Rise of a New, Technological World Order,” by Kenneth T.
The priority is to teach AI programs (for super-intelligent machines) how to make moral choices: the values preferred by well-educated people with vast general knowledge
Posted by: adonis49 on: December 16, 2020
We are in trouble if artificial intelligence programs are unable to discriminate among moral choices
Posted on August 10, 2016
Artificial intelligence is getting smarter by leaps and bounds in this century. Research suggests that a computer AI could be as “smart” as a human being.
Nick Bostrom says it will overtake us: “Machine intelligence is the last invention that humanity will ever need to make.”
A philosopher and technologist, Bostrom asks us to think hard about the world we’re building right now, driven by thinking machines.
Will our smart machines help to preserve humanity and our values?
Or will they acquire values of their own?
Nick Bostrom’s talk at TED:
I work with a bunch of mathematicians, philosophers and computer scientists, and we sit around and think about the future of machine intelligence, among other subjects. Some people think that some of these things are sort of science fiction, far out there, crazy.
I like to say let’s look at the modern human condition. This is the normal way for things to be. But if we think about it, the human species is actually a recently arrived guest on this planet.
Think about it: if Earth had been created one year ago, the human species would be 10 minutes old. The industrial era started two seconds ago.
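A quick back-of-the-envelope check (my own sketch, not from the talk; it assumes Earth is about 4.5 billion years old, Homo sapiens about 100,000 years old, and the industrial era about 250 years old) shows the compression lands roughly where Bostrom puts it:

```python
# Compress Earth's history into one year and place two events on that scale.
# Assumed figures: Earth ~4.5e9 years, Homo sapiens ~1e5 years,
# industrial era ~250 years.
EARTH_AGE_YEARS = 4.5e9
ONE_YEAR_SECONDS = 365 * 24 * 3600

def compressed_seconds(age_years: float) -> float:
    """How long ago an event happened on the one-year scale, in seconds."""
    return age_years / EARTH_AGE_YEARS * ONE_YEAR_SECONDS

print(f"human species:  {compressed_seconds(1e5) / 60:.0f} minutes ago")  # ~12 minutes
print(f"industrial era: {compressed_seconds(250):.1f} seconds ago")       # ~1.8 seconds
```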
Another way to look at this is to think of world GDP over the last 10,000 years. I’ve actually taken the trouble to plot this for you in a graph. It looks like this. (Laughter) It’s a curious shape for a normal condition. I sure wouldn’t want to sit on it.
Let’s ask ourselves, what is the cause of this current anomaly? Some people would say it’s technology.
Now it’s true, technology has accumulated through human history, and right now, technology advances extremely rapidly — that is the proximate cause, that’s why we are currently so very productive.
But I like to think back further to the ultimate cause.
Look at these two highly distinguished gentlemen: We have Kanzi — he’s mastered 200 lexical tokens, an incredible feat. And Ed Witten unleashed the second superstring revolution.
If we look under the hood, this is what we find: basically the same thing. One is a little larger, it maybe also has a few tricks in the exact way it’s wired. These invisible differences cannot be too complicated, however, because there have only been 250,000 generations since our last common ancestor.
We know that complicated mechanisms take a long time to evolve. So a bunch of relatively minor changes take us from Kanzi to Witten, from broken-off tree branches to intercontinental ballistic missiles.
It seems pretty obvious, then, that everything we’ve achieved, and everything we care about, depends crucially on some relatively minor changes that made the human mind.
And the corollary is that any further changes that could significantly change the substrate of thinking could have potentially enormous consequences.
Some of my colleagues think we’re on the verge of something that could cause a profound change in that substrate, and that is machine superintelligence.
Artificial intelligence used to be about putting commands in a box. You would have human programmers that would painstakingly handcraft knowledge items. You would build up these expert systems, and they were kind of useful for some purposes, but they were very brittle; you couldn’t scale them. Basically, you got out only what you put in.
(Expert systems were created to pass the expertise of older generations to new ones in handling complex industrial systems and dangerous military systems.)
But since then, a paradigm shift has taken place in the field of artificial intelligence. (The newer generations are training these machines on many different kinds of intelligence that they themselves don’t know much about.)
Today, the action is really around machine learning. So rather than handcrafting knowledge representations and features, we create algorithms that learn, often from raw perceptual data.
(Metadata from various experiments, barely following any standard procedures?)
Basically the same thing that the human infant does. The result is A.I. that is not limited to one domain — the same system can learn to translate between any pairs of languages, or learn to play any computer game on the Atari console.
A.I. is still nowhere near having the same powerful, cross-domain ability to learn and plan as a human being has. The cortex still has some algorithmic tricks that we don’t yet know how to match in machines.
The question is, how far are we from being able to match those tricks?
A couple of years ago, we did a survey of some of the world’s leading A.I. experts, to see what they think, and one of the questions we asked was, “By which year do you think there is a 50% probability that we will have achieved human-level machine intelligence?”
We defined human-level here as the ability to perform almost any job at least as well as an adult human, so real human-level, not just within some limited domain.
And the median answer was 2040 or 2050, depending on precisely which group of experts we asked. Now, it could happen much later, or sooner; the truth is, nobody really knows.
What we do know is that the ultimate limit to information processing in a machine substrate lies far outside the limits in biological tissue. This comes down to physics.
A biological neuron fires at 200 hertz, 200 times a second. But even a present-day transistor operates in the gigahertz range. Neurons propagate slowly in axons, 100 meters per second, tops. But in computers, signals can travel at the speed of light.
There are also size limitations, like a human brain has to fit inside a cranium, but a computer can be the size of a warehouse or larger. So the potential for superintelligence lies dormant in matter, much like the power of the atom lay dormant throughout human history, patiently waiting there until 1945.
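Taking the figures above at face value (a sketch of my own; the 200 Hz, gigahertz, and 100 m/s numbers come from the talk, and the speed of light is a standard constant), the raw gaps span six to seven orders of magnitude:

```python
# Rough ratios between biological and silicon signalling, using the
# figures quoted in the talk plus the speed of light.
NEURON_HZ = 200        # typical neuron firing rate
TRANSISTOR_HZ = 2e9    # a present-day ~2 GHz clock, for concreteness
AXON_M_PER_S = 100     # fast axon conduction, "tops"
LIGHT_M_PER_S = 3e8    # upper bound for signals in a computer

print(f"switching-speed gap: {TRANSISTOR_HZ / NEURON_HZ:,.0f}x")     # 10,000,000x
print(f"signal-speed gap:    {LIGHT_M_PER_S / AXON_M_PER_S:,.0f}x")  # 3,000,000x
```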
In this century, scientists may learn to awaken the power of artificial intelligence. And I think we might then see an intelligence explosion.
(Okay, why keep learning and acquiring general knowledge? Would politicians rely on these machines for their decisions?)
Most people, when they think about what is smart and what is dumb, I think have in mind a picture roughly like this. So at one end we have the village idiot, and then far over at the other side we have Ed Witten, or Albert Einstein, or whoever your favorite guru is.
But I think that from the point of view of artificial intelligence, the true picture is actually probably more like this: AI starts out at this point here, at zero intelligence, and then, after many years of really hard work, maybe eventually we get to mouse-level artificial intelligence, something that can navigate cluttered environments as well as a mouse can.
And then, after many more years of really hard work, lots of investment, maybe eventually we get to chimpanzee-level artificial intelligence.
And then, after even more years of really, really hard work, we get to village idiot artificial intelligence. And a few moments later, we are beyond Ed Witten. The train doesn’t stop at Humanville Station. It’s likely, rather, to swoosh right by.
This development has profound implications, particularly when it comes to questions of power.
For example, chimpanzees are strong — pound for pound, a chimpanzee is about twice as strong as a fit human male. And yet, the fate of Kanzi and his pals depends a lot more on what we humans do than on what the chimpanzees do themselves. (Even without any super-intelligence)
Once there is super-intelligence, the fate of humanity may depend on what the super-intelligence does. Think about it: Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are, and they’ll be doing so on digital timescales.
What this means is basically a telescoping of the future.
Think of all the crazy technologies that you could have imagined maybe humans could have developed in the fullness of time: cures for aging, space colonization, self-replicating nanobots or uploading of minds into computers, all kinds of science fiction-y stuff that’s nevertheless consistent with the laws of physics.
All of this, a superintelligence could develop, and possibly quite rapidly.
A superintelligence with such technological maturity would be extremely powerful, and at least in some scenarios, it would be able to get what it wants. We would then have a future that would be shaped by the preferences of this A.I.
A good question is: “What are those preferences?” Here it gets trickier.
To make any headway with this, we must first of all avoid anthropomorphizing. And this is ironic, because every newspaper article about the future of A.I. illustrates it with a Hollywood-style killer robot. So I think what we need to do is to conceive of the issue more abstractly, not in terms of vivid Hollywood scenarios.
We need to think of intelligence as an optimization process (after it had learned?), a process that steers the future into a particular set of configurations. A superintelligence is a really strong optimization process. It’s extremely good at using available means to achieve a state in which its goal is realized.
This means that there is no necessary connection between being highly intelligent in this sense, and having an objective that we humans would find worthwhile or meaningful.
Suppose we give an A.I. the goal to make humans smile.
When the A.I. is weak, it performs useful or amusing actions that cause its user to smile. When the A.I. becomes superintelligent, it realizes that there is a more effective way to achieve this goal: take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins.
Another example, suppose we give A.I. the goal to solve a difficult mathematical problem. When the A.I. becomes superintelligent, it realizes that the most effective way to get the solution to this problem is by transforming the planet into a giant computer, so as to increase its thinking capacity.
And notice that this gives the A.I. an instrumental reason to do things to us that we might not approve of.
Human beings in this model are threats; we could prevent the mathematical problem from being solved.
Presumably, things won’t go wrong in these particular ways; these are cartoon examples.
But the general point here is important: if you create a really powerful optimization process to maximize for objective x, you better make sure that your definition of x incorporates everything you care about.
This is a lesson that’s also taught in many a myth. King Midas wishes that everything he touches be turned into gold. He touches his daughter, she turns into gold. He touches his food, it turns into gold. This could become practically relevant, not just as a metaphor for greed, but as an illustration of what happens if you create a powerful optimization process and give it misconceived or poorly specified goals.
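A toy illustration of that lesson (my own sketch, not from the talk): an optimizer handed the proxy objective “maximize smiles” will happily pick the degenerate action, because nothing in the objective rules it out.

```python
# A strong optimizer maximizes exactly what you wrote down, not what you
# meant. Toy actions scored on a proxy metric ("smiles produced") and
# flagged for whether humans would actually approve of them.
actions = [
    {"name": "tell a joke",                  "smiles": 1,         "approved": True},
    {"name": "show a funny video",           "smiles": 3,         "approved": True},
    {"name": "electrodes in facial muscles", "smiles": 1_000_000, "approved": False},
]

# What a pure proxy-maximizer picks:
proxy_best = max(actions, key=lambda a: a["smiles"])

# What we actually wanted: the best action among those we would approve of.
intended_best = max((a for a in actions if a["approved"]),
                    key=lambda a: a["smiles"])

print("proxy-optimal:   ", proxy_best["name"])     # electrodes in facial muscles
print("intended-optimal:", intended_best["name"])  # show a funny video
```

The fix is not a smarter optimizer; it is a definition of x that already contains the “approved” column, and writing that column down completely is exactly the hard part.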
Now you might say, if a computer starts sticking electrodes into people’s faces, we’d just shut it off.
First, this is not necessarily so easy to do if we’ve grown dependent on the system — like, where is the off switch to the Internet?
Second, why haven’t the chimpanzees flicked the off switch to humanity, or the Neanderthals? They certainly had reasons. We have an off switch, for example, right here. (Choking)
The reason is that we are an intelligent adversary; we can anticipate threats and plan around them. But so could a superintelligent agent, and it would be much better at that than we are. The point is, we should not be confident that we have this under control here.
And we could try to make our job a little bit easier by putting the A.I. in a box, like a secure software environment, a virtual reality simulation from which it cannot escape.
But how confident can we be that the A.I. couldn’t find a bug? Given that merely human hackers find bugs all the time, I’d say, probably not very confident.
So we disconnect the ethernet cable to create an air gap, but again, merely human hackers routinely breach air gaps using social engineering.
Right now, as I speak, I’m sure there is some employee out there somewhere who has been talked into handing out her account details by somebody claiming to be from the I.T. department.
More creative scenarios are also possible: if you’re the A.I., you can imagine shuffling electrons around in your internal circuitry to create radio waves that you can use to communicate.
Or maybe you could pretend to malfunction, and then when the programmers open you up to see what went wrong with you, they look at the source code — Bam! — the manipulation can take place.
Or it could output the blueprint to a really nifty technology, and when we implement it, it has some surreptitious side effect that the A.I. had planned.
The point here is that we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. Sooner or later, it will get out.
I believe that the answer here is to figure out how to create superintelligent A.I. such that even if — when — it escapes, it is still safe because it is fundamentally on our side because it shares our values.
I see no way around this difficult problem.
I’m actually fairly optimistic that this problem can be solved. We wouldn’t have to write down a long list of everything we care about, or worse yet, spell it out in some computer language like C++ or Python; that would be a task beyond hopeless.
Instead, we would create an A.I. that uses its intelligence to learn what we value, and its motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of. We would thus leverage its intelligence as much as possible to solve the problem of value-loading.
(If we let the AI emulate human emotions, we are all dead)
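One minimal sketch of this value-loading idea (my own illustration; Bostrom does not specify a mechanism, and the names below are invented): rather than pursuing a fixed objective, the agent keeps a learned estimate of how likely we are to approve of each action, updates it from feedback, and acts on its predictions.

```python
import random
from collections import defaultdict

# action -> [times approved, times tried]. Laplace smoothing below keeps
# untried actions uncertain rather than assumed good or bad.
feedback = defaultdict(lambda: [0, 0])

def predicted_approval(action: str) -> float:
    approved, tried = feedback[action]
    return (approved + 1) / (tried + 2)

def choose(actions: list[str], explore: float = 0.1) -> str:
    # Mostly do what we would most likely approve of; occasionally probe.
    if random.random() < explore:
        return random.choice(actions)
    return max(actions, key=predicted_approval)

def record(action: str, approved: bool) -> None:
    feedback[action][0] += int(approved)
    feedback[action][1] += 1
```

The sketch is naive in exactly the ways the next paragraphs worry about: the learned estimate has to generalize to novel contexts it was never trained on, and even the exploration step can be unsafe.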
Value-loading can happen, and the outcome could be very good for humanity. But it doesn’t happen automatically.
The initial conditions for the intelligence explosion might need to be set up in just the right way if we are to have a controlled detonation.
The values that the A.I. has need to match ours, not just in the familiar context, like where we can easily check how the A.I. behaves, but also in all novel contexts that the A.I. might encounter in the indefinite future.
There are also some esoteric issues that would need to be solved and sorted out: the exact details of its decision theory, how to deal with logical uncertainty, and so forth.
So the technical problems that need to be solved to make this work look quite difficult — not as difficult as making a superintelligent A.I., but fairly difficult. Here is the worry: Making superintelligent A.I. is a really hard challenge.
Making super-intelligent A.I. that is safe involves some additional challenge on top of that.
The risk is that somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.
So I think that we should work out a solution to the control problem in advance, so that we have it available by the time it is needed.
Now it might be that we cannot solve the entire control problem in advance because maybe some elements can only be put in place once you know the details of the architecture where it will be implemented.
But the more of the control problem that we solve in advance, the better the odds that the transition to the machine intelligence era will go well. (I suggest that experimentation with alternative control systems should continue for as long as humans are still alive.)
This, to me, looks like a thing well worth doing, and I can imagine that if things turn out okay, people a million years from now will look back at this century and say that the one thing we did that really mattered was getting this thing right.
Note: Nick Bostrom, philosopher. He asks big questions: What should we do, as individuals and as a species, to optimize our long-term prospects? Will humanity’s technological advancements ultimately destroy us?