
Can’t control what our intelligent machines are learning, once they have learned?

We’re asking questions to computation that have no single right answers, that are subjective and open-ended and value-laden

These machines could simply be reflecting our biases. These systems could be picking up on our biases, amplifying them and showing them back to us, while we tell ourselves, “We’re just doing objective, neutral computation.”

Machine intelligence is here, and we’re already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control.

Zeynep Tufekci explains how intelligent machines can fail in ways that don’t fit human error patterns — and in ways we won’t expect or be prepared for. “We cannot outsource our responsibilities to machines,” she says. “We must hold on ever tighter to human values and human ethics.”

Zeynep Tufekci is a techno-sociologist who asks big questions about our societies and our lives as both algorithms and digital connectivity spread.
Filmed June 2016

I started my first job as a computer programmer in my very first year of college — basically, as a teenager.

Soon after I started working, writing software at a company, a manager came down to where I was and whispered to me, “Can he tell if I’m lying?” There was nobody else in the room.

“Can who tell if you’re lying? And why are we whispering?”

The manager pointed at the computer in the room. “Can he tell if I’m lying?” Well, that manager was having an affair with the receptionist.

And I was still a teenager. So I whisper-shouted back to him, “Yes, the computer can tell if you’re lying.”

I laughed, but actually, the laugh’s on me. Nowadays, there are computational systems that can suss out emotional states and even lying from processing human faces. Advertisers and even governments are very interested.

I had become a computer programmer because I was one of those kids crazy about math and science.

But somewhere along the line I’d learned about nuclear weapons, and I’d gotten really concerned with the ethics of science. I was troubled.

Because of family circumstances, I also needed to start working as soon as possible. So I thought to myself, hey, let me pick a technical field where I can get a job easily and where I don’t have to deal with any troublesome questions of ethics. So I picked computers.

Well, ha, ha, ha! All the laughs are on me. Nowadays, computer scientists are building platforms that control what a billion people see every day. They’re developing cars that could decide who to run over. They’re even building machines and weapons, drones that might kill human beings in war. It’s ethics all the way down.

Machine intelligence is here. We’re now using computation to make all sorts of decisions, but also new kinds of decisions. We’re asking questions to computation that have no single right answers, that are subjective and open-ended and value-laden.

We’re asking questions like, “Who should the company hire?” “Which update from which friend should you be shown?” “Which convict is more likely to reoffend?” “Which news item or movie should be recommended to people?”

We’ve been using computers for a while, but this is different. This is a historical twist, because we cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon.

Are airplanes safer? Did the bridge sway and fall? There, we have agreed-upon, fairly clear benchmarks, and we have laws of nature to guide us. We have no such anchors and benchmarks for decisions in messy human affairs.

To make things more complicated, our software is getting more powerful, but it’s also getting less transparent and more complex.

In the past decade, complex algorithms have made great strides. They can recognize human faces. They can decipher handwriting. They can detect credit card fraud and block spam and they can translate between languages. They can detect tumors in medical imaging. They can beat humans in chess and Go.

Much of this progress comes from a method called “machine learning.” Machine learning is different from traditional programming, where you give the computer detailed, exact, painstaking instructions. It’s more like you take the system and you feed it lots of data, including unstructured data, like the kind we generate in our digital lives.

And the system learns by churning through this data. (Machine learning to apply specific statistical packages?) And also, crucially, these systems don’t operate under a single-answer logic. They don’t produce a simple answer; it’s more probabilistic: “This one is probably more like what you’re looking for.”
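To make the contrast concrete, here is a minimal, hypothetical sketch (my illustration, not anything from the talk): a hand-written rule next to a tiny scikit-learn model trained on a few made-up messages. The learned model returns a probability rather than a verdict.

```python
# Hand-written rule vs. a learned, probabilistic rule (illustrative sketch only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Traditional programming: a human spells out the rule explicitly.
def is_spam_rule_based(message: str) -> bool:
    return "free money" in message.lower()

# Machine learning: the system churns through labeled examples instead.
messages = ["free money now", "lunch at noon?", "claim your free prize", "meeting moved to 3pm"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (tiny made-up dataset)

vectorizer = CountVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(messages), labels)

# No single-answer logic: the output is a probability, not a yes/no.
prob_spam = model.predict_proba(vectorizer.transform(["claim free money today"]))[0, 1]
print(f"P(spam) = {prob_spam:.2f}")  # "this one is probably more like what you're looking for"
```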

The upside is that this method is really powerful. The head of Google’s AI systems called it “the unreasonable effectiveness of data.” (Unreasonable if data are arranged or fed the wrong ways, and into statistical models that barely match human behavior)

The downside is, we don’t really understand what the system learned. In fact, that’s its power. This is less like giving instructions to a computer; it’s more like training a puppy-machine-creature we don’t really understand or control. So this is our problem.

It’s a problem when this artificial intelligence system gets things wrong. It’s also a problem when it gets things right, because we don’t even know which is which when it’s a subjective problem. We don’t know what this thing is thinking.

Consider a hiring algorithm — a system used to hire people, using machine-learning systems. Such a system would have been trained on previous employees’ data and instructed to find and hire people like the existing high performers in the company. (And the idiosyncrasies among cultures?)

Sounds good. I once attended a conference that brought together human resources managers and executives, high-level people, using such systems in hiring. They were super excited. They thought that this would make hiring more objective, less biased, and give women and minorities a better shot against biased human managers.

And look — human hiring is biased. I know. I mean, in one of my early jobs as a programmer, my immediate manager would sometimes come down to where I was really early in the morning or really late in the afternoon, and she’d say, “Zeynep, let’s go to lunch!” I’d be puzzled by the weird timing. It’s 4pm. Lunch?

I was broke, so free lunch. I always went. I later realized what was happening. My immediate manager had not confessed to her higher-ups that the programmer she hired for a serious job was a teen girl who wore jeans and sneakers to work. I was doing a good job; I just looked wrong and was the wrong age and gender.

So hiring in a gender- and race-blind way certainly sounds good to me. But with these systems, it is more complicated, and here’s why: Currently, computational systems can infer all sorts of things about you from your digital crumbs, even if you have not disclosed those things.

They can infer your sexual orientation, your personality traits, your political leanings. They have predictive power with high levels of accuracy. Remember — for things you haven’t even disclosed. This is inference. (or interference in personal rights)

I have a friend who developed such computational systems to predict the likelihood of clinical or postpartum depression from social media data. The results are impressive. Her system can predict the likelihood of depression months before the onset of any symptoms — months before.

No symptoms, there’s prediction. She hopes it will be used for early intervention. Great! But now put this in the context of hiring.

At this human resources managers conference, I approached a high-level manager in a very large company, and I said to her, “Look, what if, unbeknownst to you, your system is weeding out people with a high future likelihood of depression? They’re not depressed now, just maybe in the future, more likely. What if it’s weeding out women more likely to be pregnant in the next year or two but aren’t pregnant now? What if it’s hiring aggressive people because that’s your workplace culture?” You can’t tell this by looking at gender breakdowns.

Those may be balanced. And since this is machine learning, not traditional coding, there is no variable there labeled “higher risk of depression,” “higher risk of pregnancy,” “aggressive guy scale.” Not only do you not know what your system is selecting on, you don’t even know where to begin to look. It’s a black box. It has predictive power, but you don’t understand it.
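To see how a system with no variable labeled “higher risk of depression” or “gender” can still select on such things, here is a hypothetical sketch with synthetic data (my illustration, not any real hiring system): the model never sees gender, yet a feature that merely correlates with it ends up driving the scores, and nothing inside the fitted model says so.

```python
# Hypothetical sketch: a "gender-blind" model can still select on a correlated proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)              # 0 / 1, never shown to the model
proxy = gender + rng.normal(0, 0.5, n)      # some digital-crumb feature correlated with gender
skill = rng.normal(0, 1, n)                 # a genuinely job-relevant feature

# Past "high performer" labels that were themselves tilted toward gender == 1.
past_label = (skill + 1.5 * gender + rng.normal(0, 1, n)) > 1

X = np.column_stack([proxy, skill])         # the model only ever sees proxy and skill
model = LogisticRegression().fit(X, past_label)
scores = model.predict_proba(X)[:, 1]

print("mean score, gender 0:", round(float(scores[gender == 0].mean()), 3))
print("mean score, gender 1:", round(float(scores[gender == 1].mean()), 3))
# The gap persists even though no column is named "gender":
# the black box found the proxy on its own.
```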

“What safeguards,” I asked, “do you have to make sure that your black box isn’t doing something shady?” She looked at me as if I had just stepped on 10 puppy tails.

She stared at me and she said, “I don’t want to hear another word about this.” And she turned around and walked away. Mind you — she wasn’t rude. It was clear: what I don’t know isn’t my problem, go away, death stare.

Such a system may even be less biased than human managers in some ways. And it could make monetary sense. But it could also lead to a steady but stealthy shutting out of the job market of people with a higher risk of depression. Is this the kind of society we want to build, without even knowing we’ve done this, because we turned decision-making over to machines we don’t totally understand?

Another problem is this: these systems are often trained on data generated by our actions, human imprints. Well, they could just be reflecting our biases, and these systems could be picking up on our biases and amplifying them and showing them back to us, while we’re telling ourselves, “We’re just doing objective, neutral computation.”

Researchers found that on Google, women are less likely than men to be shown job ads for high-paying jobs. And searching for African-American names is more likely to bring up ads suggesting criminal history, even when there is none. Such hidden biases and black-box algorithms, which researchers sometimes uncover and sometimes don’t, can have life-altering consequences.

In Wisconsin, a defendant was sentenced to six years in prison for evading the police. You may not know this, but algorithms are increasingly used in parole and sentencing decisions, and his sentence took an algorithmic risk score into account. He wanted to know: how is this score calculated? It’s a commercial black box.

The company refused to have its algorithm be challenged in open court. But ProPublica, an investigative nonprofit, audited that very algorithm with what public data they could find, and found that its outcomes were biased and its predictive power was dismal, barely better than chance, and it was wrongly labeling black defendants as future criminals at twice the rate of white defendants.
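The kind of check ProPublica ran can be stated simply, even when the model itself stays a black box: compare false positive rates across groups using only the predictions and the outcomes. A minimal sketch with made-up records (not ProPublica’s data):

```python
# Audit sketch: false positive rate by group, using only predictions and outcomes.
def false_positive_rate(predicted_high_risk, reoffended):
    false_pos = sum(p and not a for p, a in zip(predicted_high_risk, reoffended))
    negatives = sum(not a for a in reoffended)
    return false_pos / negatives if negatives else 0.0

# Made-up records: (group, predicted high risk, reoffended within two years)
records = [
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", False, False), ("B", True, False), ("B", False, False), ("B", False, True),
]

for group in ("A", "B"):
    preds = [p for g, p, a in records if g == group]
    actual = [a for g, p, a in records if g == group]
    print(group, "false positive rate:", round(false_positive_rate(preds, actual), 2))
# A large gap between groups is exactly the kind of disparity such an audit flags.
```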

Consider this case: This woman was late picking up her godsister from a school in Broward County, Florida. Running down the street with a friend of hers, they spotted an unlocked kid’s bike and a scooter on a porch and foolishly jumped on them. As they were speeding off, a woman came out and said, “Hey! That’s my kid’s bike!” They dropped it, they walked away, but they were arrested.

She was wrong, she was foolish, but she was also just 18. She had a couple of juvenile misdemeanors. Meanwhile, a man had been arrested for shoplifting at Home Depot — 85 dollars’ worth of stuff, a similar petty crime. But he had two prior armed robbery convictions.

But the algorithm scored her as high risk, and not him. Two years later, ProPublica found that she had not reoffended; it was just hard for her to get a job with her record. He, on the other hand, did reoffend and is now serving an eight-year prison term for a later crime. Clearly, we need to audit our black boxes and not give them this kind of unchecked power.

Audits are great and important, but they don’t solve all our problems. Take Facebook’s powerful news feed algorithm — you know, the one that ranks everything and decides what to show you from all the friends and pages you follow. Should you be shown another baby picture?

A sullen note from an acquaintance? An important but difficult news item? There’s no right answer. Facebook optimizes for engagement on the site: likes, shares, comments.
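Mechanically, “optimizing for engagement” can be as simple as the hypothetical scoring rule below (an illustration, not Facebook’s actual formula): each story is scored from its expected likes, shares and comments, the feed is sorted by that score, and importance never enters the calculation.

```python
# Hypothetical engagement ranking, not Facebook's actual formula.
stories = [
    {"title": "Baby picture",           "likes": 120, "shares": 5,  "comments": 30},
    {"title": "Ice Bucket Challenge",   "likes": 300, "shares": 80, "comments": 60},
    {"title": "Ferguson protest story", "likes": 8,   "shares": 12, "comments": 4},
]

def engagement_score(story):
    # The weights are made up; the point is that only engagement signals count.
    return story["likes"] + 2 * story["shares"] + 3 * story["comments"]

for story in sorted(stories, key=engagement_score, reverse=True):
    print(f"{engagement_score(story):5d}  {story['title']}")
# "Important but difficult" never enters the score, so that item ranks last.
```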

 In August of 2014, protests broke out in Ferguson, Missouri, after the killing of an African-American teenager by a white police officer, under murky circumstances. The news of the protests was all over my algorithmically unfiltered Twitter feed, but nowhere on my Facebook. Was it my Facebook friends?

I disabled Facebook’s algorithm, which is hard because Facebook keeps wanting to make you come under the algorithm’s control, and saw that my friends were talking about it. It’s just that the algorithm wasn’t showing it to me. I researched this and found this was a widespread problem.

The story of Ferguson wasn’t algorithm-friendly. It’s not “likable.” Who’s going to click on “like?” It’s not even easy to comment on. Without likes and comments, the algorithm was likely showing it to even fewer people, so we didn’t get to see this.

Instead, that week, Facebook’s algorithm highlighted the ALS Ice Bucket Challenge. Worthy cause; dump ice water, donate to charity, fine. But it was super algorithm-friendly. The machine made this decision for us. A very important but difficult conversation might have been smothered, had Facebook been the only channel.

Finally, these systems can also be wrong in ways that don’t resemble human systems. Do you guys remember Watson, IBM’s machine-intelligence system that wiped the floor with human contestants on Jeopardy? It was a great player.

But then, for Final Jeopardy, Watson was asked this question: “Its largest airport is named for a World War II hero, its second-largest for a World War II battle.”

(Hums Final Jeopardy music)

Chicago. The two humans got it right. Watson, on the other hand, answered “Toronto” — for a US city category! The impressive system also made an error that a human would never make, a second-grader wouldn’t make.

Our machine intelligence can fail in ways that don’t fit error patterns of humans, in ways we won’t expect and be prepared for. It’d be lousy not to get a job one is qualified for, but it would triple suck if it was because of stack overflow in some subroutine.

In May of 2010, a flash crash on Wall Street fueled by a feedback loop in Wall Street’s “sell” algorithm wiped a trillion dollars of value in 36 minutes. I don’t even want to think what “error” means in the context of lethal autonomous weapons.

Yes, humans have always been biased. Decision makers and gatekeepers, in courts, in news, in war … they make mistakes; but that’s exactly my point. We cannot escape these difficult questions. We cannot outsource our responsibilities to machines.

Artificial intelligence does not give us a “Get out of ethics free” card.

 Data scientist Fred Benenson calls this math-washing. We need the opposite.

We need to cultivate algorithm suspicion, scrutiny and investigation.

We need to make sure we have algorithmic accountability, auditing and meaningful transparency.

We need to accept that bringing math and computation to messy, value-laden human affairs does not bring objectivity; rather, the complexity of human affairs invades the algorithms.

Yes, we can and we should use computation to help us make better decisions. But we have to own up to our moral responsibility to judgment, and use algorithms within that framework, not as a means to abdicate and outsource our responsibilities to one another as human to human.

Machine intelligence is here. That means we must hold on ever tighter to human values and human ethics.

Era of Abundant Information and Fleeting Expertise

And how could we deeply learn anything of value?

How to learn is changing, and it’s changing fast.

In the past, we used to learn by doing — we called them apprenticeships.

Then the model shifted, and we learned by going to school.

Now, it’s going back to the apprenticeship again, but this time, you are both the apprentice and the master.

This post is about how to learn during exponential times, when information is abundant and expertise is fleeting.

Passion, Utility, Research and Focus

First, choosing what you want to learn and becoming great at it is tough.

As I wrote in my last post, doing anything hard and doing it well takes grit. (It takes 10,000 hours of doing to become talented in anything you like)

Here are a few tips I’ve learned over the years to help choose what you want to learn:

  1. Start with your passions: Focus on something you love, or learn a new skill in service of your passion. If you want to learn how to code because it will land you a high-paying job, you’re not going to have the drive to spend countless, frustrating hours debugging your code. If you want to become a doctor because your parents want you to, you’re not going to make it through med school. Focus on the things YOU love and do it because it’s YOUR choice. (Money is second in rank. The first is the passion that no money can buy. Adonis49 quote)
  2. Make it useful: Time is the scarcest resource. While you can spend the time learning for the sake of learning, I think learning should be a means to an end. Without a target, you’ll miss every time. Figure out what you want to do, and then identify the skills you need to acquire to accomplish that goal. (And the end of learning?)
  3. Read, watch and analyze: Read everything. Read all the time. (The writing of just the experts in the field?) Start with the experts. Read the material they write or blog. Watch their videos, their interviews. Do you agree with them? Why?
  4. Talk to people: Once you’re done reading, actually talk to real human beings that are doing what you want to do. Do whatever you can to reach them. Ask for their advice. You’ll be shocked by what you can learn this way. (Connectivity part of the learning process?)
  5. Focus on your strengths: Again, time is precious. You can’t be a doctor, lawyer, coder, writer, rocket scientist, and rock star all at the same time… at least not right now. Focus on what you are good at and enjoy most and try to build on top of those skills. Many people, especially competitive people, tend to feel like they need to focus on improving the things they are worst at doing. This is a waste of time. Instead, focus on improving the things you are best at doing — you’ll find this to be a much more rewarding and lucrative path. (When it becomes an automatic reaction, there is no need to focus much?)

Learn by Doing

There is no better way to learn than by doing. (After you learned the basics?)

I’m a fan of the “apprentice” model. Study the people who have done it well and then go work for them.

If they can’t (or won’t) pay you, work for free until you are good enough that they’ll need to hire you. (For how long? Slaves get paid somehow)

Join a startup doing what you love — it’s much cheaper than paying an expensive tuition, and a hell of a lot more useful.

I don’t think school (or grad school) is necessarily the right answer anymore.

Here’s one reason why:

This week I visited the Hyperloop Technologies headquarters in Los Angeles (full disclosure: I am on the board of the company).

The interim CEO and CTO Brogan BamBrogan showed me around the office, and we stopped at one particularly impressive-looking, massive machine (details confidential).

As it turns out, the team of Hyperloop engineers who had designed, manufactured, tested, redesigned, remanufactured, and operated this piece of equipment did so in 11 weeks, for pennies on the dollar.

At MIT, Stanford or Caltech, building this machine would have been someone’s PhD thesis…

Except that the PhD candidate would have spent three years doing the same amount of work, and written a paper about it, rather than help to redesign the future of transportation.

Meanwhile, the Hyperloop engineers created this tech (and probably a half-dozen other devices) in a fraction of the time while creating value for a company that will one day be worth billions.

Full Immersion and First Principles

You have to be fully immersed if you want to really learn.

Connect the topic with everything you care about — teach your friends about it, only read things that are related to the topic, surround yourself with it.

Make learning the most important thing you can possibly do and connect to it in a visceral fashion.

As part of your full immersion, dive into the very basic underlying principles governing the skill you want to acquire.

This is an idea Elon Musk (CEO of Tesla, SpaceX) constantly refers to: “The normal way we conduct our lives is we reason by analogy. We are doing this because it’s like what other people are doing. [With first principles] you boil things down to the most fundamental truths … and then reason up from there.”

You can’t skip the fundamentals — invest the time to learn the basics before you get to the advanced stuff.

Experiment, Experiment, Experiment

Experiment, fail, experiment, fail, and experiment. (The problem is that few disciplines teach you Experimental Designing Mind and fundamentals)

One of Google’s innovation principles and mantras is: “Never fail to fail.”

Don’t be afraid if you are really bad at the beginning: you learn most from your mistakes.

When Elon hires people, he asks them to describe a time they struggled with a hard problem. “When you struggle with a problem, that’s when you understand it,” he says, “Anyone who’s struggled hard with a problem never forgets it.”

(You struggle because you fail to listen to the new perspectives of other people to tackle the problem)

Digital Tools

We used to have to go to school to read textbooks and gain access to expert teachers and professors.

Nowadays, literally all of these resources are available online for free.

There are hundreds of free education sites like Khan Academy, Udemy, or Udacity.

There are thousands of MOOCs (massive online open courses) from the brightest experts from top universities on almost every topic imaginable.

Want to learn a language? Download an app like Duolingo (or even better, pack up your things and move to that country).

Want to learn how to code? Sign up for a course on Codecademy or MIT OpenCourseWare.

The resources are there and available — you just have to have the focus and drive to find them and use them.

Finally…The Next Big Shift in Learning

In the future, the next big shift in learning will happen as we adopt virtual worlds and augmented reality.

It will be the next best thing to “doing” — we’ll be able to simulate reality and experiment (perhaps beyond what we can experiment with now) in virtual and augmented environments.

Add that to the fact that we’ll have an artificial intelligence tutor by our side, showing us the ropes and automatically customizing our learning experience.

Patsy Z shared this link via Singularity Hub: as usual, the best advice on learning from the man himself, Peter H. Diamandis (singularityhub.com).

Different urgent learning resolutions  

I got this revelation: schools use different methods for teaching languages and the natural sciences. Kids are taught the alphabet, words, syntax, grammar and spelling, and only much later are asked to compose essays. Why is this process not applied to learning the natural and behavioral sciences?

 

I strongly disagree with the pedagogy of learning languages. First, we know that children learn to talk years before they can read; why, then, are kids not encouraged to tell verbal stories before they can read? Why are kids’ stories not recorded and then transcribed into written words, to help them realize that what they read is just another storytelling medium?

Second, we know that kids can memorize whole short sentences, verbally and visually, before they understand the fundamentals. Why don’t we develop their cognitive abilities before forcing the traditional, harmful methodology upon them? The outcomes are plain: kids end up devoid of verbal intelligence, hate to read, and will not attempt to write even after they graduate from university.

 

Arithmetic and math are used as the foundations for learning the natural sciences. We learn to manipulate equations, then solve examples and problems by finding the proper equation that corresponds to the natural problem (actually, we are trained to memorize the appropriate equations that apply to the problem given!).

Why are we not trained to compose a story that corresponds to an equation, or to a set of equations (a model)?

If kids are asked to compose essays as the final outcome of learning languages, then why are students not trained to compose the natural phenomenon from a given set of equations? Would that not be the proper meaning of comprehending the physical world, or even the world of human behavior?

Would not the skill of modeling a system be more meaningful and straightforward after we learn to compose a world from a model or a set of equations? Consequently, scientists and engineers, by researching the natural phenomena and man-made systems that correspond to mathematical models, would be challenged to learn about natural phenomena; their modeling abilities would thus be enhanced, more valid, and more instructive!

If mathematicians were trained to compose or visualize the appropriate natural phenomenon or human behavior from equations and mathematical models, then the scientific communities in the natural and human sciences would be far richer in quality and quantity.
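As one small illustration of composing a phenomenon from a set of equations (my example, not from the original text): take the logistic growth model dN/dt = rN(1 − N/K) and read it back as a story about a population that grows quickly while it is small and levels off as it approaches the carrying capacity K. A minimal sketch, with made-up parameter values:

```python
# Illustrative example: reading a "story" out of a model, dN/dt = r*N*(1 - N/K).
r, K = 0.5, 1000.0       # growth rate and carrying capacity (made-up values)
N, dt = 10.0, 0.1        # start with a small population, take small time steps

for step in range(1201):
    # The story the equations tell:
    #  - while N << K, the (1 - N/K) term is near 1, so growth is nearly exponential
    #  - as N approaches K, growth slows and the population levels off
    if step % 300 == 0:
        print(f"t = {step * dt:5.1f}   N = {N:7.1f}")
    N += r * N * (1 - N / K) * dt    # Euler step of the model

print(f"The population settles near the carrying capacity K = {K:.0f}")
```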

Article #22 (April 22, 2005)

“How can an undergraduate class assimilate course material of 1,000 pages? Why so much material for a single course in the first place?”

Assimilating a new discipline or new methods in a single course is too strong a term. 

You indeed can scarcely describe the process of comprehending a topic and assimilating it, even within a specialized discipline, without overshooting the mark.

Now that the title might have captured your attention let me describe my teaching methods that may permit students to cover an overview of such a vast discipline as Human Factors in one semester course.

I encourage my students to learn and read as trained engineers should.

They are to locate first the graphs, tables and figures in a chapter, try to understand the topic by concentrating their attention on these tools of learning, and then read the preceding and following sections if they fail to comprehend the graphs, tables and figures on their own merit.

You should all know that if a picture is worth a thousand words, then a graph, table or figure might be worth ten thousand words.

I assign a graph, table or a figure to students to hand copy it, write a short presentation, and then copy it on a transparency sheet to present to class.

After presenting a given graph, the student fields a few questions from the class, and then I take over to explain and expand on the content of the transparency.

By training students to learn through these tools, and giving them an opportunity to appreciate them as engineers should, I am able to cover most of the course material during the semester.

Another method is handing out two take-home exams in addition to the regular exams. Take-home exams are handed out three weeks before the due dates and cover questions from all the chapters, which need to be read thoroughly and supplemented from other sources for substantiation.

Students are encouraged to take these take-home assignments very seriously, not only because they weigh heavily in points but also because a few of the exam questions will be selected from the take-home assignments.

Assignments and lab projects are other methods for revisiting the course materials and other sources.

The quizzes and regular exams are open books, open notes and whatever printouts from the internet students are willing to carry to class. 

I even encourage students to use an efficient cheat-sheet technique that conveys the material effectively, since most of the chapters are interconnected.

The main subjects, such as designing interfaces, displays and controls, occupational safety and health, environmental and organizational factors in the workplace, designing workstations, capabilities and limitations of human users, sensing and perception capacities, and physical and cognitive methods, have links to many other chapters in addition to the main one.

Thus, if a student selects a subject as the central item, he can link different sections of other chapters to it by writing down the page numbers of the source sections. These cheat sheets can be excellent learning tools for answering open-book exams without having to fumble through hundreds of pages for each question.

A different technique to assimilating course materials is through questions. 

The catch is that questions on assignments, lab projects or take-home exams have to be submitted in writing.

The written question has to follow a certain process: first, state the subject matter in complete sentences; second, explain how the question was understood; and last, express the difficulties, with links to the chapters that had to be read in order to comprehend the subject.

I am still waiting for a single written question, and it might be for the best: the requirement eliminates a host of redundant questions asked out of laziness, out of failing to read the whole question sheet carefully, or out of shirking the effort to browse diligently through the course materials.

