Adonis Diaries


Your primer on how to talk about the “fourth industrial revolution”

As world leaders gather for Davos, one of the common and continuing themes is the emerging threat of automation and the consequent effect on economic inequality and global stability.

Responding to the so-called fourth industrial revolution has become one of the biggest topics of discussion in the world of technology and politics, and it’s not surprising that anxiety runs high.

A lot of the current conversation has been shaped by research with scary conclusions such as “47% of total US employment is at risk from automation.”

In a survey last year of 1,600 Quartz readers, 90% of respondents thought that up to half of jobs would be lost to automation within five years—and we found that everyone thought it was going to happen to someone else.

In our survey, for example, 91% of respondents who work don't think there is any risk to their own job.

If it’s true that half the jobs will disappear, then it’s going to be an entirely different world.

As leaders and policy makers consider the broader implications of automation, we believe it’s important that they remember that the predictions and conclusions in the analytically derived studies—such as the 47% number—come from just a few sources.

All the studies on the impact of AI have strengths and weaknesses in their approach. To draw deeper insight requires taking a closer look at the methodology and data sources they use.

🤖🤖🤖

The studies

We have attempted to summarize the outputs and approach of 3 studies—from Oxford University (pdf), McKinsey Global Institute, and Intelligentsia.ai (our own research firm acquired by Quartz in 2017).

We chose the Oxford study because it was the first of its kind and highly influential as a result. We chose MGI because of its scale. And we chose our own because we understand it in great detail.

🤖🤖🤖

Our conclusions

We conducted our own research because we wanted to understand the key drivers of human skills and capability replacement.

We were both surprised and pleased to find that, even though machines indeed meet or exceed human capabilities in many areas, there is one area, common across the research, where artificial intelligence is no match for humans: unpredictability.

Where a job requires people to deal with lots of unpredictable things and messiness—unpredictable people, unknown environments, highly complex and evolving situations, ambiguous data—people will stay ahead of robots.

Whether it’s creative problem solving or the ability to read people, if the environment is fundamentally unpredictable, humans have the edge. And likely will for some time.

In fact, we found 4 themes where jobs for humans will thrive:

  • People: This includes jobs that rely on strong interpersonal skills like chief executives, school psychologists, social work teachers, and supervisors of a variety of trades.
  • Numbers: These are jobs that apply math to business problems, like economists, management analysts, and treasurers. (Doubtful)
  • Bugs and bad things: This includes human health-related jobs, like allergists, immunologists, and microbiologists, as well as environmentally oriented professions such as toxicologists. (As a second opinion?)
  • Spaces and structures: These are jobs that manage the physical world, like engineers and environmental scientists.

When work is unpredictable, humans are superior.

Our conclusions about their conclusions

In all of the studies, researchers had to grapple with the sheer level of uncertainty in the timing and degree of technological change. This is a conclusion in itself and a serious challenge for policy makers whose goal it is to plan for social support and education across generations.

Common across the studies was a recognition of a new kind of automation: one where machines learn at a scale and speed that has fundamentally changed the opportunity for AI systems to demonstrate creative, emotional, and social skills, skills previously thought to be solely human.

Machine-learning systems operate not as task-specification systems, but as goal-specification systems. This is important because it means that, increasingly, many automated systems adapt and reconfigure themselves on their own.

The biggest weakness of all the studies is that jobs aren’t islands; boundaries change. The story of automation is far more complex and beyond the reach of the models and the data we have at hand. Jobs rarely disappear. Instead, they transform into tasks as new technology and business models emerge. (Those not ready to be flexible in their skills disappear?)

None of these studies is able to forecast the impact of re-imagining scenarios of business process changes that fundamentally alter how an occupation functions.

None of them can take into account the “last mile” of a job, where the automation can be relied upon for 99% of the job but it still takes an on-the-job human to do the 1%.

None of them conveniently spit out what knowledge will be most valuable.

There are counterintuitive effects to automation such as how the value of a job changes after the automation of one component.

If a specific task in a job is automated, creating value through an increase in productivity, it tends to raise the value of the whole chain of tasks that make up that job. So investment in capabilities that can’t be automated will be a good investment.

Finally, there are new jobs. We are far from solving all the world’s problems and we have an insatiable appetite for more. Just because people today can’t think of the new jobs of tomorrow doesn’t mean someone else won’t.

🤖🤖🤖

A note on the data

The common data set used by many of the big studies is O*Net (Occupational Information Network). This is the best data, anywhere. It was built for the US Department of Labor, primarily to help people match things they care about (such as skills, knowledge, work style and work preferences) to occupations.

For every occupation, there is a different mix of knowledge, skills, and abilities for multiple activities and tasks.

When all of these are described and assigned standardized measures such as importance, frequency, and hierarchical level, the final O*Net model expands to more than 270 descriptors across more than 1,000 jobs.

Why does all this matter? Because this level of complexity is what it takes to make it fit for purpose.

The data isn’t gathered for the purpose of analyzing automation potential, so any and all automation modeling has to transform this complex and handcrafted dataset.

Subjective judgement of researchers or statistical manipulation of standard measures are the most important new inputs to a novel use of this data store.
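To make that transformation concrete, here is a rough sketch (not the method of any of the studies discussed here) of how O*Net-style importance and level descriptors might be collapsed into a single automation-exposure score for a job. The capability names, weights, and machine-ability estimates are invented for illustration.

```python
# A minimal sketch, not any study's actual method: collapse O*Net-style
# descriptors into one automation-exposure score. All names and numbers
# below are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Descriptor:
    name: str          # e.g. "Finger Dexterity"
    importance: float  # O*Net-style importance, rescaled to 0..1
    level: float       # O*Net-style level, rescaled to 0..1

def automation_exposure(descriptors, machine_ability):
    """Importance-weighted share of a job's capabilities that a machine
    is assumed to match (machine_ability maps name -> 0..1 estimate)."""
    total_weight = sum(d.importance for d in descriptors)
    if total_weight == 0:
        return 0.0
    matched = sum(
        d.importance * min(1.0, machine_ability.get(d.name, 0.0) / max(d.level, 1e-9))
        for d in descriptors
    )
    return matched / total_weight

job = [
    Descriptor("Oral Comprehension", importance=0.8, level=0.6),
    Descriptor("Finger Dexterity", importance=0.4, level=0.7),
    Descriptor("Social Perceptiveness", importance=0.9, level=0.8),
]
assumed_machine_ability = {"Oral Comprehension": 0.7,
                           "Finger Dexterity": 0.3,
                           "Social Perceptiveness": 0.2}

print(round(automation_exposure(job, assumed_machine_ability), 2))
```

Every choice in a sketch like this, from the rescaling to the weighting, is exactly the kind of judgement call the next paragraph is about.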

There’s a lot of room for fudging, personal bias and lies, damned lies. Absurd results can happen.

Previously, when the data was used to predict off-shorability, the model concluded that lawyers and judges could be offshored while data-entry keyers, telephone operators, and billing clerks never could be.

Still, it’s the best data available and if it’s good enough for designing jobs, it’s probably good enough for deconstructing them.

It’s all a question of how and where data is manipulated to fit the modeling goal.

🤖🤖🤖

Evaluating those studies in detail

Oxford University

Why it’s important

Novelty. It was the first of its kind, published five years ago. This study set up the conversation about jobs and robots.

Purpose

Analyze how susceptible jobs are to computerization.

Headline result

47% of total US employment at risk from automation.

Timing

2025-2035.

Primary job attributes considered

Nine proprietary capabilities. Three levels of human skill from labor data.

Limitations

Subjective input from a narrowly focused group; no economic analysis; no modeling of technology adoption or timing. Using labor data for all levels may overstate the impact on low-wage jobs.

What’s interesting

Two waves of automation with advancement in physical robots doing physical things first, then a second wave driven by breakthroughs in social perceptiveness and creativity. Specification of goals rather than specification of tasks will be the main indicator of computerization.

What’s useful

If you only go deep on one technical field, make it reinforcement learning.

The detail

This research, first published in 2013, kicked off the automation story with the finding that 47% of total US employment is at risk from automation. This was an academic study to figure out the number of jobs at risk. It turned out that it wasn’t realistic to pinpoint the number of jobs that would actually be automated, so instead they developed a model that calculated the probability of computerization of any given job.

The most important human touch was a binary yes/no assessment of the ability to automate a job. In a workshop at the University of Oxford, a handful of experts, probably clustered around a whiteboard, went through a sample list of 70 jobs, answering “yes” or “no” to the question: “Can the tasks of this job be sufficiently specified, conditional on the availability of big data, to be performed by state of the art computer-controlled equipment?”

We don’t know which jobs they chose; it’s safe to assume the people in the room were not experts in all 70 of them, nor do we know whether there was enough tea and biscuits on hand for them to think as deeply about job number 70 as about job number 1.

The researchers were super aware of the subjective nature of this step. The next step was designed to be more objective and involved ranking levels of human capabilities to find the nine capabilities that best matched the three engineering bottlenecks they were interested in: perception and manipulation, creativity, and social awareness. From this ranking, they were then able to apply statistical methods to come up with the probability of each capability being computerized, and therefore the probability of any whole job being automated.
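To make that pipeline concrete, here is a minimal sketch of the general approach: train a probabilistic classifier on a small set of hand-labelled jobs, then score every job's probability of computerization from its bottleneck-related features. The features, labels, and data below are synthetic stand-ins, and scikit-learn's Gaussian process classifier is used only because it is a readily available probabilistic classifier of the kind the paper describes; this is not the study's actual code or data.

```python
# Schematic sketch of an Oxford-style pipeline: a probabilistic classifier
# is trained on a handful of hand-labelled jobs (automatable yes/no) and
# then scores every job's probability of computerization. All data here
# is invented for illustration.

import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

rng = np.random.default_rng(0)

# Features: [perception_and_manipulation, creativity, social_awareness]
# (higher = the job leans more on that human bottleneck), 70 labelled jobs.
X_labelled = rng.uniform(0, 1, size=(70, 3))
# Pretend the workshop marked jobs low on all three bottlenecks as automatable.
y_labelled = (X_labelled.sum(axis=1) < 1.2).astype(int)

clf = GaussianProcessClassifier().fit(X_labelled, y_labelled)

# Score a full set of ~700 occupations (again, synthetic feature vectors).
X_all = rng.uniform(0, 1, size=(702, 3))
p_computerizable = clf.predict_proba(X_all)[:, 1]

print("share of jobs with p > 0.7:", (p_computerizable > 0.7).mean())
```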

The limitations of their approach are twofold. First, they looked at whole jobs. In the real world, whole jobs are not automated; parts of jobs are. It’s not possible to fully assess the effect of this on the final results—it’s all hidden in the stats—but it’s intuitive that analyzing whole jobs overstates the risk. Second, using “level” as the objective ranking mechanism introduces an important bias.

Machines and humans are good at different things. More importantly, what’s easy and “low level” for a human is often an insanely difficult challenge for a machine. Choosing “level” as the primary objective measure risks overstating the risk to low-wage, low-skill perception and manipulation-heavy jobs that operate in the uncontrolled complexity of the real world.

Given that the researchers would have been very aware of this effect—which is known as Moravec’s Paradox—it’s surprising that they didn’t specifically account for it in the methodology. It is potentially a significant distortion.

One more thing. The researchers did not take into account any dimensions of importance, frequency, cost, or benefit. So all capabilities, whether important or not, used every hour or once a year, highly paid or low wage, were treated the same and no estimates of technology adoption timelines were made.

So while this is a rigorous treatment of 702 jobs representing almost all of the US labor market, it has a limitation: it relied on a group of computer-science researchers assessing jobs they’d never done, at a moment when machine learning, robotics, and autonomous vehicles were top of mind and likely firmly inside their areas of expertise (as opposed to, say, voice, virtual assistants, and other emotional/social AI), and without any way of modeling adoption over time.

A figure of 47% “potentially automatable over some unspecified number of years, perhaps a decade or two” leaves us hanging for more insight on the bottlenecks they saw and when they saw them being overcome.

Perhaps their most important contribution is their crisp articulation of the importance of goal specification. Prior waves of automation relied on human programmers meticulously coding tasks. Now, with machine learning, particularly with significant progress being made in reinforcement learning, the important insight is that it’s far more important to be able to specify the goal for an AI than to input defined tasks for the AI to perform. In many circumstances, there are now the tools for machines to figure out how to get there on their own. Creative indeed.
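A toy example makes the distinction tangible. In the sketch below, the agent is given only a goal (a reward for reaching the end of a corridor), never a list of tasks, and simple tabular Q-learning discovers the behaviour on its own. It is purely illustrative and not tied to any of the studies.

```python
# Toy illustration of goal specification: the agent is told only where the
# reward is, never the sequence of moves, and tabular Q-learning works the
# rest out by itself. Entirely illustrative.

import random

N = 5                     # a 1-D corridor of 5 cells, goal at the right end
ACTIONS = [-1, +1]        # move left / move right
GOAL = N - 1

Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(500):
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N - 1)
        r = 1.0 if s_next == GOAL else 0.0            # the goal, not the task
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in ACTIONS) - Q[(s, a)])
        s = s_next

# The learned policy heads right without ever being told to.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)])
```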

McKinsey Global Institute

Why it’s important

Comprehensiveness. The deep and broad analysis tapped into extensive knowledge. It was also the first study to look for correlations between human capabilities and job activities.

Purpose

Analyze the automation potential of the global economy.

Headline result

51% of activities in the US susceptible to automation, representing $2.7 trillion in wages.

Timing

2035-2075.

Primary job attributes considered

18 proprietary capabilities at four levels of human skill. Two thousand activities from labor data.

Limitations

Doesn’t take into account importance of skills to a job, thereby limiting the economic evaluation at a job level.

What’s interesting

Automation is most attractive in highly structured and predictable environments, and there are more of those than you might guess: accommodation, food service, and retail.

What’s useful

Largest near-term opportunities will be extending business-as-usual (not always sexy) automation (such as data collection and processing) into more places. Policy responses will be challenging given the high degree of uncertainty in timing.

The detail

The heavyweights of business analysis, MGI published their report in early 2017 analyzing the automation potential of the global economy, including productivity gains.

It’s comprehensive and the analytical process is extensive. They recognized the weakness of analyzing whole jobs and, instead, used O*Net activities as a proxy for partial jobs.

They also introduced adoption curves for technology so they could not only report on what’s possible but also on what’s practical. As such, their conclusions were more nuanced with around 50% of all activities (not jobs), representing $2.7 trillion in wages in the US, being automatable. They found that less than 5% of whole jobs were capable of being fully automated. Adoption timing shows a huge variance with the 50% level reached in around 2055—plus or minus 20 years.
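The shape of such an adoption scenario is easy to sketch. The snippet below is not MGI's model; it is just a generic logistic curve with an assumed midpoint of 2055 and an arbitrary steepness, to show how "50% adoption around 2055, plus or minus 20 years" can be framed.

```python
# Not MGI's model: a generic logistic adoption curve with an assumed
# midpoint (2055) and steepness, purely to frame the timing scenario.

import math

def adoption(year, midpoint=2055, steepness=0.15):
    """Assumed share of automatable activity actually automated by a year."""
    return 1.0 / (1.0 + math.exp(-steepness * (year - midpoint)))

for year in (2025, 2035, 2045, 2055, 2065, 2075):
    print(year, f"{adoption(year):.0%}")
```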

MGI took around 800 jobs (from O*Net) and their related 2,000 activities, which they then broke into 18 capabilities. These 18 capabilities, with four levels each, were uniquely designed by the MGI team. This capability/level framework is at the core of the technological potential analysis.

These 18 capabilities are perhaps the most important human-touch point of the MGI analysis. “Academic research, internal expertise and industry experts” informed this framework.

Their framework offers a far more appropriate description of human skill-level in relation to automation than does the O*Net data. This framework was then used by experts to train a machine learning algorithm and apply the capabilities across 2,000 O*Net activities to create a score for each activity.

There’s some “secret sauce” at work here. It’s impossible for any outsider to tell how capability levels or automation potential are assigned against activities. It’s a mix of human, machine, and consulting nuance.

Finally, to analyze technical potential, they developed “progression scenarios” for each capability. This step must have taken quite some effort.

Surveys, extrapolation of metrics, interviews with experts, recent commercial applications, press reports, patents, technical publications, and Moore’s Law all went into the mix. Perhaps there was some key factor in the model that got tweaked at the last minute by an individual analyst. We’ll never know. Nevertheless, they are experts with access to vast resources of academic expertise and they have a ton of practical operating chops.

In the second major stage of their analysis, they created adoption timelines.

Here, they use data from 100 automation solutions that have already been developed and create solution times for the 18 capabilities. To assess the impact of automation across industries and jobs, they use proxies from example jobs (there’s a lot of expert consulting input to this) to convert the frequency of an activity into time spent in a job, leading finally to economic impact by industry, activity, and job.

This is the sort of modelling that only a handful of groups can pull off. With so many inputs and the creation of databases as well as models, it would be daunting to recreate.

Weaknesses?

The MGI analysis has two important limitations.

First, by using only the activities from O*Net and defining their own capabilities, they miss the rich detail of how important a given capability is for a job; instead, all capabilities are treated as equally important. This may have the effect of underestimating the incentive to automate particular activities in higher-wage jobs where the importance of a capability is high, say, an interface between professionals and clients.

Second, how they determined adoption timelines is completely opaque to an outsider. But, because of the huge uncertainty of the 40-year span, it doesn’t really matter.

What’s important is that one of the world’s premier analytical agencies has been able to demonstrate just how uncertain this all is. The takeaway is that there’s no way to get a sense overall of when the breakthroughs may happen and how they may affect jobs. The most difficult job now? Being a policymaker developing long-range plans in such an uncertain techno-socio-political environment.

A key piece of information that is easily overlooked in the MGI report is how much more there is to harvest from current systems and how big the opportunity is to make business technology interfaces more human and more seamless.

Maybe it just doesn’t sound sexy when “collecting and processing data” is put up against other, more exciting ideas but these activities consume vast amounts of time and it’s a safe bet that it’s one of the most boring parts of many people’s jobs.

Even with the Internet of Things and big data technologies, there’s still an enormous amount of data work that’s done by human hand, consuming hours of time per day, before people get on with the real work of making decisions and taking action. With advances in conversational AI and vision interfaces, we would expect to see an explosion in developments specifically to help people better wrangle data.

Intelligentsia.ai

Why it’s important

It’s ours. We know its flaws, shortcuts, and fudges.

Purpose

Analyze the opportunity to invest in automation. That is, to create machine employees.

Headline result

Market opportunity of $1.3 trillion over the next ten years. In that time, up to 46% of current capabilities offer attractive investment opportunities.

Timing

2026-2036.

Primary job attributes considered

128 capabilities (skills, abilities, activities) from labor data. Importance of capabilities from labor data.

Limitations

No accounting for time spent in a job. Subjective projections of technical progress. No accounting of dynamic changes in value as automation changes a job.

What’s interesting

Largest market opportunities and greatest value add are different. Communication drives value-add while product opportunities are in management and planning.

What’s useful

Be hyper aware of the frontier of emotionally aware AI: it will be a breakthrough when an AI can respond with the appropriate emotion to a person in a difficult situation or high emotional state with the intent of influencing or persuading them to take action.

The detail

At Intelligentsia.ai, we were fascinated by the debate over automation of jobs and decided to do our own analysis of the opportunity to invest in automation, that is, invest in some new kind of machine employee. We, too, turned to O*Net. “Level” was required but not enough to really understand the incentive to invest; we needed both level and importance.

Our methodology did not employ any statistical methods or machine learning. We had to scale everything by hand. Our subjective assessments were primarily predictions of what a machine can do today versus what we think a machine will be able to do in 20 years. This relied on both research expertise and considered opinion plus the concentration it took to assess and rank 128 capabilities.

Our view is that there is more intra-technology uncertainty than there is inter-technology uncertainty. That is, there’s more chance of being completely wrong by forecasting a single technology than across a set of technologies so we felt comfortable that technology forecasting uncertainty would broadly average out across the analysis. However, it’s the biggest weakness in our analysis, primarily because it would be highly unlikely that we or anybody else could reproduce our technology capability curves.

We used these forecasts to determine when a machine could match each capability within a job. This allowed us to create an attractiveness ranking, using both importance and skill, for each job to which we could apply a dollar figure. From there it was an Excel number crunch to create a list of the most attractive AI capabilities to invest in and the most likely jobs to be impacted.
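For flavour, here is a stripped-down sketch of that kind of spreadsheet crunch: score each capability by its importance and by how close a forecast machine level comes to the human level, then attach a wage figure to size the opportunity. Every number and capability name below is invented, and the scoring rule is an assumption rather than our actual model.

```python
# A rough sketch of an attractiveness ranking built from importance and
# forecast machine skill. All names and figures are invented.

capabilities = [
    # (name, importance 0..1, human level 0..1, forecast machine level 0..1)
    ("Combining information", 0.9, 0.7, 0.8),
    ("Reading people",        0.8, 0.8, 0.3),
    ("Routine data entry",    0.4, 0.3, 0.9),
]
wage_bill = 50_000_000_000  # hypothetical wages tied to these capabilities

def attractiveness(importance, human_level, machine_level):
    # Attractive when the capability matters and a machine can match the human.
    return importance * min(1.0, machine_level / max(human_level, 1e-9))

scores = {name: attractiveness(i, h, m) for name, i, h, m in capabilities}
total = sum(scores.values())
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:22s} score={score:.2f}  opportunity≈${wage_bill * score / total:,.0f}")
```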

We found a market opportunity for machine employees of $1.3 trillion in the US.

Because we weren’t trying to determine the jobs at risk, we didn’t get to a set percentage. However, we did find the percentage of capabilities where a machine could perform the role as well as a human to max out at around 46% in 10 years and 62% in 20 years. Most jobs were significantly less than this.

From our admittedly biased perspective, the most useful part of our analysis was that it helped us home in on the best opportunities for investing in AI in the next 10 years. If you’re an entrepreneur and want to create products with the greatest market opportunity, invest in AI for combining information and choosing methods to solve problems, as well as emotionally intelligent AI that can assist people in caring for others or motivating people.

If you’re an “entrepreneur” and looking for the highest value-add inside a company, invest in AI that listens and speaks in a socially perceptive way as well as the next generation of insight discovery AI for analysis and decision support.

Why the death of Moore’s Law could give birth to more human-like machines

Futurists and AI experts reveal why the demise of Moore’s Law will help us jump from AI to natural machine intelligence


For decades, chip makers have excelled in meeting a recurring set of deadlines to double their chip performance.

These deadlines were set by Moore’s Law and have been in place since the 1960s. But as our demand for smaller, faster and more efficient devices soars, many are predicting the Moore’s Law era might finally be over.

Only last month, a report from the International Technology Roadmap for Semiconductors (ITRS) – which includes chip giants Intel and Samsung – claimed transistors will get to a point where they can shrink no further by as soon as 2021.

The companies argue that, by that time, it will be no longer economically viable to make them smaller.

But while the conventional thinking is that the law’s demise would be bad news, it could have its benefits – namely fuelling the rise of AI.


Why the death of Moore’s Law could give birth to more human-like machines… Or not.

“If you care about the development of artificial intelligence, you should pray for that prediction to be true,” John Smart, a prominent futurist and writer told WIRED.

“Moore’s law ending allows us to jump from artificial machine intelligence – a top down, human engineered approach; to natural machine intelligence – one that is bottom up and self-improving.”

As AIs no longer emerge from explicitly programmed designs, engineers are focused on building self-evolving systems like deep learning, an AI technique modelled on biological systems.

In his Medium series, Smart is focusing attention on a new suite of deep learning-powered chat-bots, similar to Microsoft’s Cortana and Apple’s Siri, that are emerging as one of the most notable IT developments in the coming years.

“Conversational smart agents, particularly highly personalised ones that I call personal sims, are the most important software advance I can foresee in the coming generation, as they promise to be so individually useful to everyone who employs them,” Smart continued.

These may soon be integrating with our daily lives as they come to know our preferences and take over routine tasks. Many in Silicon Valley already use these AIs to manage increasingly busy schedules, while others claim they may soon bring an end to apps.

To get there, chatbots will need to become intelligent. As a result, companies are relying on deep learning neural nets: algorithms made to approximate the way neurons in the brain process information.

The challenge for AI engineers, however, is that brain-inspired deep learning requires processing power far beyond today’s consumer chip capabilities.

In 2012, when a Google neural net famously taught itself to recognise cats, the system required the computer muscle of 16,000 processors spread across 1,000 different machines.

More recently, AI researchers have turned to the processing capabilities of GPUs, the chips found in the graphics cards used for video games.

The benefit of GPUs is they allow for more parallel computing, a type of processing in which computational workload is divided among several processors at the same time.

As data processing tasks are chunked into smaller pieces, computers can divide the workload across their processing units. This divide and conquer approach is critical to the development of AI.
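A tiny sketch shows the divide-and-conquer idea in ordinary code: split the workload into chunks, hand the chunks to a pool of workers, and combine the partial results. GPUs push the same idea to thousands of cores; the example below just uses a handful of CPU processes.

```python
# A minimal divide-and-conquer sketch: one workload, split into chunks,
# processed by a pool of workers in parallel, then recombined.

from concurrent.futures import ProcessPoolExecutor

def work(chunk):
    # Stand-in for a compute-heavy kernel (e.g. one slice of a matrix multiply).
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::8] for i in range(8)]          # divide
    with ProcessPoolExecutor(max_workers=8) as pool:
        partials = list(pool.map(work, chunks))      # conquer in parallel
    print(sum(partials))                             # combine
```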

“High-end computing will be all about how much parallelism can be stuffed on a chip,” said Naveen Rao, chief executive of Nervana Systems – a company working to build chips tailor-made for deep learning.

Rao believes the chip industry has relied on a strict definition of chip innovation. “We stopped doing chip architecture exploration and essentially moved everything into software innovation. This can be limiting as hardware can do things that software can’t.”

By piggybacking off the advances of video game technology, AI researchers found a short-term solution, but market conditions are now suitable for companies like Nervana Systems to innovate. “Things like our chip are a product of Moore’s law ending,” says Rao.

Rao’s chip design packages the parallel computing of graphic cards into their hardware, without unnecessary features like caches – a component used to store data on GPUs.

Instead, the chip moves data around very quickly, while leveraging more available computing power. According to Rao, the result will be a chip able to run deep learning algorithms “with much less power per operation and have many more operational elements stuffed on the chip.”

“Individual exponentials [like Moore’s Law] always end,” Smart added. “But when a critical exponential ends, it creates opportunities for new exponentials to emerge.”

For Smart, Moore’s Law ending is old news.

Back in 2005, engineers reached the limits of Dennard scaling: while chips were continuing to shrink, they started leaking current and getting too hot, forcing chip makers to build multi-core CPUs rather than continue shrinking transistors. This heat build-up issue is exactly the sort of challenge that chip designs like Nervana’s promise to address.

“As exponential miniaturisation ended in 2005, we created the opportunity for exponential parallelisation.” For Smart, that’s the new exponential trend to watch out for. We just don’t have a law named for it yet.

Vision of crimes in the future? Any worse in quality of pain or numbers?

I study the future of crime and terrorism, and I’m afraid. I’m afraid of what I see.


The world is becoming increasingly open, and that has implications both bright and dangerous. (ted.com, by Marc Goodman)

I sincerely want to believe that technology can bring us the techno-utopia that we’ve been promised.

But, you see, I’ve spent a career in law enforcement, and that’s informed my perspective on things. I’ve been a street police officer, an undercover investigator, a counter-terrorism strategist, and I’ve worked in more than 70 countries around the world. I’ve had to see more than my fair share of violence and the darker underbelly of society, and that’s informed my opinions.

My work with criminals and terrorists has actually been highly educational. They have taught me a lot, and I’d like to be able to share some of these observations with you.

Today I’m going to show you the flip side of all those technologies that we marvel at, the ones that we love. In the hands of the TED community, these are awesome tools which will bring about great change for our world, but in the hands of suicide bombers, the future can look quite different.

I started observing technology and how criminals were using it as a young patrol officer. In those days, this was the height of technology. Laugh though you will, all the drug dealers and gang members with whom I dealt had one of these long before any police officer I knew did.

Twenty years later, criminals are still using mobile phones, but they’re also building their own mobile phone networks, like this one, which has been deployed in all 31 states of Mexico by the narcos.

They have a national encrypted radio communications system. Think about that. Think about the innovation that went into that. Think about the infrastructure to build it. And then think about this: Why can’t I get a cell phone signal in San Francisco? (Laughter) How is this possible? (Laughter) It makes no sense. (Applause)

We consistently underestimate what criminals and terrorists can do. Technology has made our world increasingly open, and for the most part, that’s great, but all of this openness may have unintended consequences.

Consider the 2008 terrorist attack on Mumbai.

The men who carried out that attack were armed with AK-47s, explosives and hand grenades. They threw these hand grenades at innocent people as they sat eating in cafes and waited to catch trains on their way home from work. But heavy artillery is nothing new in terrorist operations. Guns and bombs are nothing new.

What was different this time is the way that the terrorists used modern information communications technologies to locate additional victims and slaughter them. They were armed with mobile phones. They had BlackBerries.

They had access to satellite imagery. They had satellite phones, and they even had night vision goggles.

But perhaps their greatest innovation was this. We’ve all seen pictures like this on television and in the news. This is an operations center. And the terrorists built their very own op center across the border in Pakistan, where they monitored the BBC, al Jazeera, CNN and Indian local stations. They also monitored the Internet and social media to track the progress of their attacks and how many people they had killed. They did all of this in real time.

The innovation of the terrorist operations center gave terrorists unparalleled situational awareness and tactical advantage over the police and over the government. What did they do with this? They used it to great effect.

At one point during the 60-hour siege, the terrorists were going room to room trying to find additional victims. They came upon a suite on the top floor of the hotel, and they kicked down the door and they found a man hiding by his bed. And they said to him, “Who are you, and what are you doing here?” And the man replied, “I’m just an innocent schoolteacher.”

Of course, the terrorists knew that no Indian schoolteacher stays at a suite in the Taj. They picked up his identification, and they phoned his name in to the terrorist war room, where the terrorist war room Googled him, and found a picture and called their operatives on the ground and said, “Your hostage, is he heavyset? Is he bald in front? Does he wear glasses?” “Yes, yes, yes,” came the answers.

The op center had found him and they had a match. He was not a schoolteacher. He was the second-wealthiest businessman in India, and after discovering this information, the terrorist war room gave the order to the terrorists on the ground in Mumbai. (“Kill him.”)

We all worry about our privacy settings on Facebook, but the fact of the matter is, our openness can be used against us. Terrorists are doing this. A search engine can determine who shall live and who shall die. This is the world that we live in.

During the Mumbai siege, terrorists were so dependent on technology that several witnesses reported that as the terrorists were shooting hostages with one hand, they were checking their mobile phone messages with the other.

In the end, 300 people were gravely wounded and over 172 men, women and children lost their lives that day.

Think about what happened. During this 60-hour siege on Mumbai, 10 men armed not just with weapons, but with technology, were able to bring a city of 20 million people to a standstill. Ten people brought 20 million people to a standstill, and this traveled around the world. This is what radicals can do with openness.

This was done nearly four years ago. What could terrorists do today with the technologies available that we have? What will they do tomorrow?

The ability of one to affect many is scaling exponentially, and it’s scaling for good and it’s scaling for evil.

It’s not just about terrorism, though. There’s also been a big paradigm shift in crime. You see, you can now commit more crime as well. In the old days, it was a knife and a gun. Then criminals moved to robbing trains. You could rob 200 people on a train, a great innovation. Moving forward, the Internet allowed things to scale even more.

In fact, many of you will remember the recent Sony PlayStation hack. In that incident, over 100 million people were robbed. Think about that. When in the history of humanity has it ever been possible for one person to rob 100 million?

Of course, it’s not just about stealing things. There are other avenues of technology that criminals can exploit.

Many of you will remember this super cute video from the last TED, but not all quadcopter swarms are so nice and cute. They don’t all have drumsticks. Some can be armed with HD cameras and do countersurveillance on protesters, or, as in this little bit of movie magic, quadcopters can be loaded with firearms and automatic weapons.

Little robots are cute when they play music to you. When they swarm and chase you down the block to shoot you, a little bit less so.

Criminals and terrorists weren’t the first to give guns to robots. We know where that started. But they’re adapting quickly. Recently, the FBI arrested an al Qaeda affiliate in the United States, who was planning on using these remote-controlled drone aircraft to fly C4 explosives into government buildings in the United States. By the way, these travel at over 600 miles an hour.

Every time a new technology is being introduced, criminals are there to exploit it. We’ve all seen 3D printers. We know with them that you can print in many materials ranging from plastic to chocolate to metal and even concrete.

With great precision I actually was able to make this just the other day, a very cute little ducky. But I wonder to myself, for those people that strap bombs to their chests and blow themselves up, how might they use 3D printers?

You see, if you can print in metal, you can print one of these, and in fact you can also print one of these too. The UK I know has some very strict firearms laws. You needn’t bring the gun into the UK anymore. You just bring the 3D printer and print the gun while you’re here, and, of course, the magazines for your bullets.

But as these get bigger in the future, what other items will you be able to print? The technologies are allowing bigger printers.

As we move forward, we’ll see new technologies also, like the Internet of Things. Every day we’re connecting more and more of our lives to the Internet, which means that the Internet of Things will soon be the Internet of Things To Be Hacked.

All of the physical objects in our space are being transformed into information technologies, and that has a radical implication for our security, because more connections to more devices means more vulnerabilities. Criminals understand this. Terrorists understand this. Hackers understand this. If you control the code, you control the world. This is the future that awaits us.

There has not yet been an operating system or a technology that hasn’t been hacked.

That’s troubling, since the human body itself is now becoming an information technology. As we’ve seen here, we’re transforming ourselves into cyborgs.

Every year, thousands of cochlear implants, diabetic pumps, pacemakers and defibrillators are being implanted in people.

In the United States, there are 60,000 people who have a pacemaker that connects to the Internet. The defibrillators allow a physician at a distance to give a shock to a heart in case a patient needs it. But if you don’t need it, and somebody else gives you the shock, it’s not a good thing.

Of course, we’re going to go even deeper than the human body. We’re going down to the cellular level these days.

Up until this point, all the technologies I’ve been talking about have been silicon-based, ones and zeroes, but there’s another operating system out there: the original operating system, DNA. And to hackers, DNA is just another operating system waiting to be hacked. It’s a great challenge for them. There are people already working on hacking the software of life, and while most of them are doing this to great good and to help us all, some won’t be.

So how will criminals abuse this? Well, with synthetic biology you can do some pretty neat things.

For example, I predict that we will move away from a plant-based narcotics world to a synthetic one. Why do you need the plants anymore? You can just take the DNA code from marijuana or poppies or coca leaves and cut and paste that gene and put it into yeast, and you can take those yeast and make them make the cocaine for you, or the marijuana, or any other drug.

So how we use yeast in the future is going to be really interesting. In fact, we may have some really interesting bread and beer as we go into this next century.

The cost of sequencing the human genome is dropping precipitously. It was proceeding at Moore’s Law pace, but then in 2008, something changed. The technologies got better, and now DNA sequencing is proceeding at a pace five times that of Moore’s Law. That has significant implications for us. 
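As a back-of-the-envelope illustration of what "five times the pace of Moore's Law" means for costs, the sketch below assumes Moore's-Law-style improvement halves cost roughly every two years and compares it with a pace five times faster. The starting cost and time horizon are arbitrary assumptions, not figures from the talk.

```python
# Back-of-the-envelope: if Moore's-Law-style improvement halves cost roughly
# every two years, a pace "five times" that halves it every ~0.4 years.
# Starting cost and horizon are arbitrary illustrative numbers.

def cost_after(years, start_cost, halving_period_years):
    return start_cost * 0.5 ** (years / halving_period_years)

start = 10_000_000  # hypothetical starting cost in dollars
for years in (2, 4, 8):
    moore = cost_after(years, start, 2.0)
    faster = cost_after(years, start, 2.0 / 5)
    print(f"after {years} yrs: Moore-pace ≈ ${moore:,.0f}, 5x-pace ≈ ${faster:,.0f}")
```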

It took us 30 years to get from the introduction of the personal computer to the level of cybercrime we have today, but looking at how biology is proceeding so rapidly, and knowing criminals and terrorists as I do, we may get there a lot faster with biocrime in the future. It will be easy for anybody to go ahead and print their own bio-virus, enhanced versions of ebola or anthrax, weaponized flu.

We recently saw a case where some researchers made the H5N1 avian influenza virus more potent. It already has a 70 percent mortality rate if you get it, but it’s hard to get. Engineers, by making a small number of genetic changes, were able to weaponize it and make it much easier for human beings to catch, so that not thousands of people would die, but tens of millions.

You see, you can go ahead and create new pandemics, and the researchers who did this were so proud of their accomplishments, they wanted to publish it openly so that everybody could see this and get access to this information.

But it goes deeper than that. DNA researcher Andrew Hessel has pointed out quite rightly that if you can use cancer treatments, modern cancer treatments, to go after one cell while leaving all the other cells around it intact, then you can also go after any one person’s cell.

Personalized cancer treatments are the flip side of personalized bioweapons, which means you can attack any one individual, including all the people in this picture. How will we protect them in the future?

What to do? What to do about all this? That’s what I get asked all the time. For those of you who follow me on Twitter, I will be tweeting out the answer later on today. (Laughter)

Actually, it’s a bit more complex than that, and there are no magic bullets. I don’t have all the answers, but I know a few things. In the wake of 9/11, the best security minds put together all their innovation and this is what they created for security. If you’re expecting the people who built this to protect you from the coming robopocalypse — (Laughter) — uh, you may want to have a backup plan. (Laughter) Just saying. Just think about that. (Applause)

Law enforcement is currently a closed system. It’s nation-based, while the threat is international.

Policing doesn’t scale globally. At least, it hasn’t, and our current system of guns, border guards, big gates and fences is outdated in the new world into which we’re moving. So how might we prepare for some of these specific threats, like attacking a president or a prime minister?

This would be the natural government response, to hide away all our government leaders in hermetically sealed bubbles. But this is not going to work. The cost of doing a DNA sequence is going to be trivial. Anybody will have it and we will all have them in the future.

So maybe there’s a more radical way that we can look at this. What happens if we were to take the President’s DNA, or a king or queen’s, and put it out to a group of a few hundred trusted researchers so they could study that DNA and do penetration testing against it as a means of helping our leaders?

Or what if we sent it out to a few thousand? Or, controversially, and not without its risks, what happens if we just gave it to the whole public? Then we could all be engaged in helping.

We’ve already seen examples of this working well. The Organized Crime and Corruption Reporting Project is staffed by journalists and citizens who crowd-source what dictators and terrorists are doing with public funds around the world, and, in a more dramatic case, we’ve seen it in Mexico, a country that has been racked by 50,000 narcotics-related murders in the past six years.

They’re killing so many people they can’t even afford to bury them all in anything but these unmarked graves like this one outside of Ciudad Juarez. What can we do about this? The government has proven ineffective. So in Mexico, citizens, at great risk to themselves, are fighting back to build an effective solution. They’re crowd-mapping the activities of the drug dealers.

Whether or not you realize it, we are at the dawn of a technological arms race, an arms race between people who are using technology for good and those who are using it for ill. The threat is serious, and the time to prepare for it is now. I can assure you that the terrorists and criminals are.

My personal belief is that, rather than having a small, elite force of highly trained government agents here to protect us all, we’re much better off having average and ordinary citizens approaching this problem as a group and seeing what we can do.

If we all do our part, I think we’ll be in a much better space. The tools to change the world are in everybody’s hands. How we use them is not just up to me, it’s up to all of us.

This was a technology I would frequently deploy as a police officer. This technology has become outdated in our current world. It doesn’t scale, it doesn’t work globally, and it surely doesn’t work virtually.

We’ve seen paradigm shifts in crime and terrorism.

They call for a shift to a more open and more participatory form of law enforcement. So I invite you to join me.

After all, public safety is too important to leave to the professionals.

The World in 2025: 8 Predictions for the Next 10 Years

May 11, 2015

In 2025, in accordance with Moore’s Law (lay it all on this law in matters of technology), we’ll see an acceleration in the rate of change as we move closer to a world of true abundance. (Behavioral change?)

Here are eight areas where we’ll see extraordinary transformation in the next decade:

1. A $1,000 Human Brain

In 2025, $1,000 should buy you a computer able to calculate at 10^16 cycles per second (10,000 trillion cycles per second), the equivalent processing speed of the human brain.
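The arithmetic behind that claim can be checked in a few lines. The sketch below assumes a 2015 baseline of roughly 10^13 cycles per second for $1,000 of hardware (an assumption of mine, not a figure from the article) and asks how fast compute per dollar would have to double to reach 10^16 by 2025.

```python
# Back-of-the-envelope arithmetic behind "$1,000 buys human-brain-scale
# compute by 2025". The 1e13 baseline for $1,000 of hardware in 2015 is
# an assumed figure, used only to show the calculation.

import math

target = 1e16          # cycles/sec claimed for the human brain
baseline_2015 = 1e13   # assumed cycles/sec per $1,000 in 2015
doublings_needed = math.log2(target / baseline_2015)
years_available = 2025 - 2015

print(f"doublings needed: {doublings_needed:.1f}")
print(f"required doubling time: {years_available / doublings_needed:.1f} years")
```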

2. A Trillion-Sensor Economy

The Internet of Everything describes the networked connections between devices, people, processes and data.

By 2025, the IoE will exceed 100 billion connected devices, each with a dozen or more sensors collecting data.

This will lead to a trillion-sensor economy driving a data revolution beyond our imagination.

Cisco’s recent report estimates the IoE will generate $19 trillion of newly created value.

(I doubt the NSA will be pleased: there are not enough analysts to process all this massive collection of data, unless the huge data center in Utah was planned for that many sensors)

3. Perfect Knowledge

We’re heading towards a world of perfect knowledge.

With a trillion sensors gathering data everywhere (autonomous cars, satellite systems, drones, wearables, cameras), you’ll be able to know anything you want, anytime, anywhere, and query that data for answers and insights.

(A vast difference between retrieving facts and comprehending the mechanism of any knowledge based system)

4. 8 Billion Hyper-Connected People

Facebook (Internet.org), SpaceX, Google (Project Loon), Qualcomm and Virgin (OneWeb) are planning to provide global connectivity to every human on Earth at speeds exceeding one megabit per second. (People will prefer to subscribe to Chinese platforms in order to avoid spying by NSA on personal communication)

We will grow from three to eight billion connected humans, adding five billion new consumers into the global economy.

They represent tens of trillions of new dollars flowing into the global economy. And they are not coming online like we did 20 years ago with a 9600 modem on AOL.

They’re coming online with a 1 Mbps connection and access to the world’s information on Google, cloud 3D printing, Amazon Web Services, artificial intelligence with Watson, crowdfunding, crowdsourcing, and more.

(How will this progress save the billion humans suffering from malnutrition and famine?)

5. Disruption of Healthcare

Existing healthcare institutions will be crushed as new business models with better and more efficient care emerge.

Thousands of startups, as well as today’s data giants (Google, Apple, Microsoft, SAP, IBM, etc.) will all enter this lucrative $3.8 trillion healthcare industry with new business models that dematerialize, demonetize and democratize today’s bureaucratic and inefficient system.

Biometric sensing (wearables) and AI will make each of us the CEOs of our own health.

Large-scale genomic sequencing and machine learning will allow us to understand the root cause of cancer, heart disease and neurodegenerative disease and what to do about it. Robotic surgeons can carry out an autonomous surgical procedure perfectly (every time) for pennies on the dollar.

Each of us will be able to regrow a heart, liver, lung or kidney when we need it, instead of waiting for the donor to die.

(And the cost? How many will still be able to afford it as monthly retirement shrinks, according to  Moore’s law?)

6. Augmented and Virtual Reality

Billions of dollars invested by Facebook (Oculus), Google (Magic Leap), Microsoft (Hololens), Sony, Qualcomm, HTC and others will lead to a new generation of displays and user interfaces.

The screen as we know it — on your phone, your computer and your TV — will disappear and be replaced by eyewear.

Not the geeky Google Glass, but stylish equivalents to what the well-dressed fashionistas are wearing today.

The result will be a massive disruption in a number of industries ranging from consumer retail, to real estate, education, travel, entertainment, and the fundamental ways we operate as humans.

7. Early Days of JARVIS

Artificial intelligence research will make strides in the next decade. If you think Siri is useful now, the next decade’s generation of Siri will be much more like JARVIS from Iron Man, with expanded capabilities to understand and answer.

Companies like IBM Watson, DeepMind and Vicarious continue to hunker down and develop next-generation AI systems.

In a decade, it will be normal for you to give your AI access to listen to all of your conversations, read your emails and scan your biometric data because the upside and convenience will be so immense.

8. Blockchain

If you haven’t heard of the blockchain, I highly recommend you read up on it.

You might have heard of bitcoin, which is the decentralized (global), democratized, highly secure cryptocurrency based on the blockchain.

But the real innovation is the blockchain itself, a protocol that allows for secure, direct (without a middleman), digital transfers of value and assets (think money, contracts, stocks, IP). Investors like Marc Andreessen have poured tens of millions into its development and believe this is as important an opportunity as the creation of the Internet itself.

(The intelligence agencies will outpace all these cryptocurrency programs)
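For readers who have not yet read up on it, a toy hash chain captures the core idea described above: each block commits to the hash of the previous one, so altering any past record breaks every later link. This is a teaching sketch, not Bitcoin's actual block format.

```python
# A toy hash chain: each block commits to the previous block's hash, so
# tampering with history is detectable. Not Bitcoin's real data structures.

import hashlib, json

def make_block(prev_hash, payload):
    block = {"prev": prev_hash, "payload": payload}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block("0" * 64, "genesis")
b1 = make_block(genesis["hash"], {"from": "alice", "to": "bob", "amount": 5})
b2 = make_block(b1["hash"], {"from": "bob", "to": "carol", "amount": 2})

# Tampering with b1's payload changes its hash, so b2 no longer points to it.
b1["payload"]["amount"] = 500
recomputed = hashlib.sha256(json.dumps({"prev": b1["prev"], "payload": b1["payload"]},
                                       sort_keys=True).encode()).hexdigest()
print("chain still valid:", recomputed == b2["prev"])  # False
```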

Bottom Line: We Live in the Most Exciting Time Ever

We are heading toward incredible times where the only constant is change, and the rate of change is increasing.


Bitcoin 2014? Any different from the previous years?

I decided to re-post this selection of top posts in order to understand what Bitcoin is, the jargon associated with it, and particularly how heavily China is engaged in that business.

With all the speculation about Bitcoin and the exciting 2013 behind us, I (of Lightspeed Venture Partners India) thought that a list of predictions for 2014 would be a good way to start this year.

These predictions are based on growth patterns of similar networks, the traction in various ecosystem activities last year and my conversations with various Bitcoin enthusiasts.

So here are my Top 10 predictions for Bitcoin 2014.

January 13, 2014

Bitcoin 2014 – Top 10 predictions

1. More than $100M of venture capital will flow into Bitcoin start-ups.

This pool of capital will be distributed across local/global exchange start-ups (e.g. BTC China*), merchant related services (e.g. Bitpay), wallet services (e.g. Coinbase) and a host of other innovative start-ups.

A large chunk of the capital is likely to flow into start-ups which have emerged as winners in their respective segments with a majority of the market share. Building exchange liquidity and a merchant network is tough.

These businesses are likely to command high valuations as well. There would be plenty of money available for start-ups trying to solve a plethora of other challenges (e.g. private insurance, security), that exist with Bitcoin growth and adoption today.

2. Mining ‘will not’ be dead (guess it is about mining data?)

A lot of press notes and individual viewpoints state that mining is dead, as we are already in the petahash domain and are restricted by Moore’s law from a technological standpoint.

I believed this until I heard Butterfly Labs and HighBitcoin talk about how enterprises can potentially adopt mining.

With transactions and transaction fees rising, it would be highly profitable for large enterprises to have data centers with mining equipment to process daily transactions.

The medium enterprises, who cannot invest in capital expenditure, would resort to cloud based mining.

Finally, the small enterprises would have to pay the transaction fees to the network. These fees would still be lower than those of Visa and Mastercard. In conclusion, we can potentially witness investment from large and medium enterprises in mining farms as early as the end of 2014.

3. There will be less than 5 alt-coins (out of the 50+ in existence) that will survive 2014

The open source nature of the Bitcoin protocol led to the advent of over 50 alt-coins, most of which are blatant rip-offs with a tweak or two here and there. These can be divided into three categories:

  1. Coins which are Ponzi schemes, where the sole purpose of the inventor is to drive the price of the alt-coin up and then dump it
  2. Coins which can be mined easily and can have potentially more liquidity than Bitcoin
  3. Coins, which are based on a fundamental innovation and can result in specific adoption or security led use cases.

In my opinion, only the category 3 ones would survive.

PPC coin, which has introduced a proof-of-stake system in addition to proof-of-work, is one such coin. It is in my list of survivors. It is also important to note that presently, other than Bitcoin, no other alt-coin has shown the potential for growth in its acceptance network among merchants or companies. This is likely to remain true for 2014 as well.

4. Bitcoin community will solve problems including that of ‘anonymity’

One of the key roadblocks for governments and financial institutions to start participating in Bitcoins is the anonymous nature of its transactions.

This has led regulators to believe that Bitcoin can potentially be used for money laundering, terrorist support etc.

The good news is that we have a very active Bitcoin community globally, which is constantly evolving the protocol. Hence my prediction that, in an effort to make Bitcoin more accepted, this community will come out with a solution to ‘anonymity’ that regulators can live with.

One of the ways it is being done today is by forcing exchanges, wallet services and other Bitcoin companies to have KYC practices similar to those of financial institutions. As a side thought – the Internet was and is still used for porn. That does not make it ‘not useful’!

5. US, China and other global forces will not be at the forefront of Bitcoin adoption

FinCEN, PBOC and RBI’s reactions to Bitcoin in the US, China and India point to one single conclusion – we are not going to let a ‘controlled’ and ‘vast’ financial system adopt a decentralized crypto-currency, which can be anonymous and used for illegal activities…as yet.

Countries which have had a history of currency issues and have not had effective monetary policies are the ones who will be at the forefront of Bitcoin adoption.

With China out of the mix currently, one can look at Argentina, Cyprus and others to lead. These may be smaller as a % of the global base. However they are likely to have much more local penetration and most importantly more government support or less government intervention – whichever way you want to look at it.

Successful internet and mobile companies in the US/Europe are the ones, who are most likely to offer digital goods in Bitcoins. Zynga just announced their experiment. I would not be surprised if Spotify, Netflix etc are next.

6. Indian ecosystem will be slow to evolve; limited to speculators and mining pools

The Indian Bitcoin start-up ecosystem today is limited to less than 10 start-ups across exchanges such as Unocoin, wallet services such as Zuckup, mining pools such as Coinmonk and some other ideas – compared to hundreds of them in each of the US and China.

There is little evidence today to ascertain whether any of these start-ups are going to create a home market or serve an international market. In fact on the contrary, the Indian market is likely to be served by global Bitcoin companies.

For instance, Itbit, a Singapore based exchange has already started targeting Indian consumers. Global services have demonstrated the capability to be credible, especially when it comes to convenience and security by solving complex algorithmic problems.

This also makes them more defensible in the long run (e.g. Coinbase’s splitting of private keys to prevent theft) and poses a big challenge for Indian Bitcoin start-ups. There is an active Bitcoin community in India (about 15-20 people), which is trying hard to create awareness among consumers and regulators. I sincerely hope to see at least one world-class Bitcoin start-up come out of India.

7. The use of Bitcoin will evolve beyond ‘store of value’ or ‘transactions’

The underlying Bitcoin protocol makes itself applicable beyond the use cases of ‘store of value’ and ‘payments’. The Bitcoin foundation took a huge step in allowing meta data to be included in the blockchain.

This will unlock a lot of innovation and maybe even prompt regulators to acknowledge the potential of Bitcoin, making it all the more difficult for them to shut it down or suppress it.

As one can see from the current Bitcoin ecosystem map (http://bit.ly/1krEd0Z), there are almost no start-ups which solely use the protocol without using the ‘coin’ or the ‘currency’ as a function. 2014 will be the first year to see some of these.

8. The ‘browser’ of Bitcoin will come this year

The Netscape browser made the Internet happen. ‘Something’ will make Bitcoin happen.

It is still very difficult for the average ‘Joe’ to understand, acquire, store and use Bitcoins. Though Coinbase and several others are working on innovative security algorithms and making it easy to store Bitcoins digitally, it is still not enough to make Bitcoin mainstream.

Hence, what a ‘browser’ did to the Internet, a product or technology innovation will do it to ‘Bitcoin’ in 2014. This will make the transition to Bitcoins frictionless. Kryptokit and Eric Voorhees’ Coinapult are promising start-ups in this direction. Encouragingly, all the building blocks for that to happen – like mobile penetration, cryptography algos etc are already in place.

9. The price of Bitcoin is likely to range between $4000-5000 by the end of 2014

Well, though some people will argue otherwise, price is not the most important thing about Bitcoin.

But given the interest and its volatility, it does deserve a place in this blog post. Speculators have predicted Bitcoin to go up to $100,000; some say the maximum it can reach is $1,300.

Though I’m sure there is some underlying basis for those predictions, here is the one for mine. Bitcoin’s price is a function of supply and demand. While the supply is predictive, the demand is less so.

However, the increase in the demand of Bitcoin can be compared to networks such as Facebook and Twitter, which have followed a ‘S’ curve of adoption.

All such networks typically take 6-8 years to plateau out, with years 4-5 being the steepest. Though Bitcoin was invented 4 years ago, I would say that 2013 was its 2nd real year.

Given the nature of the ‘S’ curve, the price increase in 2014 is likely to be 3-4 times more than the one this year. Hence, the $4000-$5000 range, where the Bitcoin price is likely to settle down in 2014.
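The "predictive supply" half of that equation really is predictable: Bitcoin's issuance schedule is fixed in the protocol (a 50 BTC block reward that halves every 210,000 blocks, with a block roughly every ten minutes), so the approximate total supply at any future date can be computed in a few lines. The sketch below ignores rounding to satoshis and treats block times as exact, so the figures are approximations.

```python
# Approximate Bitcoin supply from the fixed issuance schedule: 50 BTC per
# block at the start, halving every 210,000 blocks, ~10 minutes per block.

def total_supply_after_blocks(n_blocks):
    supply, reward, interval = 0.0, 50.0, 210_000
    while n_blocks > 0 and reward > 0:
        step = min(n_blocks, interval)
        supply += step * reward
        n_blocks -= step
        reward /= 2
    return supply

blocks_per_year = 365 * 24 * 6  # ~52,560 blocks at 10 minutes per block
for years_since_2009, label in [(5, "~2014"), (11, "~2020"), (31, "~2040")]:
    supply = total_supply_after_blocks(years_since_2009 * blocks_per_year)
    print(label, f"{supply:,.0f} BTC")
```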

10. Last but not least – Satoshi Nakamoto will be Time’s Person of the Year 2014.

Please read about him here.

* Investments of Lightspeed Venture Partners

