Adonis Diaries


How Many Bombs Did the United States Drop in 2015?

by Micah Zenko, January 7, 2016

The primary focus—meaning the commitment of personnel, resources, and senior leaders’ attention—of U.S. counterterrorism policies is the capture or killing (though overwhelmingly the killing) of existing terrorists. Far less money and programmatic attention is dedicated to preventing the emergence of new terrorists.

As an anecdotal example of this, I often ask U.S. government officials and mid-level staffers, “what are you doing to prevent a neutral person from becoming a terrorist?” They always claim this is not their responsibility and point toward other agencies, usually the Department of State (DOS) or the Department of Homeland Security (DHS), where the obligation purportedly lies internationally or domestically, respectively. DOS and DHS officials then refer generally to “countering violent extremism” policies, while acknowledging that U.S. government efforts on this front have been wholly ineffective.

The primary method for killing suspected terrorists is with stand-off precision airstrikes.

With regard to the self-declared Islamic State, U.S. officials have repeatedly stated that the pathway to “destroying” the terrorist organization is by killing every one of its current members. (And the newer recruits from the refugee camps?)

Last February, Marie Harf, DOS spokesperson, said, “We are killing them and will continue killing ISIS terrorists that pose a threat to us.”

Then in June, Lt. Gen. John Hesterman, Combined Forces Air Component commander, stated, “We kill them wherever we find them,” and just this week, Col. Steve Warren, Operation Inherent Resolve spokesman, claimed, “If you’re part of ISIL, we will kill you. That’s our rule.”

The problem with this “kill-’em-all with airstrikes” rule is that it is not working.

Pentagon officials claim that at least 25,000 Islamic State fighters have been killed (an anonymous official said 23,000 in November, while on Wednesday, Warren added that “about 2,500” more were killed in December). (Excluding the number killed by the advancing Syrian army?)

Remarkably, they also claim that alongside the 25,000 fighters killed, only 6 civilians have “likely” been killed in the seventeen-month air campaign.

At the same time, officials admit that the size of the group has remained wholly unchanged. In 2014, the Central Intelligence Agency (CIA) estimated the size of the Islamic State to be between 20,000 and 31,000 fighters, while on Wednesday, Warren again repeated the 30,000 estimate. To summarize the anti-Islamic State bombing calculus: 30,000 – 25,000 = 30,000.

Given there is no publicly articulated interest by Obama administration officials in revisiting this approach, let’s review U.S. counterterrorism bombing for 2015.

Last year, the United States dropped an estimated total of 23,144 bombs in six countries.

Of these, 22,110 were dropped in Iraq and Syria. This estimate follows from two figures: the U.S.-led coalition dropped 28,714 munitions in 2015, and the United States conducted 77 percent of all coalition airstrikes in Iraq and Syria (0.77 × 28,714 ≈ 22,110).

This overall estimate is probably slightly low, because it also assumes one bomb dropped in each drone strike in Pakistan, Yemen, and Somalia, which is not always the case.
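For readers who want to retrace the arithmetic, here is a minimal sketch in Python, using only the figures quoted above; the one-bomb-per-strike floor for Pakistan, Yemen, and Somalia follows the caveat just mentioned.

```python
# Back-of-the-envelope check of the 2015 bomb-count estimate.
# All figures are the article's; only the rounding is ours.

coalition_munitions = 28_714  # munitions dropped by the U.S.-led coalition in 2015
us_share = 0.77               # U.S. share of all coalition airstrikes in Iraq/Syria

us_iraq_syria = round(coalition_munitions * us_share)
print(us_iraq_syria)          # ~22,110 U.S. bombs in Iraq and Syria

# The remainder of the 23,144 total is attributed to drone strikes in
# Pakistan, Yemen, and Somalia, counted at one bomb per strike -- a floor,
# since some strikes release more than one munition.
print(23_144 - us_iraq_syria)  # ~1,034 bombs across the other three countries
```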

Sources: Estimate based upon Combined Forces Air Component Commander 2010-2015 Airpower Statistics; Information requested from CJTF-Operation Inherent Resolve Public Affairs Office, January 7, 2016; New America Foundation (NAF); Long War Journal (LWJ); The Bureau of Investigative Journalism (TBIJ).


Lebanon: ICTJ Study Shows Viability of a National Commission to Uncover Fate of the Missing and Disappeared

BEIRUT, January 27, 2016—Twenty-five years after the end of the Lebanese Civil War, the families of the missing and forcibly disappeared in Lebanon are still waiting for answers about the fate of their loved ones.

A new report by the International Center for Transitional Justice says the country seems to be ready to address this issue through an independent national commission and lays out the features of a successful future commission.


The 28-page study, “The Missing in Lebanon: Inputs on the Establishment of the Independent National Commission for the Missing and Forcibly Disappeared in Lebanon,” includes expert financial and technical information and analysis, firmly rooted in the Lebanese context, to facilitate a discussion on what it would take to establish a commission to locate and identify the missing.

In 1992, the Lebanese government reported that 17,415 people went missing during the war.

It is thought that most were kidnapped by Lebanese or Palestinian militias and held on Lebanese territory. But there is no authoritative record of the names of the missing or their ante mortem data.

“The long years of waiting since the end of the war are an ongoing breach of the families’ right to know the truth, as recognized in international law and by Lebanese courts,” said David Tolbert, President of ICTJ. “The establishment of the commission would be a major step towards fulfilling that right.”

Many in civil society and victims’ groups believe the discussion in Lebanon on this issue is changing. Momentum is building toward addressing the issue of the missing and the forcibly disappeared.

Draft legislation to create a commission for the missing and forcibly disappeared is now before the Lebanese Parliamentary Human Rights Committee, a judicial decision in 2014 recognized families’ right to the truth, and work is already underway to collect data on missing persons.

The Consolidated Draft Law for Missing and Disappeared Persons would establish an independent national commission to work as the primary institution responsible for coordinating an effective, meaningful response to the need of families to know the truth about their missing relatives.

“We are seeing an opening now for a national process to look for answers and clarify the truth,” said Nour El Bejjani, ICTJ’s Program Associate in Lebanon.

For the study, Lebanese and international professionals provided technical, operational and fiscal analysis and inputs on what it would take to establish a commission for the missing and forcibly disappeared.

To maximize the commission’s efficiency and sustainability, proposals in the study reflect international best practices, while remaining thoroughly tailored to the Lebanese reality and sensitive to Lebanese law, politics and history. The financial estimates that accompany the study offer only an initial estimate of the cost of establishing and operating such a commission.

The study’s modeling confirms the viability of the independent national commission, as envisaged in the consolidated draft law, in Lebanon today.

ICTJ hopes the study will help dispel the doubts of those who might oppose the commission on financial or operational grounds, and prove useful in overcoming future challenges in operationalizing a commission, for example, in preparing bylaws and operational budgets.

“It’s wrong to ask the families of the disappeared to continue to wait indefinitely for answers,” said Tolbert. “In this future commission lies the best hope of addressing the issue of the missing and forcibly disappeared in their lifetimes.”

The study, funded by the Embassy of Finland in Beirut, is intended to assist those advocating for the Lebanese government to fulfill families’ right to know the truth.

The full report can be downloaded in English and Arabic.

Note: Many military leaders of the Lebanese Forces who are now speaking out have confirmed that they never kept records of those assassinated in their prisons; they did not even bother to record their names.


How to mine 5 million books already scanned by Google

Mind you, this talk was delivered in 2011. At the time, Google had already scanned 15 million books, not counting magazines and other artistic productions.

Erez Lieberman Aiden: Everyone knows that a picture is worth a thousand words. But we at Harvard were wondering if this was really true. (Laughter) So we assembled a team of experts, spanning Harvard, MIT, The American Heritage Dictionary, The Encyclopedia Britannica and even our proud sponsors, the Google. And we cogitated about this for about four years. And we came to a startling conclusion. Ladies and gentlemen, a picture is not worth a thousand words. In fact, we found some pictures that are worth 500 billion words.

Jean-Baptiste Michel: So how did we get to this conclusion? So Erez and I were thinking about ways to get a big picture of human culture and human history: change over time. So many books actually have been written over the years. So we were thinking, well the best way to learn from them is to read all of these millions of books. Now of course, if there’s a scale for how awesome that is, that has to rank extremely, extremely high. Now the problem is there’s an X-axis for that, which is the practical axis. This is very, very low.

01:32 Now people tend to use an alternative approach, which is to take a few sources and read them very carefully. This is extremely practical, but not so awesome. What you really want to do is to get to the awesome yet practical part of this space. So it turns out there was a company across the river called Google who had started a digitization project a few years back that might just enable this approach. They have digitized millions of books. So what that means is, one could use computational methods to read all of the books in a click of a button. That’s very practical and extremely awesome.

02:03 ELA: Let me tell you a little bit about where books come from. Since time immemorial, there have been authors. These authors have been striving to write books. And this became considerably easier with the development of the printing press some centuries ago. Since then, the authors have won on 129 million distinct occasions, publishing books. Now if those books are not lost to history, then they are somewhere in a library, and many of those books have been getting retrieved from the libraries and digitized by Google, which has scanned 15 million books to date.

02:33 Now when Google digitizes a book, they put it into a really nice format. Now we’ve got the data, plus we have metadata. We have information about things like where was it published, who was the author, when was it published. And what we do is go through all of those records and exclude everything that’s not the highest quality data. What we’re left with is a collection of five million books, 500 billion words, a string of characters a thousand times longer than the human genome — a text which, when written out, would stretch from here to the Moon and back 10 times over — a veritable shard of our cultural genome. Of course what we did when faced with such outrageous hyperbole … (Laughter) was what any self-respecting researchers would have done. We took a page out of XKCD, and we said, “Stand back. We’re going to try science.”

03:34 JM: Now of course, we were thinking, well let’s just first put the data out there for people to do science to it. Now we’re thinking, what data can we release? Well of course, you want to take the books and release the full text of these five million books.

Now Google, and Jon Orwant in particular, told us a little equation that we should learn. So you have five million, that is, five million authors and five million plaintiffs is a massive lawsuit. So, although that would be really, really awesome, again, that’s extremely, extremely impractical. (Laughter)

04:03 Now again, we kind of caved in, and we did the very practical approach, which was a bit less awesome. We said, well instead of releasing the full text, we’re going to release statistics about the books. So take for instance “A gleam of happiness.” It’s four words; we call that a four-gram. We’re going to tell you how many times a particular four-gram appeared in books in 1801, 1802, 1803, all the way up to 2008. That gives us a time series of how frequently this particular sentence was used over time. We do that for all the words and phrases that appear in those books, and that gives us a big table of two billion lines that tell us about the way culture has been changing.
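To make the mechanics concrete, here is a small Python sketch of how such a time series can be built from a corpus of dated texts. The three-line corpus is invented for illustration; the real pipeline, of course, runs over millions of scanned books rather than a list of strings.

```python
from collections import Counter, defaultdict

# Toy corpus: (year, text) pairs standing in for dated, digitized books.
corpus = [
    (1801, "a gleam of happiness crossed her face"),
    (1801, "a gleam of hope and a gleam of happiness"),
    (1802, "there was a gleam of happiness in the room"),
]

def ngrams(tokens, n):
    """Yield every run of n consecutive tokens."""
    return zip(*(tokens[i:] for i in range(n)))

# counts[year] maps each 4-gram to how often it appeared that year.
counts = defaultdict(Counter)
for year, text in corpus:
    counts[year].update(ngrams(text.split(), 4))

target = ("a", "gleam", "of", "happiness")
series = {year: c[target] for year, c in sorted(counts.items())}
print(series)  # {1801: 2, 1802: 1} -- one 4-gram's usage over time
```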

04:34 ELA: So those two billion lines, we call them two billion n-grams. What do they tell us? Well the individual n-grams measure cultural trends. Let me give you an example. Let’s suppose that I am thriving, then tomorrow I want to tell you about how well I did. And so I might say, “Yesterday, I throve.” Alternatively, I could say, “Yesterday, I thrived.” Well which one should I use? How to know?

04:59 As of about six months ago, the state of the art in this field is that you would, for instance, go up to the following psychologist with fabulous hair, and you’d say, “Steve, you’re an expert on the irregular verbs. What should I do?” And he’d tell you, “Well most people say thrived, but some people say throve.” And you also knew, more or less, that if you were to go back in time 200 years and ask the following statesman with equally fabulous hair, (Laughter) “Tom, what should I say?” He’d say, “Well, in my day, most people throve, but some thrived.” So now what I’m just going to show you is raw data. Two rows from this table of two billion entries. What you’re seeing is year by year frequency of “thrived” and “throve” over time. Now this is just two out of two billion rows. So the entire data set is a billion times more awesome than this slide.  

06:05 JM: Now there are many other pictures that are worth 500 billion words. For instance, this one. If you just take influenza, you will see peaks at the time where you knew big flu epidemics were killing people around the globe.

06:16 ELA: If you were not yet convinced, sea levels are rising, so is atmospheric CO2 and global temperature.

06:24 JM: You might also want to have a look at this particular n-gram, and that’s to tell Nietzsche that God is not dead, although you might agree that he might need a better publicist.

06:35 ELA: You can get at some pretty abstract concepts with this sort of thing. For instance, let me tell you the history of the year 1950. Pretty much for the vast majority of history, no one gave a damn about 1950. In 1700, in 1800, in 1900, no one cared. Through the 30s and 40s, no one cared. Suddenly, in the mid-40s, there started to be a buzz.

People realized that 1950 was going to happen, and it could be big. (Laughter) But nothing got people interested in 1950 like the year 1950. (Laughter) People were walking around obsessed. They couldn’t stop talking about all the things they did in 1950, all the things they were planning to do in 1950, all the dreams of what they wanted to accomplish in 1950.

In fact, 1950 was so fascinating that for years thereafter, people just kept talking about all the amazing things that happened, in ’51, ’52, ’53. Finally in 1954, someone woke up and realized that 1950 had gotten somewhat passé. (Laughter) And just like that, the bubble burst.

And the story of 1950 is the story of every year that we have on record, with a little twist, because now we’ve got these nice charts. And because we have these nice charts, we can measure things. We can say, “Well how fast does the bubble burst?” And it turns out that we can measure that very precisely. Equations were derived, graphs were produced, and the net result is that we find that the bubble bursts faster and faster with each passing year. We are losing interest in the past more rapidly.

08:24 JM: Now a little piece of career advice. So for those of you who seek to be famous, we can learn from the 25 most famous political figures, authors, actors and so on. So if you want to become famous early on, you should be an actor, because then fame starts rising by the end of your 20s — you’re still young, it’s really great. Now if you can wait a little bit, you should be an author, because then you rise to very great heights, like Mark Twain, for instance: extremely famous.

But if you want to reach the very top, you should delay gratification and, of course, become a politician. So here you will become famous by the end of your 50s, and become very, very famous afterward. Scientists also tend to get famous when they’re much older. For instance, biologists and physicists tend to be almost as famous as actors. One mistake you should not make is to become a mathematician. (Laughter) If you do that, you might think, “Oh great. I’m going to do my best work when I’m in my 20s.” But guess what, nobody will really care.

09:17 ELA: There are more sobering notes among the n-grams. For instance, here’s the trajectory of Marc Chagall, an artist born in 1887. And this looks like the normal trajectory of a famous person. He gets more and more and more famous, except if you look in German.

If you look in German, you see something completely bizarre, something you pretty much never see, which is that he becomes extremely famous and then all of a sudden plummets, going through a nadir between 1933 and 1945, before rebounding afterward. And of course, what we’re seeing is the fact that Marc Chagall was a Jewish artist in Nazi Germany.

09:55 Now these signals are actually so strong that we don’t need to know that someone was censored. We can actually figure it out using really basic signal processing. Here’s a simple way to do it. Well, a reasonable expectation is that somebody’s fame in a given period of time should be roughly the average of their fame before and their fame after. So that’s sort of what we expect. And we compare that to the fame that we observe. And we just divide one by the other to produce something we call a suppression index. If the suppression index is very, very, very small, then you very well might be being suppressed. If it’s very large, maybe you’re benefiting from propaganda.
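Here is a minimal Python sketch of that calculation, assuming a person’s fame is already available as a yearly mention count. The toy numbers and the simple before-and-after averaging are our own illustration of the idea described above, not the authors’ published method.

```python
def suppression_index(fame, start, end):
    """Observed fame during [start, end], divided by the fame expected
    from the average of the years before and the years after."""
    before = [v for year, v in fame.items() if year < start]
    during = [v for year, v in fame.items() if start <= year <= end]
    after = [v for year, v in fame.items() if year > end]
    expected = (sum(before) / len(before) + sum(after) / len(after)) / 2
    observed = sum(during) / len(during)
    return observed / expected

# Toy series: mentions per million words in German-language books.
fame = {1925: 8, 1928: 10, 1931: 12,   # before
        1934: 2, 1939: 1, 1943: 1,     # the Nazi period
        1947: 11, 1950: 13, 1953: 14}  # after

print(suppression_index(fame, 1933, 1945))  # ~0.12: far below 1, likely suppressed
```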

10:34 JM: Now you can actually look at the distribution of suppression indexes over whole populations. So for instance, here — this suppression index is for 5,000 people picked in English books where there’s no known suppression — it would be like this, basically tightly centered on one. What you expect is basically what you observe. This is the distribution as seen in Germany — very different, shifted to the left. People were talked about half as much as they should have been. But much more importantly, the distribution is much wider. There are many people who end up on the far left of this distribution who are talked about 10 times less than they should have been. But then also many people on the far right who seem to benefit from propaganda. This picture is the hallmark of censorship in the book record.

11:11 ELA: So culturomics is what we call this method. It’s kind of like genomics. Except genomics is a lens on biology through the window of the sequence of bases in the human genome. Culturomics is similar. It’s the application of massive-scale data collection and analysis to the study of human culture. Here, instead of through the lens of a genome, it is through the lens of digitized pieces of the historical record. The great thing about culturomics is that everyone can do it. Why can everyone do it?

Everyone can do it because three guys, Jon Orwant, Matt Gray and Will Brockman over at Google, saw the prototype of the Ngram Viewer, and they said, “This is so fun. We have to make this available for people.” So in two weeks flat — the two weeks before our paper came out — they coded up a version of the Ngram Viewer for the general public. And so you too can type in any word or phrase that you’re interested in and see its n-gram immediately — also browse examples of all the various books in which your n-gram appears.
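As a small usage note, the public viewer can be driven entirely from its URL. Below is a hedged Python sketch that builds a shareable query for the “thrived” versus “throve” comparison from earlier in the talk; the parameter names reflect the viewer’s query string as commonly observed and may change.

```python
from urllib.parse import urlencode

# Hedged sketch: build a shareable Ngram Viewer URL for two variants.
# Parameter names (content, year_start, year_end, smoothing) are an
# assumption based on the public viewer's query string, not a stable API.
params = {
    "content": "thrived,throve",
    "year_start": 1800,
    "year_end": 2008,
    "smoothing": 3,
}
print("https://books.google.com/ngrams/graph?" + urlencode(params))
```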

12:06 JM: Now this was used over a million times on the first day, and this is really the best of all the queries. So people want to be their best, put their best foot forward. But it turns out in the 18th century, people didn’t really care about that at all. They didn’t want to be their best, they wanted to be their beft. So what happened is, of course, this is just a mistake. It’s not that people strove for mediocrity, it’s just that the S used to be written differently, kind of like an F. Now of course, Google didn’t pick this up at the time, so we reported this in the Science article that we wrote. But it turns out this is just a reminder that, although this is a lot of fun, when you interpret these graphs, you have to be very careful, and you have to adopt the basic standards of the sciences.

12:42 ELA: People have been using this for all kinds of fun purposes. (Laughter) Actually, we’re not going to have to talk, we’re just going to show you all the slides and remain silent. This person was interested in the history of frustration. There’s various types of frustration. If you stub your toe, that’s a one A “argh.” If the planet Earth is annihilated by the Vogons to make room for an interstellar bypass, that’s an eight A “aaaaaaaargh.” This person studies all the “arghs,” from one through eight A’s. And it turns out that the less-frequent “arghs” are, of course, the ones that correspond to things that are more frustrating — except, oddly, in the early 80s. We think that might have something to do with Reagan.

13:28 (Laughter)

13:30 JM: There are many usages of this data, but the bottom line is that the historical record is being digitized. Google has started to digitize 15 million books. That’s 12 percent of all the books that have ever been published. It’s a sizable chunk of human culture. There’s much more in culture: there’s manuscripts, there’s newspapers, there’s things that are not text, like art and paintings. These all happen to be on our computers, on computers across the world. And when that happens, it will transform the way we understand our past, our present and human culture.



