Adonis Diaries

Archive for July 7th, 2014

 

As a Jew living in America, the past week has changed me forever

Originally published in Tikkun Daily

Growing up outside of Atlanta, I learned to crawl with Bob Dylan’s “Only A Pawn In Their Game” as my soundtrack, anti-war posters hanging on the walls, beckoning me and my raw knees forward.

I was weaned with the voice of Martin Luther King, Jr. reverberating down the narrow halls of my parents’ apartment, formed my first words as though delivering a soliloquy on equality.

In first grade, I asked the teacher if the ‘Indians’ still celebrated Thanksgiving. When she asked why I wanted to know, I responded, “Because the people they ate with took their land,” something I’d learned from an honest mother.

During a Little League game, my father intervened when coaches tried to initiate a prayer circle, wanting us to give thanks in Jesus’ name. He fiercely believed in the separation of church and, well, everything.

As an American Jew, I was mostly instilled with progressive values as a child. Rather, I was instilled with progressive, American values – particularly those which aligned with liberal, Jewish ones.

A love of social justice, human rights, equality. A disdain for racism, fundamentalism, colonialism. Sure, I attended Hebrew school, but my scripture was more the Bill of Rights than the Torah, and my anthems came from hip-hop and rock, not the Book of Psalms (תהילים).

Despite this, my early love for progressivism was accompanied by a love for the State of Israel.

As a short, Jewish kid who wanted to be an NBA star, I was naturally inclined to root for the underdog. And at synagogue, we were taught that Jews were the ultimate underdogs, miraculously surviving the Holocaust and a history of oppression to create a contemporary “light unto the nations” which fought with dogged determination against evil and had a cool flag.

And I was taught that I was vulnerable, that there were people who wanted me dead, and that Israel was a safe haven, a beacon, a garden to which I could always escape.

Palestinians, accordingly, were portrayed as just one in a series of people who have risen up throughout history to destroy us, being painted as a caricature of evil. As a boy, I nodded and understood. Israel was not just good, it was necessary.

One Sunday morning, my parents dropped me off at our local, liberal synagogue for what was billed as the youth group’s pancake breakfast. Once inside, we were surprisingly herded into a multi-purpose room and sharply ordered to sit against the walls by masked men carrying plastic assault rifles.

Stale bread was thrown on the linoleum floor toward me and my friends, perplexed and unsure what the hell this was all about, but smart enough to know it was not actually a dangerous situation. Younger children started crying.

This is what the enemy is like, some teachers told us when it was over.

I nodded. We were the good ones.

As an adult, I’ve moved away from such naiveté while holding on to both my Zionist and progressive leanings, despite the growing struggle for coexistence between the two. And it’s not as though I’m mildly informed about the region or mildly invested in Israel and my Jewishness. The opposite, in fact, is the case.

I’m a Jewish studies teacher at a day school, yeshiva-educated with a master’s degree from Hebrew University in Jerusalem. I’ve authored a memoir about my experience with terror and reconciliation, and write extensively about the region, often critiquing Israel from a progressive perspective while maintaining my desire for a two-state solution to the conflict.

As an adult, I’ve learned about the cleansing of Arab villages which took place from 1947-1949 to make way for the Jewish state. I’ve learned about the ongoing settlement enterprise, the appropriation and bifurcation of Palestinian lands. I’ve learned the horrors of Israel’s decades-old occupation of the West Bank, about the suppression of basic human rights and the atrocities committed.

I’ve studied Israel’s use of indefinite detentions, home demolitions, restrictions on goods and movement, and the violence visited upon those being occupied.

I’ve learned that – and this is just one example of many – a Palestinian child has tragically been killed every 3 days for the past 14 years. That bears repeating, since such deaths are rarely, if ever, given any attention in America: Palestinian parents have had to bury a child every three days for the past 14 years.

Knowing all this, I’ve still held fast to my ‘progressive Zionism,’ hoping Israel could become that beacon of liberalism I was presented as a child, a beacon which never truly existed in the first place, despite the country’s socialist roots. Why have I done so? For two reasons: 1) deep down, I still believe in the promise of Israel, and 2) I can’t shake the notion that a Jewish state is absolutely necessary for our security.

Over the last decade, I’ve formed alliances with progressive Americans and the Israeli left, working in my own, small ways to try and move Israel away from those illegal, geopolitical policies causing so much suffering for Palestinians and undermining Israel’s ability to not just thrive, but survive.

All the while, I’ve watched the anti-war movement in Israel weaken, watched racism flourish and religious fundamentalism grow, watched Israel’s government build settlements at a record pace and make clear it has little interest in peace.

These realities have forced me to confront the incongruity between my American-born progressivism and my Zionism. They have forced me to admit, like Peter Beinart, that in order to continue supporting Israel as a Jewish state, with everything it continues to do, I must compromise my progressivism.

However, the mind-numbingly horrific events of the past week have forced me, for the first time, to wonder whether such compromising can be sustained.

What has happened?

This: on June 12, three Israeli teenagers were kidnapped while hitchhiking in the West Bank by Palestinians belonging to a rogue branch of Hamas. I, along with friends and loved ones, worried they would become three more Jewish victims (added to the 1,100 killed since 2001) in an unending conflict, and watched closely as the Israeli military began combing the West Bank for them.

Only, it soon became clear that soldiers weren’t looking for them so much as collectively punishing Palestinians for the crime of a few people. Prime Minister Binyamin Netanyahu falsely blamed the kidnapping on Hamas – a move likely aimed at derailing the PA-Hamas unity government – and vowed they would “pay a heavy price.” But it was Palestinian civilians who paid a heavy price as for weeks soldiers raided over 1,600 sites in the West Bank, indefinitely detained hundreds, and killed five Palestinians.

Israel placed a gag order on details surrounding the teens’ abduction, and reports surfaced that Israeli officials knew the boys were dead, but wanted to justify ongoing military operations under the hope of bringing the boys back. (Alas, it seems such reports may have been accurate.)

And then, on June 30, the tragic news suddenly came: the three teens had been found dead. And just as suddenly, calls for blood and vengeance echoed from Israel, starting with Netanyahu, who turned a Chaim Nahman Bialik poem on its head by using it to call for blood.

In turn, calls for blood and revenge began echoing throughout Israel and on social media, with a Facebook page dedicated to such calls quickly receiving 35,000 likes. It featured soldiers posing with weapons, asking for permission to kill, along with countless Israelis calling for revenge:

On the left, Israelis hold a sign that reads, “Hating Arabs isn’t racism, it’s values! #IsraelDemandsRevenge,” while on the right, a soldier posts a picture with the caption, “Let us simply spray [them with bullets].”

After the funeral for the three slain Israeli teens on July 1, angry mobs of hundreds began roaming the streets of Jerusalem chanting “Death to Arabs,” attacking Palestinians and promising blood by nightfall.

Chemi Shalev of Haaretz, witnessing the genocidal chants from Israelis and reading reports of Israeli police saving Palestinian citizens from the mobs, wrote the following:

Make no mistake: the gangs of Jewish ruffians man-hunting for Arabs are no aberration. Theirs was not a one-time outpouring of uncontrollable rage following the discovery of the bodies of the three kidnapped students. Their inflamed hatred does not exist in a vacuum: it is an ongoing presence, growing by the day, encompassing ever larger segments of Israeli society, nurtured in a public environment of resentment, insularity and victimhood, fostered and fed by politicians and pundits.

By nightfall, with the ink of Shalev’s pen barely dried, horrific news came that a Palestinian teen from East Jerusalem had been abducted and killed by Israeli settlers in an act of revenge, with reports revealing the unspeakable: he was likely burned alive.

Since that night on July 1, parts of Israel have been burning, and clashes between Palestinians and police in Shuafat, the East Jerusalem neighborhood where the killed teenager lived, have been particularly intense. The police have been unrelenting, raining rubber bullets and tear gas down upon a grieving neighborhood. And the scenes have been difficult to watch.

Perhaps the scene that has put me over the edge is one that should hit close to home: an American teenager from Tampa visiting Israel, who happens to be a cousin of the slain Palestinian teen, was almost beaten to death by police, ostensibly for throwing rocks, and remains in Israeli detention. [Video of the incident.]

The mother of the beaten American teen told ABC, “He wasn’t recognizable.”

I have no words.

–§–

There are parts of me right now that feel defeated. Yes, there have been calls for peace and the denouncing of extremism in Israel, but such calls feel as though they have been drowned out by those still craving revenge. And as Shalev notes, this isn’t an isolated incident – this is the result of a real shift in Israeli society concurrent with the ongoing occupation.

The past week’s events have shaken me to my core, and have forced me to look long and hard at my personal politics. For if this were any country but Israel, my progressive values would not allow me to support, much less love, such an enterprise. Yet the reality is this: I do.

I’m not ready to abandon the dream of a Jewish state that lives up to its democratic promises, and continue to hold tenuously onto the idea of two states for two peoples. However, I have begun, for the first time, to consider what a single, bi-national state might look like, to consider that it might finally end this madness.

And here’s the irony: Israel’s extreme-right leaders, embracing various one-state solutions, have forced me to do so. Hell, Israel just elected as its President a one-state proponent. How can I not consider what that might look like?

As it happens, during all of this, I’ve just finished Ali Abunimah’s The Battle for Justice in Palestine, which makes an impassioned case for a democratic, bi-national state as the only way to end this conflict.

The progressive American in me agreed with many of his arguments. The Zionist in me was scared by its premise.

The humanist in me just wants all of this to end. Wants all of the suffering and pain on both sides to end.

If not now, when?

–§–

David Harris-Gershon is author of the memoir What Do You Buy the Children of the Terrorist Who Tried to Kill Your Wife?, recently published by Oneworld Publications.

Neural networks: science or networking?

I took a couple of graduate courses on neural networks back in 1989, when the field was just emerging, covering how these models are built and how experiments are designed and interpreted using this learning algorithm drawn from psychology and computer science.

What Does a Neural Network Actually Do?

There has been a lot of renewed interest lately in neural networks (NNs) due to their popularity as a model for deep learning architectures (there are non-NN based deep learning approaches based on sum-product networks and support vector machines with deep kernels, among others).

Perhaps due to their loose analogy with biological brains, the behavior of neural networks has acquired an almost mystical status. This is compounded by the fact that theoretical analysis of multilayer perceptrons (one of the most common architectures) remains very limited, although the situation is gradually improving.

To gain an intuitive understanding of what a learning algorithm does, I usually like to think about its representational power, as this provides insight into what can, if not necessarily what does, happen inside the algorithm to solve a given problem.

I will do this here for the case of multilayer perceptrons. By the end of this informal discussion I hope to provide an intuitive picture of the surprisingly simple representations that NNs encode.

I should note at the outset that what I will describe applies only to a very limited subset of neural networks, namely the feedforward architecture known as a multilayer perceptron.

There are many other architectures that are capable of very different representations. Furthermore, I will be making certain simplifying assumptions that do not generally hold even for multilayer perceptrons. I find that these assumptions help to substantially simplify the discussion while still capturing the underlying essence of what this type of neural network does. I will try to be explicit about everything.

Let’s begin with the simplest configuration possible: two input nodes wired to a single output node. Our NN looks like this:

Figure 1

The label associated with a node denotes its output value, and the label associated with an edge denotes its weight. The topmost node h represents the output of this NN, which is:

h = f\left(w_1 x_1+w_2 x_2+b\right)

In other words, the NN computes a linear combination of the two inputs x_1 and x_2, weighted by w_1 and w_2 respectively, adds an arbitrary bias term b and then passes the result through a function f, known as the activation function.

There are a number of different activation functions in common use and they all typically exhibit a nonlinearity. The sigmoid activation f(a)=\frac{1}{1+e^{-a}}, plotted below, is a common example.
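As a minimal sketch of the activation function just described (the function name and test values here are my own, for illustration):

```python
import math

def sigmoid(a):
    """Sigmoid activation: squashes any real-valued input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-a))

# The sigmoid is close to 0 for large negative inputs,
# exactly 0.5 at zero, and close to 1 for large positive inputs.
print(sigmoid(-5.0), sigmoid(0.0), sigmoid(5.0))
```

The smooth S-shape is what the plot below shows; the step function we adopt next is the limiting case where this transition becomes infinitely sharp.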

Figure 2

As we shall see momentarily, the nonlinearity of an activation function is what enables neural networks to represent complicated input-output mappings.

The linear regime of an activation function can also be exploited by a neural network, but for the sake of simplifying our discussion, we will choose an activation function without a linear regime. In other words, f will be a simple step function:

Figure 3

This will allow us to reason about the salient features of a neural network without getting bogged down in the details.

In particular, let’s consider what our current neural network is capable of. The output node can generate one of two values, and this is determined by a linear weighting of the values of the input nodes. Such a function is a binary linear classifier.

As shown below, depending on the values of w_1 and w_2, one regime in this two-dimensional input space yields a response of 0 (white) and the other a response of 1 (shaded):
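The single-node classifier can be sketched in a few lines. This is just an illustrative implementation under the step-activation assumption above; the specific weights and boundary are my own example:

```python
def step(a):
    # Step activation: fires (returns 1) when the pre-activation is non-negative.
    return 1 if a >= 0 else 0

def neuron(x1, x2, w1, w2, b):
    # A single output node: a linear combination of the two inputs
    # plus a bias, passed through the step activation.
    return step(w1 * x1 + w2 * x2 + b)

# With w1 = w2 = 1 and b = -1, the neuron fires exactly when
# x1 + x2 >= 1, so the decision boundary is the line x1 + x2 = 1.
print(neuron(0.2, 0.3, 1, 1, -1))  # 0: below the line
print(neuron(0.8, 0.9, 1, 1, -1))  # 1: above the line
```

Varying w_1, w_2 and b rotates and shifts that line, which is all a single node of this kind can do.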

Figure 4

Let’s now add two more output nodes (a neural network can have more than a single output). I will need to introduce a bit of notation to keep track of everything. The weight associated with an edge from the jth node in the first layer to the ith node in the second layer will be denoted by w_{ij}^{(1)}. The output of the ith node in the nth layer will be denoted by a_i^{(n)}.

Thus x_1 = a_1^{(1)} and x_2 = a_2^{(1)}.

Figure 5

Every output node in this NN is wired to the same set of input nodes, but the weights are allowed to vary. Below is one possible configuration, where the regions triggering a value of 1 are overlaid and colored in correspondence with the colors of the output nodes:

Figure 6

So far we haven’t really done anything, because we just overlaid the decision boundaries of three linear classifiers without combining them in any meaningful way. Let’s do that now, by feeding the outputs of the top three nodes as inputs into a new node.

I will hollow out the nodes in the middle layer to indicate that they are no longer the final output of the NN.

Figure 7

The value of the single output node at the third layer is:

a_1^{(3)} = f \left(w_{11}^{(2)} a_1^{(2)}+w_{12}^{(2)} a_2^{(2)}+w_{13}^{(2)} a_3^{(2)}+b_1^{(2)}\right)

Let’s consider what this means for a moment. Every node in the middle layer is acting as an indicator function, returning 0 or 1 depending on where the input lies in \mathbb{R}^2.

We are then taking a weighted sum of these indicator functions and feeding it into yet another nonlinearity. The possibilities may seem endless, since we are not placing any restrictions on the weight assignments.

In reality, characterizing the set of NNs (with the above architecture) that exhibit distinct behaviors does require a little bit of work–see Aside–but the point, as we shall see momentarily, is that we do not need to worry about all such possibilities.

One specific choice of assignments already gives the key insight into the representational power of this type of neural network. By setting all weights in the middle layer to 1/3, and setting the bias of the middle layer (b_1^{(2)}) to -1, the activation function of the output neuron (a_1^{(3)}) will output 1 whenever the input lies in the intersection of all three half-spaces defined by the decision boundaries, and 0 otherwise.

Since there was nothing special about our choice of decision boundaries, we are able to carve out any arbitrary polygon and have the NN fire precisely when the input is inside the polygon (in the general case we set the weights to 1/k, where k is the number of hyperplanes defining the polygon).
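This construction is easy to verify concretely. Below is a sketch using the step activation from before and an illustrative triangle (the three half-spaces are my own choice, not from the text): three middle-layer indicator neurons, combined with weights of 1/3 and a bias of -1, fire exactly on the intersection.

```python
def step(a):
    # Step activation: fires when the pre-activation is non-negative.
    return 1 if a >= 0 else 0

def hidden(x, y):
    # Three half-space indicators (the middle layer). Their intersection
    # is the triangle with vertices (0,0), (1,0), (0,1):
    return [step(x),           # fires when x >= 0
            step(y),           # fires when y >= 0
            step(1 - x - y)]   # fires when x + y <= 1

def output(x, y):
    # Weights of 1/3 and bias -1: the sum reaches 1 (so the output fires)
    # only when all three indicators fire, i.e. inside the triangle.
    a = hidden(x, y)
    return step(a[0] / 3 + a[1] / 3 + a[2] / 3 - 1)

print(output(0.2, 0.2))   # 1: inside the triangle
print(output(0.9, 0.9))   # 0: outside (x + y > 1)
print(output(-0.1, 0.2))  # 0: outside (x < 0)
```

Replacing these three half-spaces with any k half-spaces, and the weights with 1/k, carves out an arbitrary convex polygon in the same way.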

Figure 8

This fact demonstrates both the power and limitation of this type of NN architecture.

On the one hand, it is capable of carving out decision boundaries comprised of arbitrary polygons (or more generally polytopes). Creating regions comprised of multiple polygons, even disjoint ones, can be achieved by adding a set of neurons for each polygon and setting the weights of their respective edges to 1/k_i, where k_i is the number of hyperplanes defining the ith polygon.

This explains why, from an expressiveness standpoint, we don’t need to worry about all possible weight combinations, because defining a binary classifier over unions of polygons is all we can do. Any combination of weights that we assign to the middle layer in the above NN will result in a discrete set of values, up to one unique value per region formed by the union or intersection of the half-spaces defined by the decision boundaries, that are inputted to the a_1^{(3)} node.

Since the bias b_1^{(2)} can only adjust the threshold at which a_1^{(3)} will fire, then the resulting behavior of any weight assignment is activation over some union of polygons defined by the shaded regions.

Thus our restricted treatment, where we only consider weights equal to 1/k, already captures the representational power of this NN architecture.

A few caveats merit mention.

First, the above says nothing about representational efficiency, only power. A more thoughtful choice of weights, presumably identified by training the NN using backpropagation, can provide a more compact representation comprised of a smaller set of nodes and edges.

Second, I oversimplified the discussion by focusing only on polygons. In reality, any intersection of half-spaces is possible, even ones that do not result in bounded regions.

Third, and most seriously, feedforward NNs are not restricted to step functions for their activation functions. In particular, modern NNs that utilize Rectified Linear Units (ReLUs) most likely exploit their linear regions.

Nonetheless, the above simplified discussion illustrates a limitation of this type of NN. While it can represent any boundary with arbitrary accuracy, this would come at a significant cost, much like the cost of polygonally rendering smoothly curved objects in computer graphics.

In principle, NNs with sigmoidal activation functions are universal approximators, meaning they can approximate any continuous function with arbitrary accuracy. In practice I suspect that real NNs with a limited number of neurons behave more like my simplified toy models, carving out sharp regions in high-dimensional space, but on a much larger scale.

Regardless, NNs still provide far more expressive power than most other machine learning techniques, and my focus on \mathbb{R}^2 disguises the fact that even simple decision boundaries, operating in high-dimensional spaces, can be surprisingly powerful.

Before I wrap up, let me highlight one other aspect of NNs that this “union of polygons” perspective helps make clear.

It has long been known that an NN with a single hidden layer, i.e. the three-layer architecture discussed here, is equal in representational power to a neural network with arbitrary depth, as long as the hidden layer is made sufficiently wide.

Why this is so is obvious in the simplified setting described here, because unions of sets of unions of polygons can be flattened out in terms of unions of the underlying polygons. For example, consider the set of polygons formed by the following 10 boundaries:

Figure 9

We would like to create 8 neurons that correspond to the 8 possible activation patterns formed by the polygons (i.e. fire when the input is in none of them (1 case), exactly one of them (3 cases), exactly two of them (3 cases), or all three of them (1 case)).

In the “deep” case, we can set up a four-layer NN such that the second layer defines the edges, the third layer defines the polygons, and the fourth layer contains the 8 possible activation patterns:

Figure 10

The third layer composes the second layer, by creating neurons that are specific to each closed region.

However, we can just as well collapse this into the following three-layer architecture, where each neuron in the third layer “rediscovers” the polygons and how they must be combined to yield a specific activation pattern:

Figure 11

Deeper architectures allow deeper compositions, where more complex polygons are made up of simpler ones, but in principle all this complexity can be collapsed onto one (hidden) layer.

There is a difference in representational efficiency however, and the two architectures above illustrate this important point.

While the three-layer approach is just as expressive as the four-layer one, it is not as efficient: the three-layer NN has a 2-10-8 configuration, resulting in 100 parameters (20 edges connecting first to second layer plus 80 edges connecting second to third layer), while the four-layer NN, with a 2-10-3-8 configuration, only has 74 parameters.
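The edge counts above are easy to check. A small helper (my own, for illustration) counts the weights in a fully connected feedforward network given its layer widths, ignoring biases as the text does:

```python
def edge_count(layers):
    # Number of edges (weights) in a fully connected feedforward NN:
    # each adjacent pair of layers contributes width_i * width_{i+1} edges.
    return sum(a * b for a, b in zip(layers, layers[1:]))

print(edge_count([2, 10, 8]))     # 100: the "shallow" 2-10-8 network
print(edge_count([2, 10, 3, 8]))  # 74: the "deep" 2-10-3-8 network
```

The deep network is smaller because the 3-wide polygon layer acts as a bottleneck that the 8 output neurons share, instead of each output neuron rediscovering the polygons from all 10 edge neurons.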

Herein lies the promise of deeper architectures, by enabling the inference of complex models using a relatively small number of parameters. In particular, lower-level features such as the polygons above can be learned once and then reused by higher layers of the network.

That’s it for now. I hope this discussion provided some insight into the workings of neural networks.

If you’d like to read more, see the Aside, and I also recommend this blog entry by Christopher Olah which takes a topological view of neural networks.


Kid, I’ll give you a dollar for every correct answer…

Questions and answers with a kid selling chewing gum to drivers waiting at a red light

Maya Nassar posted on FB this June 30, 2014
 

Red light
– A chewing gum? I’m hungry! Please, may God keep you, buy one so I can buy myself some bread
– Do you believe in God?
– Yes
– What’s your name?
– Ahmed
– How old are you?
– 7
– Where are you from?
– From the South
– What are you doing in the street?
– I work
– Where are your parents?
– I don’t know
– I’m going to ask you the same questions again. I’ll give you 1,000 LL for every true answer. I’ll know if you’re lying, so don’t lie
– Okay
– Do you believe in God?
– Yes
– Your name?
– Wael
– Your age?
– 5… almost 6
– Where are you from?
– From Syria
– What are you doing in the street?
– I work
– Where are your parents?
– Dead
Truth cannot be bought. Reality cannot be sold. Between cordiality and lies, he believes in God, he works… Childhood is murdered even on Sassine Square. A blue-eyed cherub lied to me. Only, I believed him. The most beautiful liar I have ever met in my life has the age-old eyes of suffering…
Green light
His eyes turn away… I miss him! Without thinking, I made a U-turn; he turned around… came up to me, looked at me and said:
– My name is Amir… take 1,000 LL… and don’t cry…
Between the red light and the green light, I was dazzled by the light…
I met the Little Prince…
MN

