Tuesday, April 28, 2009

Haculla Will Teach You Shoplifting

I took these pictures today on the Upper West Lower East Side. Click 'em to enlarge. I love this shit.

First of all, I like Rivington Street. It's my ancestral homeland. (I also say that Brooklyn is my ancestral homeland. What can I say? I like that half-joke; I'm gonna keep using it.) See also of course here.

Anyone who just blanket–kneejerk hates all graffiti: I do not understand you.

Wow.

Love it. (Cf.)

Art.

Crazy.

Amazing.

This I assume is for real. From a totally removed, nonmoral perspective—the same one I guess that guy was taking when he got in trouble for saying 9/11 was æsthetically beautiful or something—I'm going to go ahead and say I love this sign. I mean..."Contemporary Hamlet during the season of Lent"? How can I not love it?

Monday, April 27, 2009

weh-heh-hell

I don't remember the last time I saw two halfway decent captions in this fucking contest. But take a look at this shit:


Sorry, Simon, yours isn't so great.* But Bob and Barbara, you fucking did it!

Note that when being positive about something like this I start cursing like a GI. Note also that neither of these surprisingly funny captions is particularly funny—which goes to show how terrible this goddamned fucking bullshit contest really is!

A-and we're back.


* You think that's what a woman might look like after giving birth? Lying calmly on her back, fully clothed, on a table? And why is the doctor holding a handful of porcupine quills, hm? Sure, you can come up with answers and rationalizations, but my point is that this joke does not follow intelligently from the image. No offense to Simon, whom we all know to be an excellent, usually hysterically funny, and intensely magnetic and attractive human being.

Renaissance man

Just in case anyone hasn't seen this:


I particularly like that (a) it's only the mention of a mother that causes Mr. T to intervene; he has no problem with the rest of it (e.g., "You're so ugly, your ears stick out to get away from your face"); (b) his first objection to the mother reference is that "She ain't here"; and (c) his lecture about mother jokes results in the immediate reconciliation of the fighting kids: now they love each other!

A couple of easy ideas for how to be good to your mother:
  • Hold the door for her!
  • Ride with her on a bicycle built for two, and look pretty much her age!
  • Don't kiss her when you've got shit all over your mouth!
Mr. T is an excellent rapper but an even better dancer, and he should be an inspiration to us all.*

Mr. T as the bearded lady in Freaked (1993)

P.S. You get a room in Heaven? Is it like a hotel?


* A more serious response to all this: isn't it somehow kind of sad? I almost want to say tragic, although I'm not exactly sure why. (Might have something to do with Fromm's take on motherly love and the fact that no one is a saint, whether or not she's crapped out a baby, which makes you wonder then what complicated history and psychological distress is behind this intense and awkward endorsement of mother-worship...)

Sunday, April 26, 2009

things that happen in the universe

90' Nuts? I can't even begin to imagine why that apostrophe might be there. Is there something obvious I'm overlooking, here? Oh, wait...is it that it's supposed to be "GO" written in a cutesy way? In which case...well, again, what in the world is that apostrophe doing there? Do people not understand how apostrophes work? (What am I saying? Of course they don't.)


Lloyal Godbuns. I only noticed the double L a few seconds into examining and admiring this cryptic message. I mean, I guess it could just be someone's name. Lloyal Godbuns is someone I'd like to meet. (This graffiti, or I mean these, or I mean this graffito, or I mean... Whatever it or they are, it and they remind me of my favorite-ever graffiti from a bar bathroom, sighted in the year 2000 or so: GOATS ARE TOO GOOD FOR PHILIP JOHNSON, TOOK A DUMP. It's really the comma that makes it.)

Saturday, April 25, 2009

against teleportation

[Here's a Philosophy paper I wrote during my first semester of college. I wasn't that great at Philosophy, probably because I wanted to tackle these questions with sheer intellectual force rather than methodical reason...I guess you could call it a kind of brain–brawn? Anyway, I liked this paper, so here it is. B+/A-. The pictures were not included in the original paper...but you could have guessed that on your own, I bet.]


Kirk's Dilemma
Personal v. Animal Identity
[Short Round]
Philosophy 110a
December 10, 1996

I begin a sentence with the word "I." That single letter represents my identity and my self, of course, but does it refer to the same thing that it referred to when I first learned it, many years ago? Would it refer to the same thing if my brain were irreparably damaged and I forgot everything I had ever known, returning to a mental state comparable to that of infancy? If someone were to replace my mind with the mind of Bernard Williams, what would it mean if I wrote, "I begin a sentence with the word 'I'"? According to John Locke, personal identity consists of and is limited by consciousness: awareness of self and memory of past define the self. If I switched minds with Bernard Williams, the fingers attached to the body I previously inhabited would be typing the thoughts of Bernard Williams, and his mine, for each person's identity would exist wholly within the other's body. Locke contrasts his definition of personal identity with that of animal identity, which does not depend on continuity of memory. The distinction he draws there has been attacked by those who feel that identity is inherently attached to the body, those who believe that "replacing my mind with the mind of Bernard Williams" would be tantamount only to creating a serious delusion of psychosis within the two of us, causing us to type away at our respective computers convinced that we were someone else. The arguments raised against Locke's views do not disprove the distinction between animal and personal identity, but they do successfully challenge the assumption that personal identity is all that is significant to a person's self.

According to Locke, animal identity is defined by the physical: while molecules and other small particles of matter may shift and change, there exists a continuity of body with a continuous life, meaning that any particular body, although what it is made up of may change over time, exists as something different from other similar bodies, with its own life. A man possesses animal identity as well as personal identity, but those two concepts are fundamentally different. Personal identity, to Locke, involves consciousness and the ability of that consciousness to stretch into the past. The continuity of a person must be based on that person's memories, for Locke argues that there is no other way to define a person without allowing for the possibility that the entire population of the world is the same person or that change in physical shape changes identity as well (amputation of an arm should not, in most people's view of reality, be considered a fundamental change in personal identity). Locke's view of PI (personal identity) thus disproves the claims of any madman believing himself to be Napoleon, because unless he recalls everything that Napoleon ever experienced, how can he be in any way considered to be Napoleon, even if some manner of immaterial soul was indeed transferred from French general to madman? If the transfer of such an immaterial soul without memory constitutes transfer of identity, Locke reasons, it is equally valid to assume that the transfer of molecules from one body to another constitutes transfer of identity. Very few would reason that a person whose body is made up of molecules that once helped to make up the body of Adolf Hitler is, in fact, Adolf Hitler, even if, by some strange trick of chance, every molecule in that person's body had once been in Hitler's body. Even fewer would agree that this person should be held responsible for the Holocaust. While disproving claims of transferred identity, however, Locke's PI theory lends credibility to the hypothetical possibility of such a transfer. If identity consists entirely of consciousness and memory, then it is perhaps reasonable to say that a person whose mind is identical to Hitler's, a person who has all of Hitler's memories and thought processes, is Hitler and should be held accountable for the crimes of Nazi Germany.

That last consequence of Locke's distinction between PI and animal identity may seem troubling, but we should remember that it is entirely hypothetical, and we are assuming that this person with Hitler's mind does in fact have all of Hitler's memories, for some reason, and does not merely think that he does. One might argue that this poor soul still should not be held accountable, but who is actually being punished if this Hitler-person is found guilty for the crimes of Der Führer? We wouldn't punish Hitler's comatose body for what he did while conscious, and that very instinct to consider mind over matter, so to speak, is what should make us consider strongly the possibility that the only thing significant about a person's identity is his awareness of his self and of past. It is the awareness of past that causes some logical difficulties, however. If my identity is defined by continuous recollection of my past, how do we factor in sleep? Are there holes in my existence from every time I have lost consciousness? Some say that sleep is not unconsciousness, and some even go so far as to say that any mental inactivity that truly counts as unconsciousness cannot be awakened from, because it implies total lack of self. Whether or not that is true, however, many memory theorists would be willing to say that there are gaps in a person's identity, because although such an allowance may seem odd to anyone who does not yet fully embrace Locke's PI distinction, it does not, in fact, have any logical flaws. Identity, if defined by memory, can be and in fact is shaped by a person's consciousness—that is true by its very definition and is almost absurdly repetitive to state.

A trickier problem is that raised by the question of change in memory: if I remember now, at eighteen years of age, what it was like to be ten years old, and if I later, at age forty-two, remember what it was like to be eighteen but do not remember what it was like to be ten, that means that the 42-year-old me and the 18-year-old me are the same person, and the 18-year-old me and the 10-year-old me are the same person, but the 42-year-old me and the 10-year-old me are separate individuals. In other words, x=y, y=z, and x≠z, a logical impossibility. Logical impossibilities aside, one must question any position that implies that something that happened to someone and was forgotten therefore did not happen at all. A response to this is that Locke's PI requires only that every part of a person's past is remembered at some later point, so that continuity need not flow smoothly as long as it does progress. The fact that the 42-year-old me remembers the 18-year-old me means that the memories of the 18-year-old me hold significance to the 42-year-old me's identity and continuity. In addition, if I at 42 remember that I at 18 remembered what it was like to be 10, that implies a continuity of consciousness, regardless of actual immediate recall capabilities. If such allowances are not made, you are not the same human being that existed under your name ten years after your birth (or what you believe to be your birth, we must now say). What, then, must we conclude if someone has false memories? It is an indisputable fact that people's recollections of past events are sometimes inaccurate: large groups of people who witness the same event almost invariably produce differing accounts of what occurred—often radically different accounts. If I become convinced that I was abducted by aliens as a child, does that mean that alien abduction is now a part of my identity and my past, even though it never occurred? And if I am instead rather smugly convinced that it did not occur (as is true*) when it in fact did, does that mean that it is not a part of me and my history? The simple response is that I may have a flawed sense of my entire self without its affecting my self, the assumption being that the false memories are delusions and that the actual memories exist somewhere in the mind, possibly forever dormant. If I was actually abducted by aliens when I was 10 but that memory was erased from my mind by the government as part of a massive cover-up, a memory theorist might confidently respond that the abduction is not part of my PI, and although it happened to my body and is therefore part of my animal identity, it did not happen to me, and it would be false from a Lockean point of view to say, "I was abducted by aliens." We must keep in mind that what is true for our personal identities is not necessarily true for the rest of the world.
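[A 2009 aside, not in the original paper: here's the transitivity objection in bare symbols, a minimal sketch in which M(x, y) is shorthand, mine and not Locke's, for "x remembers being y."]

```latex
% A minimal sketch of the transitivity objection (not in the original
% paper). M(x, y) abbreviates "x remembers being y."
\begin{align*}
  &M(\mathrm{me}_{42}, \mathrm{me}_{18}), \quad
   M(\mathrm{me}_{18}, \mathrm{me}_{10}), \quad
   \neg M(\mathrm{me}_{42}, \mathrm{me}_{10}) \\
  &\text{If identity just is } M:\quad
   \mathrm{me}_{42} = \mathrm{me}_{18},\;
   \mathrm{me}_{18} = \mathrm{me}_{10},\;
   \mathrm{me}_{42} \neq \mathrm{me}_{10}
   \quad\text{(contradiction, since } = \text{ is transitive)} \\
  &\text{The repair sketched above: identity as } M^{*}
   \text{, the ancestral (transitive closure) of } M\text{:} \\
  &\qquad M^{*}(x, z) \iff M(x, z) \lor
   \exists y\,\bigl(M(x, y) \land M^{*}(y, z)\bigr)
\end{align*}
```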

One argument leveled against Locke's PI/animal identity distinction involves the hypothetical situation of a mind-swap. If A's memories go into B's body and B's memories go into A's body, can it be said that the A-body is B and that the B-body is A? If you (A) are told that you and your friend (B) will undergo a similar mind-swap, and if you were then told that one of the two bodies would live the rest of its life in a state of total agony while the other lived the rest of its life in a state of total ecstasy, what would you say if you were told in conclusion that you had the choice of which body got the pain and which body got the ecstasy? You would probably choose that the B-body receive the ecstasy, assuming that you yourself would soon be in that body. But what if the situation were phrased to you without the mention of B and the B-body at all, Williams asks. What if you were told that your memories would be taken from you, that you would be given the memories of someone else, and that you would finally be placed into a state of total agony? Total agony, Williams argues, is no better when you are a miserably deluded person who no longer remembers what is going on or why he is in agony. In some ways, says Williams, it is worse. He acknowledges that fear of the future can be altered if it involves a psychological change in relation to a psychological cause, meaning that an arachnophobe might not be terrified to know that he will soon be covered with spiders if he is told that he will love spiders when it happens, but he maintains that psychological changes have no relation to physical causes, meaning that our bodies will feel pain regardless of what we remember and who we think we are and that we should therefore fear future pain no matter what mind-swaps occur. In response to that, we must point out that anyone can fear anything in the future if he believes it will be happening to him, and such a fear does not prove that it is happening to him. Williams' point therefore does not necessarily affect PI and does not in any way eliminate the possibility that identity can in fact make a total switch of bodies.

Imagine, then, for a moment, that there exists a machine that is capable of recording the relative locations of every single molecule in your body, along with all the energy, electric charges, and neural impulses therein. Imagine that this machine sends this information to another machine, and that the latter machine uses the information to pull random molecules from the air and create a new body, physically identical to the original, with a mental state identical to yours at the moment at which the first machine was activated. Imagine that, at the instant that the new you is created, the old one is reduced to random molecules—the opposite of the other process—so that you are effectively transported across an unspecified distance. Keep in mind that the molecular make-up of the human body is in constant flux: although some parts of the body such as brain cells never die or are remade, the molecules they consist of shift on a regular basis. It is therefore true that anyone who suggests that a new body created by this machine would be inherently separate from a teleporter and his identity is also suggesting, presumably inadvertently, that your body now is inherently separate from your body when you were ten—not different, but inherently separate in terms of identity. I should suggest that such a claim is a logical catastrophe: a person is therefore not a continuous being at all, and no one is the same individual from one instant to the next. If that is true, there can be no such thing as responsibility unless we treat this infinity of identities as one, just as countless still images can become a single moving picture, and if we make that leap, the assertion becomes meaningless, as we have returned to the original assumption that there exists a continuity of identity.

Imagine now that you have the opportunity to use this hypothetical teleportation machine. You can push a button and instantly appear elsewhere. But is that really what will happen? Can the new you be held responsible for all that you have done? Even if it can, does the fact that its memory can extend backwards in time mean that your consciousness will continue forward in time? Will you be teleported, or will you die, only to be replaced by a duplicate somewhere else? The duplicate will of course think it is you and behave like you—will behave exactly as you would, in fact, were you instantly resituated in space to its point of origin—but will it be you? Locke, of course, would say yes, because there exists a continuity of memory and self-awareness. The fear of death that you might feel—the feeling that being reduced to random molecules is not a form of transportation—is based on a limited view of identity, and if you are confronted with the fact that you are saying that your identity is based entirely on your body, you might reconsider your fear.**

But what if the machine malfunctions? What if the new you exists for a good ten seconds before the old you is disintegrated? The new you would then cease to be identical with the original you, because your lives would diverge for ten seconds, changing your respective PIs, and the disintegration of the original you would seem clearly to be your death. Can we argue that the difference of ten seconds also makes the difference of life and death, that if the two yous had not accidentally existed simultaneously, you would not have died but would instead have continued? I, for one, am not confident in that assertion. To quote John Perry's interpretation of Bernard Williams' nonduplication argument, "A doesn't cease to be the A-body person simply because the B-body person is hanging around." We can rephrase and rethink that slightly to say (keeping in mind that if x=y, then x=y***), "A doesn't cease to be the B-body person simply because the A-body person is hanging around." Locke's PI/animal identity distinction allows for the existence of two beings with the same PI, assuming that their experiences do not diverge, and although you and the new you will diverge in experience from the moment of malfunction, there is no reason why the new you is any less you than the original you. At the moment of its creation, it is you, by Locke's argument, and although it diverges from you in experience, that does not mean that it is no longer you, but means rather that the meaning of the term "you" has changed.

(i) A pushes a button. [A="you"]
(ii) B is created. In the instant of creation, A=B. [A="you"=B]
(iii) A and B diverge. ["you"=???]

Our instinct is to insist that A continues as the sole being with the right to be thought of as "you." If it is not, we must accept that the concept of "you" no longer exists as it once did, a thought which is necessarily discomforting to those of us raised to put stake in our identities, but not in any way unreasonable. The problem is this: we tend not to want to acknowledge Locke's PI when dealing with terms such as "you" and "I." After (ii), the question of whether or not "you" continue depends entirely on how you choose to define "you," and choosing A proves only that you are more prone to accept a version of yourself that was not rebuilt than one that was, which is not an entirely reasonable position, considering the aforementioned flux of our physical make-up.

Regardless of A's identity, however, we have created a situation in which A is to be reduced to random molecules in a matter of seconds, which constitutes death from almost any point of view. Whether we look at it from the standpoint of PI or of animal identity, and whether or not "you" and your PI will continue to live in the hypothetical situation we have crafted, A and the body you now inhabit will die. This brings us back to Williams' point about pain: perhaps pain is something to fear regardless of your identity and your self; perhaps it is a mistake to think that a basic sensitivity to pain and other physical things is affected by PI at all. If anything, Locke's distinction between animal identity and PI supports and is supported by Williams' point about pain: while PI can be divorced from the physical, enabling things like mind-swaps to be a possibility, animal identity cannot escape pain or other physical hardships, and it is that distinction that helps to solve the argument between memory theorists and their critics. If we refer to your animal identity as you and your PI as You, we might suggest further that you, although some might argue that it is not as important as You, is quite significant in an individual life, and from that we might draw that while You might be able to enjoy the manifold benefits a teleportation machine can provide, you must look upon such a machine as no more than a technologically advanced guillotine.



* I think it's funny that I felt I had to include this. Or was this a little joke (on "smugly")?
** [This footnote was in the original essay.] Perhaps you fear teleportation because you believe that identity is based neither on your body nor on some sort of continuity of memory, but rather on a soul in which consciousness is contained. Is there any reason why such a soul could not leap from one body to the next, especially if a new body is created in need of a soul, and that new body is identical to the one of yours that just disappeared? The soul must jump around molecules on a regular basis anyway, as we previously determined, so it must necessarily possess some level of mobility and willingness to accept physical change.
*** Huh?

Thursday, April 23, 2009

seeing Sara

[Herefind a deleted excerpt from a short story I've been working on. The excerpt: no longer appropriate for the story it (the story) has become. Herefind: now a word.]


He did know this: Sara was divided from herself, as God divided night from day, earth from water—as though a wall had formed inside of her, a cancer. She was lost in the past, her mirror showing Sara at 17, raw, wet, hideous, hidden away inside a plastic Sara controlled from a command center inside the forehead—Sara who turned herself into a fly, once, on her own wall and could not be sure now that she had ever changed back—so when people loved her, she could only say, "They love a lie." But that was the lie. She just had a bum mirror. She hadn't understood that when she had seen his eyes point as if at hers, those eyes were not a picture, not a narrative, not a mistake or illusion. He had really seen her. There was no plastic Sara. Nothing was hideous.


[Oops, there's no darned color in this post! Here you go:]

darned color

Wednesday, April 22, 2009

great songs of particular length

More great songs less than 2 minutes long:
"Mean Mr. Mustard," "Polythene Pam," and "She Came in Through the Bathroom Window" by the Beatles (Lennon, Lennon, McCartney)
"Stop That Train" by the Beastie Boys (again, if you take it as separate from the B-Boy Bouillabaisse)

Great songs that are exactly 2 minutes and 40 seconds long:
"Picture Book" by the Kinks
"Baby Please Don't Go" by Them

3:18 is a good length for a song
"Trigger Cut/Wounded-Kite at :17" by Pavement
"Hell Yes" by Beck—possibly my favorite Beck track this century (not saying much)
"Shake Your Rump" by the Beastie Boys
"I Will Dare" by the Replacements—one of my favorite songs ever

3:23 ain't so bad, neither
"Calistan" by Frank Black
"You Won't See Me" by the Beatles (McCartney)—oh, those backing vocals
"It's So Easy" by Guns N' Roses

Go 3:40
"My Michelle" by Guns N' Roses
"Victoria" by the Kinks (see excellent cover by the Fall)
"Don't Think Twice, It's All Right" by Bob Dylan—another of my favorite songs ever

Look at 4:17 go
"Lithium" by Nirvana—the song that got me back into Nirvana after a decade away
"The Good Life" by Weezer—I don't wanna be an old man anymore.
"Waitin' for a Superman" by the Flaming Lips—oddly affecting

I think it's funny that these songs are the same length
"Lust for Life" by Iggy Pop
"King of Rock" by Run-D.M.C.

Surprisingly good for karaoke
"Perfect Day" by Lou Reed—feel like a Sinatra crooner and a 1970s Lower East Side junkie–hipster all at once! This one has nothing to do with length. But it's 3 minutes and 48 seconds long if you're curious.

Aprils 22

(or, the Ghost of Apr. 22 Past)

1998
Over a Pierson lunch, I once again found myself discussing the evils of religion, this time up against T—, who ended up telling me that "people just like [me]" are the ones who will lead to what's basically the end of the world, "people who think they know everything." I told her that this was a willful misinterpretation of what I had said (realized later I should have pointed out that she was speaking in clichés). I had said exactly the opposite, that part of the problem with religion is that it pretends to have all the answers. I told her that what I believed was that we don't know everything and that it's better to accept that than to accept superstitious primitive explanations. And yes, lots of good has come from religious people, and lots of meaning has been derived from religion, but that is a testament to the ingenuity of humanity, not the inherent worth of religion. T— still thought that "people like [me]" who think people can exist with no "spirituality" (which she refused to define) are the source of all future evil (in slightly different words). The tyranny of cliché, huh? She (and I guess most people, myself included) has trouble seeing around cultural heritage, can't look objectively at anything even remotely connected to her own life—and thus cannot truly accept those things.
On the other hand, A—, later, a new Catholic, surprised me with an intelligent point about it all. I doubt she meant quite what I took it as, but it struck me nonetheless— let's see if I can reconstruct—
—So you think religion is lies?
—Yes.
—And you think God is lies?
—...It depends on what you mean.
—[Maybe a specific religion's portrayal of God is lies, but the concept of God is true.]
Her relationship to Catholicism is cultural and family-based. Her relationship to God is intensely personal.
She went on to say something I heard as this: the specific object of faith may be false, but faith qua faith is very real.
"Qua" is a new word for me, but I think I have it right. Maybe not. I should just say that faith... no, qua works, I think... but that the essence of faith, faith itself disconnected from its object, e.g. blind faith in the goodness of God sans God(?), could be a very good thing.
This might be an answer to the question that's bugged me, the conflict between my psychoanalyst's belief* that ignorance, bliss or not, is bad and my awareness that the bloated superstitions of, say, Christianity and Islam and Judaism have led to the enrichment of individuals and the world (arguable) and to the increased happiness of many lives (less arguable). It's the faith that matters, but not faith in a Christian sense (or even probably A—'s sense as she said it), not faith as something directly related to and defined by its object, I mean faith—well, qua faith.
I'm not sure how I feel about that.
Then there's the Third Reich categorical imperative. As Arendt reports, Hans Frank developed a Nazi version of Kant's categorical imperative: "Act in such a way that the Führer, if he knew your action, would approve of it" (Die Technik des Staates, 1942, pp. 15–16). To focus on only one of the absurdities in Nazi Kant, we find that the morality of the categorical imperative has dropped from the highest level to an abstraction of the lowest—i.e., from absolute acceptance that one does not do something to an idea that one does not do something for fear of punishment, that punishment leaving the sphere of direct threat and becoming a more abstract fear, linked nonetheless to actual authority. (This version of that low-level morality is on a similar level to fear of God.)
My worry is that, as much as I believe (with some reservation) in the categorical imperative (e.g., I think you should not kill, for no reason other than that you should not kill),** I do not live up to those high moral standards. What if I believe in the categorical imperative only because of a Third-Reich/bad-faith sort of morality, behaving a certain way because of an awareness of oneself as an observable object (observable in the abstract) and thus out of an abstracted fear of punishment, the punishment one of lost love? I.e., person [S] (which may or may not stand for [Shorty]) does not kill, because he can imagine himself perceived by another who might think poorly of him for killing. Again, we have a fear-of-God morality, perhaps more on the God's-love side than the God's-wrath side. Even worse, what if this abstracted fear of punishment is not based on fear of hypothetical observation, but rather a meticulously overcautious defense from possible real observation? What if [S] acts in such a way that anyone, if he knew [S]'s action, would approve it, just in case someone actually does catch sight of him doing it? In that case, [S]'s morality is on a lower level than fear-of-God, and rests on the lower ledges of fear-of-human-disapproval.
On the other hand, this scale may be flawed. As I found with the parallel-but-different levels of Third Reich morality and fear-of-God morality, it is not linear (which would be nice), and I'm not sure it's so clear that a fear of human disapproval is less noble than a fear of God's disapproval (God being fictional, other people being nonfictional). It would be a good thing, after all, to act in such a way that others are not harmed by your actions.
But that's my worry. Not a major one, and I guess I expressed it last year in different terms, when I worried about my now-defunct pride in honesty. I kept loving P— through Summer '96 because she'd be hurt if she knew I didn't anymore. For example. A simplification, but one with some truth in it.***
Hence my guilt at describing M— many months ago (I remember this very clearly) as hip-sexy (I notice M—'s nipples through her shirt—heaven forfend!). And I really did—and do—feel sort of weird about having written it. And I really do (unfortunately) believe that I feel weird only because H— could theoretically read it. Even if the file were password-protected, I'd feel that. Now, noticing that another girl is attractive in no way constitutes infidelity, and intellectually I'd feel no guilt. I'm in love with H—, and my roommate's friend's mammaries pose no threat to our relationship. And yet I must act in such a way that H—, if she knew of my action—not that she's Der Führer at all—
I'm getting into a whole different territory, spurred on in part by the baseless jealousy H—'s been bringing up intermittently over the past semester—but the foundation I should return to now is that of morality and why I do what I do.
It comes back to religion, too. I say it's bad that people believe a lie, even if that lie makes them happier and helps them be good. Without religion, I do believe, the overall moral good-behavior of the world would plummet. I do not have much faith in man's inherent morality. I do think that most people need threats to keep them on track. I feel bad saying that, but I think it's true. History demonstrates. (Hence the relevance of "We cannot tolerate the use of threats and force by one group to impose its views on others.") But so is it then reasonable to criticize religion, to criticize Plato's noble lie? Do I try to be good just so [to sum up] [Short Round] is good? Is it bad to be good because you should be good? a level removed from Kant's categorical imperative? "A foolish consistency is the hobgoblin of little minds." Somewhat related. How does it all fit in?
I found in History 272b that I kept agreeing with the philosophies that came up. That's not true at all, I guess. But existentialism rang true to me, and then so did the death of the subject. I think, "Go go postbourgeois subject!" and then I realize what exactly I'm rooting for. I don't know. There are just so many systems of thought.
In our last 125 class, Fayen talked about the tyrannical effect of a curriculum over the readings of a course, pointing out that grouping any books together in any order imposes a certain interpretation over all of them. To Sartre (and many others), existentialism was the culmination of intellectual thought. To us—well, so much has followed.
"The good of the one outweighs the good of the many." Or something. Star Trek IV. Do we agree? It's that paper I wrote for Philosophy 110a. I don't quite remember my conclusion. The little girl, or the world? The rugged individual, or the Moonie mass marriage? Of course it's not so simple a dichotomy.
Here's something simple: I have an exam tomorrow.
There. Simple.

[later that day]
Frank Zappa, as quoted on St. Alphonzo's Pancake Webpage: "Stupidity is the basic building block of the universe."


2001
Superman III—an experience, to be sure. Part of what I felt was not an entirely successful evening...but of course it wasn't. What was I thinking, putting J— and T— and C— together? There was no call for it. I'm not good in groups, so where's the rationale for creating my own group, one that would never arise in nature, not in a million years? Not in a million millions? (In some places you'll read, e.g., "a hundred thousands," instead of "a hundred thousand." Just a note. We're pretty used to "tens of thousands," which incidentally is the only context I've ever seen for the plural of "ten.") It was something of a wretched day, frankly. Driving J—**** to Target in New Jersey in the late afternoon of a Saturday, through the Lincoln Tunnel in traffic (there and back twice, as it happened, through some bad navigation), and all four hours of it colored by emotional acting up on the part of us both—wretched, yes. Then straight to this ridiculous evening. No harm done, of course, but I should hang out with T—, or with J—, or with C—. Not all three. It's absurd.
On a lighter note, I am reading Ada. What do I take with me to Europe? I do not know.


2004
I've got to say—this Jesus script is really, really brilliant.
Jesus is fighting a giant rat right now.



* I did not at the time have a psychoanalyst: I'm using the word the way you might use...hm...like, "My American's faith in the world order," or, "My TV-watcher's sense of reality."
** Kant : sophomore Shorty :: Rome : sophomore A—. All about Daddy.
*** Interestingly, in retrospect it's clear that this is the fabrication: I did a lot of work to talk myself out of loving P—. Ah, yes, very deep, the well of the past—may we not even call it bottomless.
**** Different J—.

Tuesday, April 21, 2009

user friendly


Aww, isn't she ADORABLE!

One thing:

Who's the target audience, here? Because I've used computers my whole life, PCs and Macs both, and I can't tell what the fuck this little girl is doing.

The point, clearly, is how easy it is—an appeal, I guess, to those who might otherwise be swayed by the Apple operating system's "user friendly" rep—but the overall impression (to me, at least) is confusion. Here's a step-by-step:

How Kylie Sends Pictures of Her Fish, Dorothy, to Her Family
  1. "I plug this thingie in here"—OK, with you so far.
  2. "And now you click this"—wait a second, wait a second. Some window popped up, and then another one did—not clear what part is happening as a result of Step 1 and what part Kylie is doing herself by clicking...or what she clicked on, or what the clicking is for...
  3. A picture comes up on the screen. Was that still from the unspecified clicking, or is it another step Kylie just isn't commenting on?
  4. "I'm going to make this picture much better. I click"—Kylie clicks on...well, I guess she's clicking on something that says "Auto adjust," although you can only tell that by freezing the frame—"It's better!"
  5. "I'm going to send it to my mom and dad"—magically we are already on an e-mail window, and Kylie is clicking send.
The dude at Slate commented on how emphasizing that "even a child can do this" is sort of culturally tone deaf ("Who among us doubts the superior technical savvy of the modern child?"), but it is also extremely telling that every shot of the screen is like a close-up of one part of one window: you very rarely see a whole window, or watch the mouse move to click on anything in particular.* It's like MTV editing where you only catch glimpses of what's going on.

In other words, this ad about how easy it is to do something on a PC takes great pains to make it as unclear as possible what's actually being done on the PC. The one thing we know for sure is that the editing obscures the process rather than illuminating it.

And what does that tell us?

I'm not even going to comment on this campaign as a response to the "Hi, I'm a Mac" campaign because it's too dumb. (Not dumb in terms of hoodwinking consumers, necessarily—it may be extremely canny from that perspective—just dumb if taken seriously as a serious argument of any kind.) But what I will say is that the overall effect of this ad is to make computers seem more mystifying: watching this, I feel the way I imagine my grandmother would feel if I tried to show her how to enable file-sharing or something.

There is something deeply, subtly wrong with this ad, and I suppose what it really comes down to is the main thing I'm usually complaining about when I'm complaining about most ads: it is bullshit, and it is designed to manipulate you. I think we all need to be a little more irritated by this shit than we generally are—that's what I'm sayin'.

Respek.



* Compare to this old iPhone ad, in which you are actually watching—in real time—as a hand model(?) actually uses the actually extremely user-friendly iPhone.

Monday, April 20, 2009

Jedidicy: justifying the ways of Lucas to Man

O.K., get ready for a nerd explosion. Here we go.

Table of contents:
1. Introduction
2. Disclaimer
3. Apology
4. The theory


INTRODUCTION

When the Star Wars prequels came out, I could not contain my excitement. If that had been true only of The Phantom Menace, it would have been one thing. But it was still true after I saw The Phantom Menace and knew it to be deeply, deeply bad: even though that movie pained me, injured me, even so, I was doing the metaphorical equivalent of salivating—hell, I was literally salivating—by the time I got to see L'attaque des clones in Paris three years later. But no, it was worse than that: being excited to see the second prequel after hating the first is bad, but I saw the first one like eight times. Stop for a second: did you hear and understand what I just said? I said I hated the first prequel and still watched it again and again and again...in the fucking theaters!

There is something wrong with me. Of course, whatever it is is evidently not unique.

I grew up on Star Wars. I saw Return of the Jedi with my dad in the theater; I think we drove to Brooklyn or something in order to avoid the crowds. I had mad action figures, yo. I had the sheets, I had the Underoos. I had this. I even had the pre-hipster retro dorky-cool semi-demi-ironic Star Wars passion in the early-to-mid 1990s: there was a time when there was something great about talking with people about Star Wars, when it was almost kind of a funny topic of conversation, whereas of course now the prequels have ruined that. Yet another thing the prequels have ruined.

Summary: I was brainwashed, sorta.

But if you can believe it, I am not here (up on this soapbox) in order to complain. Nay, I come rather (if you can believe it) to explain how the prequels are not quite as terrible as they... Well, not seem. Let me put it this way: how the prequels are not quite as terrible as they actually indeed indisputably are.

First, an unnecessary ranking of the six films, which I'm not sure why I'm including except that I know that people will disagree and want to argue with me:

Star Wars: A-
The Empire Strikes Back: A
Return of the Jedi: B+
The Phantom Menace: C-
Attack of the Clones: C+/B-
Revenge of the Sith: B

[I have corrected partially but not completely for grade inflation. Here a C is not an F; however, an A is not a real A, either. If you know what I'm sprayin'.*]

So that's, what, an average of about a C+ for the prequels? In a series with an overall B average, where the first three films averaged out at a solid A-?
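(If you want to check that arithmetic, here's the back-of-the-envelope version, assuming the standard 4.0 scale and scoring my split C+/B- grade as a 2.5:)

```python
# Sanity check of the grade averages above, assuming a standard
# 4.0-scale mapping; the split "C+/B-" grade is scored as 2.5.
GRADE_POINTS = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0,
                "B-": 2.7, "C+/B-": 2.5, "C+": 2.3, "C-": 1.7}

originals = ["A-", "A", "B+"]     # Star Wars, Empire, Jedi
prequels = ["C-", "C+/B-", "B"]   # Phantom Menace, Clones, Sith

def average(grades):
    return sum(GRADE_POINTS[g] for g in grades) / len(grades)

print(f"originals: {average(originals):.2f}")  # ~3.67, a solid A-
print(f"prequels:  {average(prequels):.2f}")   # ~2.40, about a C+
print(f"all six:   {average(originals + prequels):.2f}")  # ~3.03, a B
```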


DISCLAIMER

I have been known to come up with elaborate, persuasive theories that are not necessarily true. Usually these are the result of rationalization, by which I mean that my motives in developing the theories have to do less with the pursuit of truth than with the construction of a more palatable truth. For example, I will come up with excuses for people when it appears that they have done something rather embarrassing or shitty, and people will say, "Hm, well, that makes an awful lot of sense, and you may be right, but frankly I think you're way wrong." This is not what I am doing here today.

What I am doing here today is speaking truth to power.


APOLOGY

Generally I have mixed feelings about referencing Star Wars "facts" that didn't happen in the Star Wars movies themselves—hell, I don't even like to treat the prequels as "canonical." I think, for example, that the name "Palpatine" might not actually appear in the original films; I might have known it only from reading the novelizations—I'm not so sure about that, but I am sure that shit like what you read on Wookieepedia is preposterous. I gather that contemporary Star Wars fans (age 17?) talk about what's canonical, what's part of the "extended universe," and so forth; to me, there are three categories: the original movies, the prequels, and total ridiculousness. I mean, when I first looked at Wookieepedia (after a student recommended it), I found myself reading about the Sun Crusher, which "was built by using funds diverted from the Death Star I project": ludicrous. Later I learned that Boba Fett didn't die when he fell into the Sarlacc ("the Sarlacc could never hold Fett"). Apparently he escaped, fell back in, was regurgitated, fell in again, and wound up leading the Mandalorians (whom you have not heard of): "After returning to Mandalore, Fett made a few controversial decisions..." And while writing this I found the following information under the heading "Did you know..." on the main Wookieepedia page:

  • ...that Dezono Qua would purchase slaves of various sentient species so that his droid E-10 would cook and serve them to him?
  • ...that protocol droid C-3PO once served fungus crackers as a snack to Jedi Master Mara Jade Skywalker?
  • ...that Pter Thanas saw action on F'Dann IX early in his career in the Imperial Navy?
  • ...that Colonel Niovi surrendered the Super Star Destroyer Guardian to the New Republic?
  • ...that Helen and Roric Goldenfield were Tatooine residents who had trouble with a group of pirates?
  • ...that Obi-Wan Kenobi once left the Jedi Order to take part in a civil war on Melida/Daan?
No. No, I did not know any of that. I have heard of C-3PO and Obi-Wan Kenobi and Tatooine. But none of the rest of that shit is real. C-3PO and Obi-Wan and Tatooine are real; the New Republic, Mara Jade Skywalker, and Melida/Daan ARE NOT REAL.

BUT:

I am going to draw on the ridiculousness because here the ridiculousness connects to something you can actually see in the movies. (Of course, the central question is whether that's "you can see" in the sense that you can see something apparent that inheres in what you're observing or "you can see" in the sense that you might perceive something that isn't there—the way "you can see" dippers big and small in the night sky. See "Disclaimer" above.) I guess the idea here is that it's not evidence; it's an illustration. It's not material; it's a tool.


THE THEORY

The very, very short version of why I think the prequels have some value is that I think they're like crappy picture books—by which I mean that just as the little kiddie-book version of Return of the Jedi (with like 15 pages and a record at the end that you were supposed to play while reading) wasn't as good as the actual motion picture Return of the Jedi, a movie like Revenge of the Sith isn't as good as what you might call the ideal version of Revenge of the Sith. My theory is that George Lucas is actually a pretty good storymaker and a godawful storyteller (which is why the very best of the six films was not written by George Lucas and the very worst three were). The Star Wars prequel trilogy is like a crappy storybook version of a much better series of films that just unfortunately does not technically exist.

Here's what I mean:

In the Third Great Salivation Period (2004–2005)—during which, in spite of my disdain for Star Wars Episodes I–II, I grew increasingly crazed in my anticipation for Star Wars Episode III—I purchased and played two video games (note, please, that I do not generally play video games): Jedi Knight II and Knights of the Old Republic. I had heard that the latter was much better than the prequels, and it is that game that I'd like to discuss with you (see "Apology").

In Knights of the Old Republic, you get to go to both Jedi school and Sith school. (The Sith, a term I believe was also never mentioned in the original movies but only in the novelizations,** are bad Jedi.***) You're given the ability to choose whether to be a good guy or a bad guy, so if you're going down the path of the Dark Side, then when Yoda's grandfather*** says, "Are you ready to begin your Jedi training?" you can say either "Yes" or "Yes [lying]." Funny, but not important. What's important is that you know already from the movies that the Jedi are against getting too personally attached to anyone—that's particularly clear in the prequels, but in the originals, too, Yoda and Obi-Wan are both telling Luke that he should give up on his dad, that he should forget about his friends when they're being tortured in Cloud City—and of course they're also always trying to get him to chill out and keep the passion under wraps. The Jedi want you to clear your mind of all emotion, whereas the Emperor wants you to get mad, feel the hate flowing through you, etc.

With me so far?

Well, in Knights of the Old Republic, you learn—this isn't heavily emphasized, but it's clear and unambiguous—that the one thing both the Jedi and the Sith have to teach you in their respective training programs is that love is not kosher. The Jedi tell you to avoid love because it leads to passion; the Sith tell you to avoid love because it leads to compassion. But either way, caring too much about any one person is to be avoided—Jedi and Sith agree!

So now look at the course of the whole six-movie saga. What is it about? Anakin Skywalker is supposed to "bring balance to the Force," yeah? The Jedi weirdly assume that "balance" would mean wiping out the Sith, but whatever: as it happens, Anakin Skywalker does wipe out the Sith (when he kills the Emperor and himself goes back to the good side, no more Sith, prophecy fulfilled). But it's not just that. A big part of what makes Anakin go bad is that he is secretly in love with Natalie Hershlag and gets no support from the Jedi (after having been separated from his mother with no sympathy from the Jedi), so all the Sith have to do is pretend to help him out, to care about how he feels, and he's theirs. But of course Darth Sidious doesn't give any more of a crap about Natalie or the Moms than the Jedi Council does. Again, Jedi and Sith (and Snoop Dogg) agree: "We don't love them hos!"

But so then what have we got? Luke Skywalker ignores what Obi-Wan and Yoda and Darth Vader all tell him and risks everything to try to save his dad. Why? Because he thinks it's strategically valuable? No: because it's his dad. Repeatedly Luke goes against official Jedi advice by making choices to save his loved ones or family members, and this is what wins the star wars. Why-and-how does Anakin Skywalker end up killing Darth Sidious? Because-and-when he is moved by the sight of his son's being electrocuted. In other words, the crucial, most important, pivotal actions in the saga have to do with people acting out of love, or at least out of caring for other people, which is exactly what both the Jedi and the Sith forbid. In other words, the balance that's brought to the Force is a new, third course, neither dogmatically Jedi nor Sith, one that puts love and personal attachments front and center. In other other words, all you need is love?

When you view the saga this way, then all the ridiculous bullshit that happens to Anakin Skywalker takes on at least something resembling meaning. Love is more important than any ism, maybe you could say.

I don't really know. But I do know one thing:

"Around the survivors a perimeter create" may be one of the worst lines of dialogue ever written by anyone in any context.


P.S. The prequels are terrible.



* In case you don't know what I'm sprayin': when I taught at an elite Manhattan private school, a veteran teacher complained that there were now in effect only 5 grades: A, A-, B+, B, and B-. That was a bit of an exaggeration, but true it was indeed that students responded to something in the C range pretty much exactly as you ought to assume someone would respond to a flat F. The idea that "B" means "Good" and that "C" means "Satisfactory" would strike a lot of people as ludicrous. But come on, people: 85%? If you get a C, that means (or ought to mean) that you're ¾ of the way to perfect! A recent article about how Fæcebook makes you dumb was based on a study of college students in which "Facebook users...had GPAs between 3.0 and 3.5, while non-users had GPAs between 3.5 and 4.0." A 3.5 GPA is a fucking B+/A-! This is the average? The dumbest of the dumb are getting B's?!
** I remember being very interested to see Darth Vader identified as "a Dark Lord of the Sith."
*** This is not quite precise and will piss some people off. Ha ha.

classy

Friday, April 17, 2009

The New York Times & reality

From: [Short Round]
Subject: "Darjeeling" to Be Paired with a Short
Date: October 22, 2007 10:22:12 PM EDT
To: letters@nytimes.com

To the editor:

Lia Miller's article on Wes Anderson's short film "Hotel Chevalier" is marked by an error... Natalie [Hershlag*] does appear in "The Darjeeling Limited": she has a very brief, nonspeaking cameo appearance...

[Short Round]


From: [nice person at the Times]
Subject: Fwd: Cxn? BIZ/10.23 – "Darjeeling" to Be Paired with a short
Date: October 23, 2007 3:56:59 PM EDT
To: [Short Round]

Dear Mr. [Round],

Thanks for your note about "The Darjeeling Express." It was forwarded to me, as I handle corrections for the business desk.

Natalie [Hershlag] is indeed listed as a cast member on the IMDB.com entry for the film, but the publicist for the movie says that the studio considers Ms. [Hershlag]'s appearance in a brief still shot "a reference" and that she "doesn't appear as a character." The studio has declined a correction...

Thanks again for writing.

Best regards,
[nice person at the Times]


From: [Short Round]
Subject: Re: Cxn? BIZ/10.23 – "Darjeeling" to Be Paired with a Short
Date: October 23, 2007 4:33:50 PM EDT
To: [nice person at the Times]

Thanks for your response. But the studio gets to make the call? I'm not referring to the IMDb: the actress Natalie [Hershlag] is on screen, alone and identifiable, for at least a couple of seconds, and the article says that "she does not appear in the movie." I suppose I understand why Fox Searchlight might choose to be narrow in their definition of the word "appear," but do they really get to dictate your paper's usage? Do they get to determine reality?


[Nice person at the Times] never replied.


Analysis: So how big a deal is this? Depends how you look at it. The New York Times, in this capacity, appears to be doing PR for the studio. If the Bush administration said, "No, waterboarding isn't torture," the Times wouldn't just accept that as fact and modify its content accordingly: if the content were political—were important, sure, fine—then this kind of deference to the source's sensibilities would be incredibly damning, would mean the paper was a mouthpiece. My point isn't that The Darjeeling Limited and waterboarding belong in the same category of importance,** just that a newspaper's job shouldn't be to report what interested parties want them to report. It's a matter of principle and ought to apply at all levels, high and low. Imagine if a corporation fed the paper some news, and then the fact-checkers turned to that same company for verification! Oh, wait, you don't have to imagine it: apparently that's exactly what they do.


[Because I couldn't find a halfway decent screenshot of the relevant scene in Darjeeling, I opted instead to illustrate this post with images of Natalie Hershlag brushing her teeth in "Chevalier." I hope you've enjoyed them—the girl does brush a mean tooth.]




* Portman. Much as I enjoy writing "Fæcebook," I often prefer to speak of Natalie Hershlag, Winona Horowitz, Robert Zimmerman, Allen Konigsberg, et al. It's just one of the many things I do to keep myself amused in these troubled times.
** I shouldn't have to say this, but I think I do, cuz peoples is DUM.

To the editors

Here is a small selection of the many letters I've written to the editors of major periodicals. These ones were unpublished (and, for the most part, uncharacteristically unpolitical).


Subject: subway etiquette
Date: June 19, 2006
To: New York magazine
I was surprised that your etiquette issue made no mention of people who stand blocking the doors of subway cars when there are other places in the car to sit or stand. Obviously in a fully packed car all bets are off, but otherwise these people are making it all but impossible for their fellow citizens to get on and off the train, for no imaginable reason other than sheer laziness or obliviousness, so that everyone has to wait five minutes for four people to squeeze past them into an all-but-empty train. This is arguably a greater crime than holding the doors. Those who walk onto a train and then stop directly inside the doors without moving to either side are particularly reprehensible and should probably be deported or at least forced to wear a special patch on their clothing.


Subject: cowardice?
Date: April 26, 2006
To: The New York Times
In "Pop Culture Beats Politics," Caryn James calls Thank You for Smoking "a cowardly film" because its tobacco-lobbyist hero, who wants to put smokers back on screen, never is shown smoking himself. James acknowledges in passing that "the satire is more about spin than smoking" (an understatement), but still she judges the choice to keep the smoking off-stage to be "hypocritical."
I wonder whether she finds Wag the Dog hypocritical for not perpetrating an enormous hoax on America or thinks The Ring is cowardly for not actually spitting up ghosts from the screen to murder its audience.


Subject: new filmmaking rules?
Date: April 7, 2006
To: The New York Times
In his review of the film When Do We Eat? (which I have not seen), Neil Genzlinger writes, "It's fine—healthy, even—to treat religious holidays with a little levity, but a certain respect is also mandatory." Mandatory? Is he kidding?
When did this rule get written, that films must display any level of respect for any institution, religious or otherwise? The film itself seems not at all interesting, but this is a rare instance where a negative review makes me want to go see the film, just to spite the reviewer.


Subject: "Here I Am Taking My Own Picture"
Date: February 18, 2006
To: The New York Times
Your article claims that "technology alone can't explain the trend" of kids taking their own pictures. However, the article mentions only in passing the single most important aspect of digital photography as it pertains to this trend: the ability to see your photos immediately after taking them. Add to that the fact that one need never pay to have a single print developed, and there's little mystery left.
Maybe in 1960 kids didn't take pictures of themselves with a Kodak Brownie, but it wasn't because they were less prone to self-aggrandizement: it was because they'd have had to wait who-knows-how-long to get that print developed, and it would have cost them some amount of money, however little. Compare the new photography to looking in a mirror (the 1960s version of the same thing), and then ask yourself whether kids were humbler back in the day.
The effect of having each photograph be totally free, totally disposable, and immediately available cannot be overestimated—nor, it seems, can the baby-boom generation's hunger for condescending and often alarmist pop psychology about how superficial kids have somehow become.


Subject: naming solution not completely satisfying
Date: February 5, 2006
To: The New York Times
The problem of married names is a stumper, and I applaud the Rudorens' innovation: the invention of a whole new name. The idea is admirable. However, I am far from ready to call the problem solved.
Ms. Rudoren gives as a key reason for her decision that she "didn't want to have a different name from [her] future children." But unless her intention is to establish a dynasty instead of a precedent, she will have a different name from her future children—as soon as they themselves marry and change their own names as their parents did. (That is, unless they decide to go the more traditional route, in which case she'll likely have a different name from her future grandchildren.)
At least three grandparents' names are generally scrapped in the naming of a single child, and it would seem the main (nonhyphenated) alternative to this—the Rudorens'—is to scrap all four (or to melt down two). Maybe the best solution of all would be to return to a more old-fashioned naming system: if we had names like Emily of New Brunswick, or Michael son of Bill and Sue, or brown-eyed Sara, then no one could feel slighted when baby got a name.



Subject: the real force behind liberal-baiting
Date: January 17, 2006
To: Harper's magazine
Thomas Frank emphasizes the fact that our nation's conservatives frequently adopt a threatened attitude toward what they imagine to be a hateful liberal elite, when in fact liberals are a minority far less powerful than that fantasy suggests. Although Frank is wise to note this seeming discrepancy, and although the political perversity in question is indeed revealing, it would be wrong to find any irony or paradox in it.
To someone of a liberal, humanist mind-set, the conservatives' real position might appear counterintuitive or even self-contradictory, but it is in fact quite straightforward; its coherence is evident in one of the examples Frank cites, in which Al Franken is imagined as knowing nothing about baseball (when in reality he is a big fan). The slur, of course, is that liberals are so snobby and out of touch with American values that they probably don't even like sports.
That's the rub. Speaking as someone who doesn't like sports, I can testify personally to the fact that no one in this country doesn't like sports. Political belief has nothing to do with it: sports are just about everybody's common ground. Some people care less than others, but to go so far as to dislike sports, particularly if you are a man, is to cast yourself in the role of a pissy eccentric: incomprehensible at best, and likely bordering on loathsome. Non–sports fandom is certainly a lonesome, true minority position. To push Al Franken into that category is to make him less powerful, not more.
Conservatives don't need to pretend that liberals are in charge: unlike most of us lefties, who tend to have some kneejerk sympathy for the little guy (even when it's unwarranted), conservatives are perfectly happy to demonize the underdog—maybe even at their happiest. The picture they draw of liberals is not of an imaginary overclass but of a dangerous, alien minority, a group of rightfully marginalized deviants trying unjustly, outrageously, to exercise their immoral influence over the right-minded majority. That's what liberal-baiting really means, or at least what it has come to mean. The Republican party has become our Nationalist party, as Anatol Lieven has suggested, and the imaginary enemy is not a threat from "above" but very much one from "below," or at least from "outside"; not the danger of monolithic oppression, but of subversive dissent.



[No subject]
Date: October 20, 2005
To: USA Today [via web site]
Your article about oral sex among teenagers ("Teens define sex in new ways") makes two crucial mistakes:
The first is imagining that this is news. As I can attest as a 28-year-old, and as your own statistics show (numbers for boys are about the same as they were in 1995; 88–90% of all adults 25–44 have had oral sex), young people have had these relaxed attitudes toward oral sex for at least a decade.
The second is concluding that intimacy is in danger. This is based simply on poor reasoning. James Wagoner thinks the changing attitudes suggest a "disconnect between intimate sexual behavior and emotional connection," and Sabrina Weill thinks they represent "confusion about what is normal behavior." But isn't the whole point that the concept of intimacy has changed and that oral sex is not in fact all that intimate anymore? And in what sense can kids be said to be confused about normal behavior if oral sex has in fact become the norm?
The truth is that attitudes have simply changed—and given that oral sex is in fact safer than intercourse (as your article acknowledges), it's not at all clear why that should be considered a problem.



Subject: "All Ears for Tom Cruise"
Date: July 26, 2005
To: The New York Times
I agree with Nicholas Kristof's sad evaluation of our news media's priorities ("All Ears for Tom Cruise, All Eyes on Brad Pitt"), but I wish he had made some effort to explain this state of affairs: without understanding a problem, how can we hope to solve it? It seems clear that the news will always focus on celebrities instead of genocide because celebrities are more fun than genocide, and because the news is now unofficially (and in some cases officially) entertainment. To change that, we would need either to change the taste of the population or to make the news media see itself not as just another business in the free market but as an institution serving a vital public need.



Subject: On [Bull]
Date: February 14, 2005
To: The New York Times
In "A Princeton Philosopher's Unprintable Book Title," philosopher Harry Frankfurt explains that he chose his "unprintable" book title "because I wanted to talk about [bull] without any [bull]." What a pity that The New York Times isn't up to that task.
What was once perhaps an admirable commitment to some sort of meaningful standard appears increasingly frivolous and embarrassing—and is, at this point, really only so much [bull].



Subject: proofreading help
Date: September 12, 2001
To: The New York Times
On your web site, there is currently an article with the headline, "Bush Vows to Avenge Attackers." In case you plan to use that headline in your print edition, I should point out that you have misused the word "Avenge": the headline makes no sense by any current definition of the verb "to avenge," unless what Bush vowed was to avenge the deaths of the suicide terrorists, which seems exceedingly unlikely.
The headline should read either "Bush Vows to Avenge Attackers' Victims" or, if you prefer the archaic construction, "Bush Vows to Avenge Upon Attackers." The sense you use, "to take vengeance UPON," has been obsolete since the 17th century.

crying for help

As Fashion Futurist used to say, "I was right." Well, sort of.

According to some dudes, the function of crying is "to send a signal." This is something I arrived at myself in the unpublished short story "Milton Hasbro Tries & Tries" (2003), excerpted below:

Somebody's baby was crying on the crosstown bus, and a little boy wanted to know why. "Babies cry for a lot of different reasons," the little boy's mother said. "Maybe he's hungry, maybe he's tired, maybe he needs his diaper changed. Maybe he just feels like crying."
The common link, Milton thought, listening, was that the baby needed some sort of attention in each case. Crying was crying for help. That's why we cry even now when we're older, he told himself; on some irrational level we're begging to be rescued. It's a surrender. Somebody—anybody—I need help. Take over!
And why do we cry when the pain is lifted? One day on the phone Milton cried when she told him she loved him. He cried silently, but the tears poured down the sides of his face. He was lying on his couch, holding the phone and just crying and crying. Why? She'd told him she loved him: a good thing, a happy thing. "I love you," she said, "so much." So much! And still he cried. Why?

A counterexample?

Thursday, April 16, 2009