Tuesday, December 27, 2005

On parent-child relationships

There is a great deal of precedent for automatic obedience of children to their parents. It's one of the Ten Commandments in the Bible (depending on your definition of "honor," anyway), and Confucius defined filiality, or filial piety, as the most important virtue to develop. The understanding that children should obey their parents no matter what is still an important one today.

I contend, being the anti-establishment pinko that I am, that the origins of this principle are conservative in nature. It is much easier to preserve the status quo if you raise your children with an attitude of accepting what you tell them, even if you are unable to adequately explain it. I think that a great deal of religion (not all religion, of course) is passed on like this, even if the transfer isn't always as insidious as I've characterized it. The people who have this attitude of non-questioning acceptance of parental authority drilled into their heads at a young age are going to be the ones more inclined to blindly go along with what those in power have to say.

Of course, there is ultimately a limit to the level of authority parents have over their children. In extreme cases of abuse or molestation, it is acceptable for the state to step in and remove the children from their parents' custody. However, there are many ways to mess up your kids that are perfectly acceptable to society. You can teach your kids that we live in a geocentric universe with a flat earth that was created by you, if you want. There's nothing anyone can really do to stop you from filling your children's heads with whatever absurdities you want. Of course, whether or not they'll ultimately buy it is unknown.

As a brief (ha! as if I'm ever brief) aside, the right of parents to inculcate their children with certain types of beliefs is really at the heart of the intelligent design versus evolution debates.

Anyway, it just so happens that a lot of the time, people grow up extremely close to their families, immediate or otherwise, and remain close with them for the rest of their lives. But this doesn't always happen. There is nothing intrinsic to families that makes that close-knit ideal come about. It happens often because the people who raise you have such a significant effect on your development that you usually grow up into somebody who meshes well with them. It can be extremely difficult for people who, for whatever developmental reason, do not mesh well with their families.

There is a potential internal difficulty, comparable (though usually less potent) to that which occurs in people who are gay but feel like they are supposed to be straight. If you're a black sheep who feels like you ought to adhere to the ideal of the family unit, you suffer from this tension. There is also a potential external difficulty, if the black sheep finds himself only a generation removed from precisely that type of family. If your parents want you to be a part of a close family which you are not interested in, then there's going to be stress.

I want to say that parents and family are important, but there is no reason that disobedience ought to be a cardinal sin. Family members can be, contingently, the most important people in your life, but there's no necessary reason for them to be. Blood is not always thicker than water.

Thursday, December 22, 2005

Christmas

There's an article at Slate about the illegitimacy tradition - that is, a sect or strain of Christianity that maintains that Jesus had an earthly father and the Holy Spirit only supplemented his birth. To me, this is as meaningless as the argument between Monophysitism and Chalcedonianism (which concerns the dual nature of Jesus as human and divine), and it is just as impossible to settle. The arguments come from interpretations of the available evidence, but there aren't real answers to these questions. All the same...

"In 1987, Schaberg, a biblical studies professor at the University of Detroit Mercy, published The Illegitimacy of Jesus. Her central argument was that Matthew and Luke's Gospels originally told of an illegitimate conception rather than a miraculous virgin one. University of Detroit Mercy, which is Catholic, publicly distanced itself from Schaberg's positions. She got hundreds of angry letters and a few death threats and one night awoke to discover that her car was in flames on the street outside her apartment."

It is two thousand years after these events first transpired. It is more than sixteen hundred years since the First Council of Nicaea. Almost five hundred years since the Council of Trent. The idea that people still have such conviction for their magical beliefs, after the humanist Renaissance and the industrial revolution, and that they are so insecure about those beliefs that they respond to some insignificant scholar whose position undermines their own with death threats and by burning her car - the very idea makes me slightly ill.

I'm inclined to say that these are the same people who maintain, despite all evidence to the contrary, that the founders of our country were zealous Christians, not vaguely deistic intellectuals, and that the trappings of Christmas have always been Christian imagery and were never associated with pagan holidays, even though this is not true. If you are able to believe in things for which there is no strong evidence either way (or for which the evidence is roughly balanced on both sides) with the same conviction you grant propositions that can be tested or deduced, then it is not much of a leap to believing in things for which there is no evidence at all, but which you simply want to be true.

Wednesday, December 14, 2005

On Cromulence

The English language is a fluid thing. Hell, all language is a fluid thing. There is no inherent truth to the words we use; they only mean what we say they mean because there is an agreement among speakers of the language to identify a particular sequence of letters or sounds with some objects in the world. I am very interested in this agreement.

The world 'out there' that words describe is infinite, or at least much bigger than the words we have available. Pick up a dictionary - that may be a very long list of words, but there's still more out there than that. There is a difference between an idea expressed as a phrase and an idea expressed as a word. I don't know where it comes from in our cognitive faculties, but I believe that we treat the two things differently.

This issue arises when philosophers or economists or whoever else are trying to describe a concept for which there is no word yet. Such a writer is often forced to use a metaphor, which may or may not click with his readers. For example, when Thomas Friedman says that the world is flat in his latest book, he doesn't mean literally flat. He's referring to a much more complicated process of interconnectivity through technology between countries, which he describes as "flattening." He chose to do this because he considered the associations with the phrase "the world is flat" and deliberately went against them, thinking (quite correctly) that a potential reader would be intrigued by the statement and want to see what Friedman actually meant.

Friedman turned a common linguistic hurdle into an advantage, since using pre-existing words in new ways means that one must acknowledge all the associations and baggage that the word happens to have already. For example, when using the word 'libertarian' one must clarify whether one means the political party or the philosophical position supporting the existence of free will. The two are absolutely distinct (though not mutually exclusive) even though the same word is used.

Gloria Anzaldúa, a Chicana lesbian-feminist poet, tried to dodge these pitfalls when she coined the word 'nepantilism,' from an Aztec word, to mean a struggle of being torn between two ways. It was an interesting attempt, but the word never caught on, and yields a mere 28 Google hits, which is far worse than many misspellings.

Another attempt to coin a new word was made by Edward A. Ross in a 1907 work titled "Sin and Society." In this case, he attempted to label what we would now call white-collar criminals, a group of people who didn't really have their own term at that point, as criminaloids. This word enjoyed slightly more success, being printed in an Encyclopedia of White Collar and Corporate Crime at some point, but it still only produces 234 results in Google.

The words boycott and spoonerism are both new words from the past two hundred years which were originally the names of individuals. What is interesting, however, is that another word meaning the same thing as spoonerism, marrowsky, predates it in the OED but has fallen out of favor. Why did one survive and not the other? It should be noted that it is common to turn notable people's names into adjectives, if only when discussing their particular work. Shakespearean, Brechtian, Hegelian, Kantian, even Phildickian (in reference to Philip K. Dick) all mean 'of (the person in question).' It is much rarer to see a person's name associated with an interesting but not definitive activity, as is the case with boycott and spoonerism, or sandwich for that matter. Of course, the process is somewhat common in the sciences, where a discovery is often named after its discoverer. You would be hard-pressed to call a boycott, a spoonerism, or a sandwich a scientific discovery, however.

Moving from eponyms to new words in general, why do some words stick and not others? There is some property among neologisms which some have and some don't, which is that they sound like they ought to be words. Spoonerism has it, marrowsky doesn't. Gremlin has more of it than fubar, and is therefore a better known word. Ironically, there is no word for this property.

I think there should be. (I think there are lots of ideas that should have words, but that's an issue to be explored later.) But the selection of such a word should be carefully considered. When attempting to define a word that means 'sounds like it should be a word,' I need to make absolutely certain that it sounds like it should be a word.

My own capabilities as a wordsmith fall spectacularly short of this difficult task. However, I did happen to stumble upon a solution, thanks to the television show which devoured a significant chunk of my childhood: The Simpsons. At one point in the series, the following exchange occurs between two characters on the show.

"I never heard the word embiggens before I came to Springfield."
"I don't know why. It's a perfectly cromulent word."

The joke being, obviously, that neither embiggens nor cromulent are words. Since then, cromulent has become a slang word among Simpsons fans, much like the term 'grok' among Robert Heinlein fans. The meaning is relatively clear from the context - cromulent means something along the lines of 'acceptable' or 'legitimate.' Some people wrap the joke in the show into the word and use it ironically, but that's a layer of meaning too many in my book. It'll never take.

However, if you take cromulent by itself to mean acceptable, then it sounds like it ought to mean just that. Of course! What better word to use to refer to the acceptability of neologisms, than a neologism itself?

I say that cromulent should mean acceptable in a general sense, but more specifically the acceptability of a newly coined word or phrase, or the quality of sounding like it ought to be a word. As in, "Gloria Anzaldúa's word 'nepantilism' is not very cromulent." Or "The word 'cromulent' has the property of cromulence."

This, I think, does service to the many Simpsons fans who used the phrase in casual conversation, attempting to make it a word through the principle of common usage, and it also gives a subtle nod to the creativity of the Simpsons writers and all the neologisms they've given us.

The only downside may be that the joke will be ruined, if cromulent actually becomes a word. But that, I believe, is a small price to pay.

Monday, December 12, 2005

Accidental satire

When you're looking at an article, or a movie, or a novel - or any number of things, actually - the question of interpretation is a tricky one. 'What does this mean?' Often, this question is equated with, 'What did the author want to say?' When someone brings up a possible interpretation, one response is, 'Well, I don't think he meant that.'

The answer is that it doesn't matter whether or not he meant to say it; he might have said it anyway. Authorial intention is invalidly equated with a text's meaning. The first problem for this kind of understanding is that most of the time the author's intentions are unavailable: he or she is unreachable, unwilling to comment, or even dead. Also, an author's feelings toward a subject can change over time, which makes his or her own comments on the work sometimes unreliable.

The other issue is that authors can accidentally say things that they don't intend. In that case, what the author intended to do is wholly immaterial to the interpretation of the work in question.

Take the historical example of "A Modest Proposal." Swift's seminal ironic text was met with some misinterpretation. People thought he was being serious in suggesting that Irish children should be eaten. Now in this case we know that this was not what Swift intended and that it is an abhorrent solution to the problem. But people still managed to misunderstand what Swift was trying to do.

That problem looms large to anyone who tries to make a point satirically. Subtlety is often lost on people, and your satire might be taken earnestly. I propose that it's about time this defect of satire be turned on its head. The next time you read an article in which the author defends a point that you consider too absurd to merit anyone holding it, you can describe it as 'accidental satire.' Without meaning to, the author has only furthered the cause he was arguing against by proposing an untenable position.

I would like for 'accidental satire' to find its way into the popular lexicon, or at least the blogosphere. Well, I'll keep my fingers crossed. It would be an interesting note to add to my curriculum vitae, when I move forward into the world of publishing.

On Things and Stuff and Their Properties

In an attempt to become utterly removed from any semblance of applicability to the real world, let me discuss briefly the philosophy of things and their properties.

There is significant philosophical literature, much of it completely unreadable as is the case for much philosophical discourse, on what exactly things and properties of things are. To what am I referring when I say 'that chair is red?' There is a model that resembles a pincushion, where the cushion is the thing and the pins are properties. In this case, red-ness and chair-ness are somehow stuck in to whatever the pincushion happens to be. But what is the cushion? The model doesn't really make sense. There is an alternative model, in which the properties just clump together, and that's all things are: just a collection of properties.

That is an interesting debate - interesting insofar as it is utterly academic and means nothing to anyone in the whole world - but it is not what I want to discuss. I want to discuss two types of properties which things have, and I think this concept is much more understandable to any individual, even those who have not been cursed with a philosophical mind.

Let me start by saying that I think there are two types of properties that things can have and that they are relative and objective. An objective property is independent of the observer of the property and should be the same no matter where one stands, so to speak. A relative property is just the opposite in that it has no intrinsic validity and depends entirely on the speaker or the person professing something about that property. It might suffice to think about the term 'property' here as very similar to 'adjective.'

For example, weighing fifty pounds is an objective quality, perhaps of some particular bag full of sand. Well, weighing fifty pounds at a particular elevation on earth is an objective quality, if you really want to split hairs. The fact that it weighs fifty pounds (at that location) is independent of the observer. However, whether or not that same bag of sand is considered heavy does depend on the person making the assertion. Most people would consider that bag to be heavy, but some unusually strong individuals wouldn't have any trouble lifting and carrying it and would therefore not find it to be heavy to the same degree.

So weight vs. heaviness is a fairly clear-cut example of a pair of objective and relative qualities. Assuming the reliability of the weighing system, no one would question that the bag of sand weighs fifty pounds. Likewise, very few people would argue over whether or not they consider the bag to be heavy. They might say, "It's not heavy to me," (a relative assertion) but the only reason someone might say, "Oh, it's not that heavy!" (an objective assertion) would be if they thought the other person was feigning weakness in order to escape additional work. If the person's difficulty in lifting the bag of sand is to be considered sincere, then their estimation of the bag as heavy is beyond counterargument.

While this instance of the two types of properties is easy to see, most of the time people get confused over which type of property they are observing or talking about. Take, for example, the difficulty of some pop quiz. Even if someone actually had difficulty with the quiz, they still might find themselves on the end of some teasing by another student who found the quiz simple. In this case, it seems to mean approximately the same thing to say, "It wasn't that difficult to me," and "It wasn't that difficult at all," even though the first sentence refers to both the quiz and the resources available to the particular test-taker, while the second refers only to the quiz, as if the quality of difficult-ness were identical irrespective of the person taking the quiz. The trickiness of this distinction is compounded by the fact that there is no objective scale for 'quiz difficulty' which can be differentiated from the relative claims about it, as there is a recognized scale for weight.

This confusion, when someone talks about a relative property as if it were objective, (and its opposite, discussed later) leads to a great deal of disagreement. Take, for example, some review of some movie. It is not at all uncommon for the reviewer to say that the movie is either 'good' or 'bad,' but they're not actually referring to qualities of goodness or badness in any significant way. If they were, there would be significantly less disagreement among movie reviewers and movie patrons over the quality of movies. What the reviewers are actually doing is saying something akin to 'the movie was in accordance with my vision of what constitutes a good movie' or 'the movie met (or surpassed) my expectations.' I contend that this cognitive sleight of hand, in which people compare a movie to their hypothetical ideal of movie-ness without even being aware of it, is done in similar types of processes many times every day.

While the disagreements people have over the merits of some movie or book are usually minor, it would still be useful to recognize where these disagreements come from. It is more productive to ask the question, "What is it about the movie that you liked?" (in a serious way, not assuming that the answer ought to be 'nothing') than it is to assume that the other person's ability to judge good and bad is somehow flawed because their estimation was not in line with yours in this instance.

A clearer example of this principle can be seen by looking at things that are funny and things that taste good. Those are two qualities that some people think of as objective, which is why they have trouble understanding why others disagree with their claims that something has these qualities. A facile definition of funny is the property of eliciting laughter. If one person laughs at a joke and another doesn't, the first found it to be funny while the second didn't. It would be difficult to claim that one or the other person is wrong. Of course, funny doesn't necessarily equate to laughter, so a more complex but still valid comparison in which funny means stimulating your sense of humor in a way that makes you think, "That's funny," will serve the same purpose. Different people have different senses of humor, so funniness depends on both the joke and the observer.

Likewise, different people have different senses of taste. Some people don't like spicy foods, but that doesn't mean that spicy foods aren't tasty. People gloss over the complex mechanism going on in their mouths and in their brains when they say that something "tastes good," as if they were saying that something weighs fifty pounds. These mechanisms, an intricate interaction between certain parts of your brain and the taste buds on your tongue with the food in question, are not identical in every person, just as the two quiz-takers in the earlier example didn't necessarily have identical kinds of mental acuity, which ultimately had different levels of appropriateness for the quiz they were taking.

It is important to try to recognize when a disagreement is of this type, because you cannot convince someone else to have your perspective or your tasting mechanism, or what have you. Arguments with foundations in disagreements over relative properties are futile; instead, one should recognize whether the difference in points of view is significant and what can be gathered from it. For example, if two people are having a disagreement over whether a movie is good, you might establish what each of you thought were the qualities of the movie which contributed to its goodness or badness (this is a difficult process, since the cognitive activity is buried quite deeply) and then see what emerges. It might turn out that the disagreement stems from the fact that one of them happens to particularly like a certain actor while the other is not much of a fan. The process can then be repeated, nearly ad infinitum - or at least until the limit of one's self-knowledge is reached.

With that subject essentially exhausted, I think, the opposite type of confusion needs some mention. The confusion of objective qualities for relative qualities is more insidious than its complement. One example of this type of process is the assertion that, due to a significant degree of confusion over events that may or may not have occurred, it somehow becomes the case that whether or not those events happened in a particular way has no inherent reality. That is, whether or not this or that happened is just a matter of your perspective.

This is the fear of the people who cry out against relativism, and they are right to do so. It is dangerous to suppose that there are no truths independent of the observer. This problem is tied up in the fact that in order to describe the world around us, we have to use words that inevitably carry with them connotations with relative qualities tied up in them. Whether or not a suicide bomber is a 'freedom fighter' or a 'terrorist' depends on your point of view, but whether or not he blew himself up and killed some number of other people is incontestable.

It's an extremely complicated linguistic dilemma to which there is no clear solution, since we need to use words to describe the world, but the words we use make assertions we don't necessarily want them to make. Often, we make those assertions on an accidental cognitive level. That is, we have a point of view that we don't recognize as such, which makes the suicide bomber into a terrorist, so using that word doesn't seem to be an issue. The only solution I have to offer is to try and be conscious of our perspectives and to try to see where other people are coming from as well.

I don't pretend to try and perform the delineation of the world into the two types of properties. Such an exercise would be futile, because while there are clear cases where a property is objective and where it is relative, there are many where one person would claim it to be one type and another would disagree. Is intelligence objective? People who think it can be measured by IQ would say so. Even people who don't think IQ equates to intelligence can claim that it is objective, but I don't think so. I don't think it's quite relative, either - I think there are different kinds of intelligence. But that's a question for another day.

When considering this essay, I would ask that you think about your own opinions on relativism and objectivity in the larger sense and see how your preconceived notions of the admittedly very much loaded terms in the discussion affected your estimation of my conclusions. Also, I would appreciate it if you could try to imagine where I'm coming from in writing this.

Thursday, December 08, 2005

The Golden Rule

The Golden Rule is a near-universal tenet of moralistic systems. Various formulations of the rule manifest themselves in codebooks of ethics from throughout history and around the world. Its most well-known instance is biblical: Matthew 7:12, "Whatever you wish that men would do to you, do so to them." Its most common, glib interpretation is "Do unto others as you would have others do unto you." In the Analects, Confucius states the negative corollary, often referred to as the Silver Rule: "Do not do unto others what you would not like others to do unto you." Even Kant formalizes the principles behind the Golden Rule when he posits his Categorical Imperative: "Act only according to that maxim by which you can at the same time will that it should become a universal law." More complicated verbiage, but the message is the same.

The rule has its roots in purely good intentions, unlike some religious maxims that have at their heart a desire to instill fear or obedience in their practitioners. The Golden Rule is perfectly acceptable from a secular perspective. The Golden Rule tries to prevent people from holding others to higher standards than they hold themselves. Don't be a hypocrite, it says. The underlying message is to recognize that others, even though you do not have direct access to their decision-making processes, their minds, the things you recognize in yourself that make you human, still possess analogous faculties, and that you ought to treat them as such. When others ask you for clemency in the face of mitigating circumstances, remember when you were in a position to seek the same kind of understanding.

That's all well and good, but there is a problem here. The most common, if somewhat facile, counterexample to Golden Rule type policies is that of the masochist. If he would have others cause him pain, then he is justified in causing others pain. It is possible to work this problem into other formulations of the rule, as well. Even if you don't accept this particular argument, it points to the fundamental flaw in this type of an ethical code.

The Golden Rule assumes that there is a single way to behave to which everyone should conform, that there is a uniform set of practices which, when enacted universally, will lead to a perfect, or at the very least better, society. The idea itself is not incoherent. It is possible in theory. However, it is not a practical possibility, barring science-fiction level methods of personality control. Even so, I contend that it is undesirable to try and pursue such a system as a social model. As the population of a society approaches uniformity, smaller and smaller deviations from the norm become more and more disruptive to the social order. A large-scale parallel to the 'uncanny valley' develops.

This disruption will exist whether the particular behavioral code which the society is approaching is the theoretical, stable ideal mentioned earlier, or one that will eventually fall apart on its own. The risk of significantly increased chaos is not worth a likely failure.

Of course, all this talk about hypothetical societies and the push for complete uniformity is too far from reality to even be that relevant. We aren't worried about there being a culture that literally expects its members to follow a precise code of behavior in the sci-fi dystopia sense, because there is an intuitive understanding that, to put it glibly, 'variety is the spice of life.' We look at the obedience to the state that was present in Nazi Germany and recognize both that it was bad in itself and that it had bad results.

This tendency towards accepting variation only works within a certain framework, though. Many people can't extend their levels of acceptance to openly gay people or Klan members (please don't make anything of that pairing - I only mean to come up with two types of behavior which different groups have serious problems with). The important thing is how strong these people's distaste for the types of behavior they dislike happens to be. Some just shake their head, express their annoyance in private, and then either grudgingly tolerate or just ignore the people who bother them. Others want to either keep the types who upset them out of the public eye or eliminate them entirely.

This dichotomy need not exist just on opposite sides of an issue, say between those in favor of gay rights and those opposed. Consider a person who is in favor of gay rights and who has a very specific idea of how homosexuals ought to go about accomplishing the goals that this person has decided are important. He or she might be as opposed to progress that goes in a different direction than what he or she considers important as somebody on the other side of the issue might be opposed to any progress at all.

The really interesting thing about this essay is that I'm advocating a viewpoint based on broad acceptance, since diversity is not only necessary for life to be interesting, but is also necessary for society to keep functioning. There's a balance that's been worked out between groups who find themselves on opposite sides of various issues, and the back and forth goes in cycles. Sometimes, one side wins and the issue is no longer an issue, as was the case with slavery in America. Sometimes, it's only a temporary win, like with Prohibition in the twenties. That was rolled back pretty quickly, because the damage to society from Prohibition was greater than the damage done by legal alcohol. I contend that society is self-correcting, in the long run. If some movement gains strength and then turns out to be bad, then it will fall out of favor.

Of course, the beauty of this argument is that I don't expect everyone to agree with me. In fact, it is inherent in my point that some people won't. A society where everyone had the passive viewpoint I do might function, but a society where everyone had that attitude except for one person - that person could get away with anything. For every crazy, single-minded person on one side of an issue, there needs to be one of the other side to keep the scales from tipping. I try to recognize that fact and keep it in mind at all times.

It's the first component of my philosophy of latitudinarianism, if I haven't mentioned it already.