31 January 2011

Oenology: Allow me to wine a little

I love wine. I love the way a good wine smells, the way it looks, the way it tastes. Here's what I DON'T love about wine: wine snobs. For reasons that have never been apparent to me, wine draws pretension like honey draws flies. Why is that? Wine-making is a very down-to-earth endeavor with agrarian roots (so to speak). When was it hijacked by snobs with snifters?

The most annoying aspect is the way so many people pretend to experience wine. The human palate is in fact a fairly crude tool. Even people gifted with extraordinary palates are limited to three, maybe four different tastes at a given swig. So it annoys the bejesus out of me when I hear people talking about "dark chocolate leather with a hint of raspberries and two...no, scratch that...three-day old emmental cheese from...Wisconsin...southern part of the state." OK, I am exaggerating. But not by much! Why is all that nonsense necessary? What's so wrong with just saying, to take sauvignon blanc as an example, "That's a great wine. Tastes like grapefruit."? Why pretend you also taste ten different kinds of passion fruit with buttery overtones and nutty undertones, when it's not physically possible for a human to make that many different taste distinctions?

I'm also not a fan of wine ratings. I think Robert Parker has done a huge disservice to wine, even if he has done a great service to wine-makers' wallets. What does it mean to say that a wine gets 93 v 92 v 80 v...whatever? To me, it's akin to saying that Munch's Scream is a 93 while Van Gogh's Starry Night is a 95. Huh? How do you put numbers on something so subjective? Not only is it impractical; it's demeaning. To suggest that something so subjective can be scored, is to suggest that it is not in fact subjective at all, that it is formulaic and measurable and that there is therefore some 'right' answer. How absurd! You might just as well say that great art can best be done using paint-by-numbers kits!

The most unnerving thing, however, is wine snobs who foolishly spend hundreds of dollars on wines, working under the delusion that more expensive = better. Mind you, there are some excellent wines in those price ranges. But it does not necessarily follow that a $200 bottle of wine will be twice as good as a $100 bottle of wine, or for that matter that it will be at all better than a $10 bottle of wine. And besides, where's the fun in getting the most expensive wine? Anyone with a fat enough wallet and a decent sommelier at his or her disposal can pair a wine with a food. But try finding a solid $20 bottle of wine to go with that lamb...that takes some thought.

At the end of the day, here's what matters: enjoy what you drink, pretension and snobbery be damned. If you like white wine with steak, have white wine with steak.* If you want to put ice cubes in your zin, go for it. If you love that $15 bottle of pinot noir and think that $50 Châteauneuf-du-Pape is swill, then drink that pinot with a smile. And when you hear some schmuck order the most expensive wine on the menu and go on about its delightful nose and hints of tinkleberries, just laugh and keep right on enjoying that $15 pinot.


*I don't recommend that particular experiment. It would be pretty gross, I'd think. But if you like it, go for it! And while Sauternes with steak might be a stretch, this does bring up another good piece of advice: ignore that nonsense about 'white wine for white meat, red wine for red meat'. There are many good reds that go with, say, chicken.

30 January 2011

Language Part III: Random Things

OK, for round three, there is no coherent theme. It's just about random things I feel like addressing!

Subjunctive v indicative. For those of us who've studied Latin-based (aka Romance) languages, this is not new. "¡Ojalá pudiera verte!" not "¡Ojalá podría verte!"(Span.) or "Je veux que tu prennes du café" not "Je veux que tu prends du café" (Fr.) or "Spero che tu stia bene" not "Spero che tu stai bene" (Ital.) and so on.

But what about in English? Does English have a subjunctive? Yes, but its usage is on the decline. Consider "If I was in your shoes" v "If I were in your shoes". Most people can identify the latter as sounding more correct (and they are right); but we all acknowledge that the former is becoming more and more common. However, there are certain areas where the subjunctive aids disambiguation, so I expect it to stick around for a long time in those. Consider "Was I in charge" v "Were I in charge": if you fail to use the subjunctive, it sounds like a question. We also find the subjunctive holding on in many fixed expressions, e.g. "If need be", "Truth be told", "far be it from me", etc.

The main reason it appears to be disappearing is that in so many cases, you simply can't distinguish between indicative and subjunctive in the modern form of English*. English has very simple verb conjugation, so the same conjugation is used across many different persons, numbers and even tenses. Consider the present tense indicative: only one (third person singular) is conjugated differently from the rest, e.g. I run, You run, S/he/it runs. So even when we are using subjunctive, it is often disguised because it is indistinguishable from the indicative. Example: "I require that you come to dinner." This is subjunctive, but it is conjugated no differently than if it were indicative, e.g. "You come to dinner". But try it with the third person singular and the subjunctive more obviously rears its head: "I require that he come to dinner", not "...comes to dinner".

'An'. What could be easier than the use of 'an' v 'a'? If it starts with a vowel, use 'an'; otherwise, use 'a'.

Well, two problems here. The first is a common misunderstanding about what a vowel is: vowels are sounds, not letters. The same is true of consonants. That's why linguists use the IPA (international phonetic alphabet): to understand language, you must divorce sounds from letters. It just so happens that the letter 'c' is used to represent a /s/ sound in English sometimes, but that sound is not inherent in this graphical representation called 'c'. So, applying this to our vowel issue, remember that 'an' is used before a vowel, not just a letter we usually associate with vowels. Example: we normally associate 'u' with a vowel sound. But sometimes it plays the role of what's called a 'semi-vowel', a sort of hybrid between a vowel and a consonant. When it does, as in the case of 'universe', we do not use 'an'. No one says "an universe", right?

The trickier problem is related to a very unstable phonetic element: the aspirated 'h', e.g. the first sound in 'historical'. Being unstable, this sound tends to disappear from languages over time. That's why (in American English) 'herb' is now pronounced more like 'erb'.** (More on this later.) So when we have these cases of unstable aspirated 'h' sounds and they have become very weak, it is perfectly acceptable to use 'an'. Many people believe it isn't, because the word doesn't start with a vowel. But when the 'h' is so weak and the next sound is a vowel, it makes sense. Example: "I read an historical account of the sinking of the Lusitania." The only caveat is that the word must not only suffer from a weakly aspirated 'h' at the beginning, but must also not have a stress on the first syllable. So we do not say "an history", for although the aspiration is not very strong, the stress coming on the first syllable does accentuate it to the point that 'an' seems out of place. If you think about it, you will see this 'rule' works pretty well. Think about the following: "I rode a helicopter to an historical site to see an historian writing a history of helicopters."

Next on the list of random things I feel like addressing: hyphenation. Specifically, I want to draw attention to hyphenation as it relates to creating single concepts. For example, if you want to form a single modifier from two modifiers, you must hyphenate: "He's an easy-going fellow.***" The reason it is practical to use a hyphen here is that without the hyphen, the two modifiers appear to apply independently to the modified noun: "He is an easy going fellow." In this case, is he easy-going or is he both easy and in the process of going? It can be confusing sometimes because there are cases in which a given phrase might be used with or without a hyphen, and the semantic gap may be a small one. My wife asked me the other day if this sentence required a hyphen: "I am a fourth-grade teacher." It does, because these two terms together ('fourth' and 'grade') constitute a single modifier for 'teacher'. But the sentence "I teach the fourth grade" does not require a hyphen as it is an adjective ('fourth') modifying a noun ('grade'). But "I teach a fourth-grade class" requires the hyphen because, again, the two modifiers combine to form a single element modifying 'class'.

'Ain't'. Believe it or not, I have no problem with this word whatsoever. When I was growing up, my mother used to correct me, saying "'Ain't' ain't in the dictionary!" In fact, it is, as well it should be. So why is it so often seen as 'incorrect'? I blame what I only half-jokingly call 'Written Language Syndrome'. WLS afflicts languages like English by deluding people into thinking that spoken language should follow written language completely and slavishly. The logic thus leads us to believe, for example, that since 'ain't' was born of strictly colloquial, verbal English, it has no place in 'proper' speech. Why shouldn't it? It is a convenient contraction, and a flexible one to boot, since it serves several combinations. I would agree that, since it evolved at the spoken colloquial level, it has no place in formal writing; but I see no reason at all to shun it at the spoken level.


*It was easier in earlier stages of English, since we had more varied conjugations back then.

**It is still pronounced with the initial /h/ sound in England, but this is tied to socio-economic class issues. 'Dropping one's haitches' has always been a clear marker of belonging to a lower socio-economic class, so educated English people who live in horror at being thought (or revealed to be) 'lower class' frantically pronounce all their initial 'haitches'. For this same reason, I would wager that most of them would object to my application of 'an' to anything that has even a wisp of an 'aitch'! 'Aitch' that a pain in the 'aitch'?

***Yes, he is...except with grammar.

27 January 2011

Fun with Artificial Intelligence, Part II

As promised when I initially wrote about artificial intelligence (and specifically about Eureqa), I have been playing around with economic data and have come up with some interesting results. After running dozens of experiments using different combinations of data and allowing Eureqa to use different computational operations, I have come to a fascinating conclusion: in terms of what helps the AI fit equity performance data points to a model based on all data available to it, it seems equities don't 'care' about anything but how we, the consumers, think about the economy, regardless of how well- or ill-informed we are about the economy. Give the AI GDP data, CPI, unemployment, CD rates (to provide opportunity cost), even prior year stock performance and price-to-earnings data: if you also provide consumer confidence data, it will systematically discard every other data type save that one. In some simulations with solutions that have similar complexity and fit, it might also use Fed Funds rate and/or CD rates, which I see as opportunity-cost stand-ins. But for the most part, it just wants to know, "How do you feel about the economy, regardless of how well or poorly it is actually performing and regardless of how much you even really understand it?"
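Eureqa itself does symbolic regression, which is far more elaborate than anything I'll show here, but the winnowing behavior described above is easy to illustrate with a toy sketch. In the snippet below (all series names are made up for illustration, and the data are purely synthetic: the target is constructed to depend only on the 'consumer_confidence' series), each candidate input is scored by the R² of a one-variable least-squares fit, and every series except confidence scores near zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic candidate inputs (names are illustrative, not real data)
features = {
    "gdp_growth": rng.normal(size=n),
    "cpi": rng.normal(size=n),
    "unemployment": rng.normal(size=n),
    "consumer_confidence": rng.normal(size=n),
}

# By construction, the target depends only on the confidence series
y = 2.0 * features["consumer_confidence"] + rng.normal(scale=0.5, size=n)

def r_squared(x, y):
    """R^2 of a one-variable least-squares fit of y on x (plus intercept)."""
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

scores = {name: r_squared(x, y) for name, x in features.items()}
best = max(scores, key=scores.get)
print(best)  # the constructed driver wins; the other series score near zero
```

A real search over models would of course consider combinations and nonlinear forms, but the principle is the same: inputs that don't reduce the fitting error get discarded, which is exactly what happened to GDP, CPI, and the rest in my runs.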

As I said in the prior post, I am far from convinced that such data-mining can offer realistic predictive models. But ironically, the fact that the AI prefers the least rigid, least 'rational' data type makes me less skeptical about its predictive power. My reasoning is this: if the AI chose models based strictly on 'hard' data such as CPI, I would suspect it was simply data-mining and that the results would be useless outside the confines of the already-given universe of data points, since equities markets are inherently irrational and are driven by things far less quantifiable than, say, GDP. But the very fact that it chooses the least 'rational' data type, consumer confidence, suggests it may indeed be arriving at a decent predictive model: it has settled on the one data type that applies rigid metrics to decidedly 'soft' data (that is, consumer sentiment).

So the proof will be in the pudding, I guess. But that pudding will take quite a while to cook, so don't hold your breath. Meanwhile, for what it's worth, I will shortly add some of the predicted values from various experiments. Then we'll sit back and see what happens!

My next project is GDP. I want to see what data types the AI most prefers for predicting the performance of the US economy.

23 January 2011

Language Part II: This thyme it's homophonic

Today, 'lets' deal with homophones. It's a 'thymely'* subject! We'll have a look at some of the many instances where homophones can lead to problems at the written level.

It's v its: This is a classic example of something that is very unstable and certain to change in favor of what is now viewed as a 'mistake'. (See earlier entry, especially the section on 'he and she' v 'they'.) First of all, they are homophones. But the real problem is that the distinction is entirely counter-intuitive: normally an apostrophe+s construction denotes a genitival relationship, e.g. Christopher's Take; but 'it's' is a contraction of 'it is' while 'its' is a genitival form of 'it'. WTF, mate?! So I expect this to disappear sooner or later. This is the descriptive grammarian in me defeating the normative one.

Their v they're v there. Wow. 'Their' is adjectival and possessive, 'they're' is a contraction and 'there' is an adverb. Hard to blame anyone making a mistake here, at least in everyday writing. The semantic differences are very clear, of course, but the fact that all three are very common and are homophones, makes it easy to slip up. Even I do it if I am in a hurry and/or I am not editing carefully. I expect to see some fusion here eventually. I give it a few decades, but if I had to place bets, it would be on 'there' becoming the single orthographic entity into which the other two are subsumed.

Who's v whose. Pretty much the same thing going on here as 'they're' v 'their'.

To v too. Easy to distinguish between them: one is a preposition and the other an adverb. Hmm...except that 'to' can sometimes be an adverb as well, albeit in a different context. Throw in the fact that it is a difference of but one letter, and I give the whole thing maybe fifty years before one is absorbed by the other. I place my bet on 'to'. I think 'two' will survive on its own, though.

Your v you're. Placing my money on 'your' winning out in the end. Some people will say, "How? They mean two different things!" To which I reply, "Seriously?" I just have this to say: bear (animal) v bear (unrelated verb), quail (bird) v quail (unrelated verb). We get much of our meaning from context anyway, so homonyms are as harmless as homophones.

A lot v alot: This one's a bit different because the latter isn't even a word. But guess what? It soon will be. And for good reason: it expresses a single idea. So while I personally would never write 'alot', I accept that it will soon be an accepted word in English.

Let's v lets. Contraction (imperative of first person plural of a verb) v third person singular conjugation of the same verb. C'mon...it'll boil down to 'lets' eventually anyway. Lie back and think of England. Just 'lets' let it happen.

Ant v aunt: Normally I wouldn't even include these, but they bring in yet another wonderful factor: semantic difference guards against confusing homophones, but especially ones that are entirely regional to begin with. To me, these words are homophones, because in my native South (of the US), they are both pronounced \ˈant\; but elsewhere, they are not homophones at all, as aunt is \ˈänt\. I therefore expect this written distinction to survive.

Almost any word in plural v possessive. This one drives my wife bonkers. You see it everywhere these days: "We have the best price on television's!" "Kitten's for sale". Most words in their plural and in the possessive are homophonic, so this shouldn't be too surprising.

I say just kill the apostrophe everywhere. Look at this plural v possessive issue and many of the issues above it: mistakes centered on this annoying little floating comma. Context should take care of making it clear when we mean one v the other, so why bother at all? Will anyone fail to understand "I didnt take its meaning"? 'Its' perfectly clear to me!

As for me, as much as I accept that the 'mistakes' will soon be the rules, and even as I personally promote the downfall of some of the stricter current rules, I am too much a perfectionist to say things like "lets say its OK to make these mistakes". Besides, if I relent, I lose out on the fun of correcting and judging you and making you feel that your worth as a human being is tied to your grammar and spelling. So, you know...I'll just stick with the rules.

Kidding aside (and yes, for the humorless among you, I was just kidding about judging you), if you take away** just one thing from this posting, let it be this: writing follows speaking, rarely*** the other way 'round. We forget this because ours is a highly literate society. But even now, in the 21st century, most languages**** spoken on Earth are either unwritten or are written using a borrowed script (e.g. Latin alphabet). Language occurs naturally; writing is quite artificial. One day I will get around to a blog entry on written language as an (arguable) precursor to civilization. But not today.


*OK, 'thyme' can't be an adjective, but shut it...it's my blog.

**We'll talk about hyphens at some point. Take-away v take away and so on.

***Spoken follows written only in cases of a sort of hyper-correction, when speakers try to make their speech conform to written standards. Granted, 'hyper-correction' is usually used in a different context, but I think it fits here, too. Remind me to talk about hyper-correction when we discuss comparative linguistics!

****It always surprises people when they hear that there are roughly six THOUSAND languages on this planet. [No one really agrees on the exact number, since there are a lot of grey areas, especially when it comes to distinguishing language v dialect. This can be a very touchy subject. For example, I have always maintained that Norwegian (and its MANY dialects), Swedish and Danish are all just dialects of a single Scandinavian language, since they are all more or less mutually intelligible; but I do not recommend ever saying that to a Scandinavian person! But when someone like me can study at university in Norway and have textbooks in all three 'languages' throughout the course of his academic career, it seems difficult to make the case that they merit the distinction of being called separate languages. But as usual, I digress!] India alone has around 700 languages. Even the language we call Chinese is in fact many different languages, many of them mutually unintelligible on the spoken level (though sharing a common, mutually understandable written version).

22 January 2011


Pet peeve of the day: IQ tests. These are the wife-beaters of the intellectual world: they've been telling people they're stupid for so long, that people believe it. But have you ever noticed that the type of intelligence they test just HAPPENS to be the kind of intelligence possessed by the very inventors and promoters of these tests? And they have been so thoroughly successful in conning people into believing that this type of intelligence is the 'true' intelligence, that we design whole systems of advancement and learning around their tests.

So what is intelligence, then? If real intelligence were truly just about the type of intelligence tested on these exams, I would be rated fairly highly, since I always score pretty well on traditional IQ tests. That's no coincidence: I have always had good 'book smarts', the very same type of intelligence of those who make the tests. But my question is, is that really a useful gauge of how well people use their minds to navigate the world around them? Isn't that the real definition of intelligence: how well do people use their mental faculties to adapt to (and excel in) the world around them? And if that is the true definition, aren't IQ tests woefully inadequate, even misleading?

I can think of at least two areas that are completely ignored by IQ tests. The first is the ability to understand people based on their reactions. For example, some people know how to 'read' other people exceptionally well. Some people can see a slightly raised eyebrow and the smallest of grimaces, and glean a book of information about that person from those facial expressions. How is this less important a survival skill than, say, knowing the square root of 256 without the aid of a calculator? That person skilled at reading faces can always whip out his iPhone and get that square root. There is no app to help me read other people. But if we both take an IQ test, I appear to be the smart one.

Another mental skill ignored - one seemingly minor in the modern world with our GPS and smart phones, but in my view still important - is a sense of direction. I lack it utterly. I couldn't navigate my way out of a paper bag. But I know people you could lead into an unknown field at night, and then watch them head due north. I have lived in the same neighborhood for five years and still have no idea which way is north (unless I look at my iPhone compass). Maybe this doesn't much matter to an urban person in the 21st century, but I can tell you from experience that it is still a skill you very much miss when you lack it.

I can hear the objections from proponents of these tests: "we never claimed they were holistic measures of all types of intelligence and anyway, those skills you cite are instinctual, not mental."* I reject both arguments. The first is disingenuous: proponents may have softened their stance on the value of the tests in recent decades, in light of studies exposing things like cultural bias and education v. supposed 'innate' intelligence, but even today, they clearly (and quite smugly) believe that those of us who have high IQ test scores are somehow superior. I reject the second argument because this division of 'conscious' v 'subconscious' (i.e. 'instinctual') mental ability is completely arbitrary and irrelevant. Moreover, it is a false one: when answering some of the toughest questions on IQ tests I took in my youth, I did not perform any conscious calculations.

Speaking of subconscious, my pet theory (which I posit only half in jest) is that all the geeks who came up with and continue to support IQ tests, designed them with a (hopefully subconscious) goal of setting themselves above all those kids who made fun of them in grade school. The popular kid who had charisma and could read people so well? His skills won't count as intelligence! That kid in boy scouts who could never get lost and could lead the troop out of the woods? Doesn't count! In fact, every kind of meaningful intelligence besides their own kind...doesn't count!

So do we reject all IQ tests? Not necessarily. But we need to go back to the drawing board and redesign them, or at any rate supplement them.


*Yes, this is a straw-man argument; but it is one that I could reasonably expect from an IQ-test proponent. And if you don't buy that, then, well...it's my blog, so shut it!

15 January 2011


[Preface: Upon reading this a second time, it reminds me of a medieval codex found in (what's now called) Italy. It was written by a frustrated Latin teacher who kept admonishing his students, "It's not [x], it's [y]!" Of course, to future linguists this discovery was a gold mine: this frustrated teacher's 'corrections' were markers showing the evolution from Latin to Italian. So before reading the following treatise, remember: if you think any of it strange or unwieldy, take comfort in the fact that many of the 'mistakes' I point out will likely be the rules of English in future centuries. But since you are living in the here and now, well, they are still just mistakes...]

My random ravings on language usage:

1) Incorrect usage of the words 'irony' and 'ironic'. I don't understand why people feel the need to use such words if they don't understand the meaning. Do people just generally feel they should know the meaning, so maybe they use these words to overcompensate? Why? There is no reason why everyone should get irony, any more than everyone should get, say, skydiving. I don't get spatial reasoning, so I avoid giving directions and I do my best to avoid maps at all costs. I don't go out of my way to volunteer as a navigator to overcompensate for this deficiency. I am also terrible at remembering names.* I do not offer to introduce people at parties to overcompensate for this failing. And that's OK. It's who I am. So why do people who do not understand the concept of irony feel the need to use the word and its adjectival form so much? I blame Alanis Morissette. Her rather silly song shoved the word into popular use, despite the fact that her use of it was almost entirely off base. There is nothing ironic about rain on your wedding day. It's just sad and unfortunate. A traffic jam when you're already late is annoying, but not ironic. A no-smoking sign on a cigarette break? Also just a sad bit of luck.** What's worse, people often use these words when they in fact mean precisely the opposite of irony. Example: "Ironically, the prisoner was captured as he was dashing out of the prison yard." That is exactly what one would expect to happen! But there is a delightful irony in all this: people using the exact wrong and opposite word when saying 'irony', is itself a form of irony, so this suddenly got all meta. Anyway, the point is, consider yourself hereby absolved of any need to understand the words 'ironic' and 'irony'. You are liberated! So just stop using them!

2) People making pretentious attempts at using what they deem 'fancy' grammar words...and then failing to use them correctly. I am a bit of a self-confessed grammar nazi, in the sense that I recoil at poor grammar when I find it in places one should least expect it, e.g. newspapers, books, etc. So it's ironic*** that I am in fact not really a normative grammarian at all when it comes to everyday speech and informal writing. Even in formal writing, it doesn't bother me terribly, as long as the rules are broken for the sake of fluidity and clarity. For example, take the 'split infinitive'. First of all, it's a stupid rule, and believe it or not, it is relatively new and quite artificial. But new or old, avoiding splitting infinitives leads to tortured word order and poor clarity in many cases. So why bother? Proper grammar and syntax are supposed to be instruments that facilitate communication. The moment they become a hindrance, dispense with them. It's that simple. But when writing formally, just make sure you know the rules before you break them.

Examples of people attempting to use 'fancy' grammar:

a) Who v Whom. If you don't know the difference, just always say 'who'! It's fine. I promise. Even I won't judge you for it!**** 'Whom' is a word on its way out anyway. Why? Because language is the most democratic thing ever devised. In the end, the majority always wins. If it didn't, Italians would still be speaking Latin. So, like 'may' v 'can', 'whom' will soon be a relic. Even I don't bother with it all the time. But for the record, the difference is relatively simple: 'Who' is always a subject of a verb. 'Whom' is always an object.***** Some people simply can't get their heads around this. And you know what? That's FINE! So just say 'who' all the time! Nobody cares! Just don't sound foolish by using 'whom' incorrectly, because then you sound pretentious (for attempting to use a word you do not understand) AND foolish (for said lack of understanding). And to be fair, it can be hard to get the concept sometimes, because it is not a function of syntax, despite what many people believe. In other words, just because the word you're looking for is not *immediately* followed by a verb belonging to it, doesn't mean 'who' is the incorrect choice. Classic example: "Whom shall I say is calling?" Sounds fancy, huh? It's wrong. It's easy to think that the 'shall' belongs to the 'I' (which it does) and thus that the 'whom' must be an object; but it isn't: it is the subject of 'is calling'. So it should be "Who shall I say is calling?" The easiest way to untangle these things is to play with the word order a bit. "I shall say whom is calling?" You would never say "'whom' is calling" there, right, because it's obvious it should be 'who' as a subject, no? So don't use 'whom' in the other formulation: just because the word order has changed, doesn't mean the grammar has.

b) 'Fancy' (mis)use of pronouns. WHY on Earth do people feel that nominative versions of pronouns are somehow 'fancier' than accusative ones? They aren't. Nominative means it's the subject of a verb. Accusative means it is the object. (Technically, accusative means it is simply the object of a verb, but in practice, since English doesn't really get into the weeds on variants based on case, for all intents and purposes****** we can say an object of anything, e.g. of a preposition within an adverbial phrase, etc.) So when people try to sound pretentious and say, "The letter was addressed to she and I", I cringe. Would you ever say "She did it to I" or "I did it to she"? No. So why would a letter be addressed to "I"? Answer: you think it sounds cool. Stop it.

But the worst (to my ears) is misuse of reflexive pronouns. You sound ridiculous when you say, "It was given to John and myself"! Or "She gave the letter to myself". Only you can do something to 'yourself'.******* Someone else does it to 'you'. "I sent the letter to myself" v "She sent the letter to me". And when you use it incorrectly, you sound so pretentious, because it's obvious you think it sounds 'elegant'...but it's not; it's just plain wrong.

c) 'Fancy' use of adverbs...when you really want an adjective. Classic example: "I feel badly." If you feel badly, it means your ability to feel is something you do poorly. If you mean that your state of being/mind is bad, then "You feel bad". Verbs like 'feel' and 'smell', when used intransitively, convey a meaning that is basically the same as 'to be'. Take 'smell'. If you smell badly, it is the manner in which you are capable of smelling, so you are saying your sense of smell is poor. If you smell bad, it is a state of being, so go take a shower.

A similar issue arises when one joins two words that are essentially subjects using a 'copulative'******** verb. That's why it is technically "It is I", because "to be" is an intransitive verb, so there is no object and both components are thus subjects joined by the copulative verb. But this is where democratization of language wins out: most people treat the second subject as an object and it is perfectly natural to do so, so someone reading this in 100 years will likely have learned that "It is me" is the correct way to go.

3) 'Literally'. Christ on a crutch, please stop using this word. My head literally explodes when you do. See, if that were true, I wouldn't be writing, since a coroner would be picking up pieces of my skull right now. 'Literally' is not a synonym for 'very' or 'really'. It is meant to distinguish between something meant figuratively (e.g. "My head is about to explode" => Shut up) v literally (e.g. "My head is about to explode" => Get the Windex and a towel). Sadly, I hear even (supposedly) educated people abusing this one. I recently heard a reporter announce that an issue was "literally tearing the country apart." Leave that to earthquakes, please. And just as with 'irony', it is OK if you don't get it; just don't use it. Sadly, the aforementioned democratization of language means there will soon be a permanent semantic shift in favor of making 'literally' mean 'really' or 'very'. Normally I would accept that with grace, but in this case it is too bad because we will have lost a word that is actually quite useful. But something else will take its place sooner or later. That's the beauty of language.

4) 'i.e.' v 'e.g.'. This is a really easy mistake to avoid. Just imagine yourself saying 'that is' v 'for example', e.g. "We were stuck, i.e., we weren't going anywhere." Think of it this way: e.g. offers examples that are a subset of the universe, while i.e. simply renames that universe. "I ate lots of things, e.g. apples, bananas, etc.". E.g. => because these are just some examples of the many things I ate. "I ate lots of things, i.e. I had quite a variety." I.e. just restates the matter. If it helps, 'i.e.' really does mean 'that is' in Latin ("id est"), so that's an easy way to remember it. E.g. means 'exempli gratia', 'for the sake of example'. And you know what? English is a beautiful, flexible language on its own. So if in doubt, just dump the stuffy Latin altogether! Just say 'that is' and 'for example'! You will use those properly every time! And why is Latin a better alternative to English anyway? Aside from the convenience of abbreviation, I see no advantage.

5) Pronoun number consistency. Example: "I am not sure exactly which candidate we'll pick, but they will be qualified." This is an unfortunate side-effect of English's lack of a more flexible set of pronouns, combined with an increasing sensitivity about excluding women from general statements. The correct form is to say 'he or she' (or 'him or her' if accusative), e.g. "I am not sure which candidate we'll pick, but he or she will be qualified." It used to be common simply to say "...he will be qualified", but that's quite understandably and necessarily a no-no these days (unless of course it's, say, a sperm donor, in which case let's safely stick to the masculine pronoun). On the written level, I have a convenient short-hand for this: "...but s/he will be qualified."

This is probably an uphill battle, though. The fact is, using two pronouns, e.g. 'he or she', is just too cumbersome. It's a pity we can't just make up some nice neutral pronoun. If we had the equivalent of an Académie Française, I guess we could give it a shot. But I for one am quite happy that English lacks such a dictatorial governing body. One of the main things I adore about English is its amazing dynamism and creativity; any governing body would soon beat those traits out of it. Still, it's hard not to look with some envy at languages that have more flexible pronouns. For example, when I hear 'we' in English, there is always the ambiguity: do I mean you and I are the subjects? Or do I mean some third party + me as opposed to you? In other words, are you part of the 'we'? In many languages, this is solved by having two pronouns for 'we': one meaning the speaker and the person to whom s/he is speaking, another one meaning the speaker and someone other than her/his interlocutor. But I am getting off on a tangent here. Some other day, I will waste your time talking about comparative linguistics.

6) Lay v lie. This is actually very easy. Lay is a transitive verb. Lie is not. A chicken lays an egg. You don't 'lie' anything. You just lie, e.g. "I lie in bed." You don't "lie an egg". BUT, and here's where it does get a bit tricky, the past tense of the intransitive 'lie' is...'lay'. Sorry about that. So it's: "I lie in bed today", "I lay in bed yesterday", "I have lain in bed since last week". "I lay an egg today", "I laid an egg yesterday", "I have laid three eggs since last week". (What can I say, I'm one busy chicken.)

But again, language as democracy says that at some point, these two verbs will simply fuse into one verb with a single conjugation, but with two meanings: one transitive, one intransitive. And you know what? That's fine. There's a word for languages that don't change, evolve and adapt: dead.

OK, that was fun! But I am literally dying of hunger. Ironic since I haven't eaten in ages. But whom is responsible for that? It's me. I guess I should feel badly, e.g., I feel guiltily. So I shan't just lay about waiting for dinner! I have a guest coming, so I should prepare something for them.


*Seriously, it's embarrassing. But it only seems to apply to living people. I can tell you who succeeded Thomas More, who faced William the Conqueror at Hastings...but I couldn't tell you a single name of a single person I met at a recent party.

**She wasn't entirely off. For example, conquering one's fear of flying just in time to board a doomed flight is indeed ironic.

***Yes, ironic.

****Well, maybe a little. But we'll both get over it.

*****In the mood for more delicious irony? In every sentence in which I have used 'whom' so far, it has been a subject of a verb, in direct contradiction to what I just said. But that's only because I was referring to the word itself rather than actually using it in a sentence. Like saying "'Me' is a word in English."

******Not 'all intensive purposes'...minor pet peeve.

*******Settle down!

********You in the back! Stop that snickering!

13 January 2011

Let's Call the Right Wing on Their Bluff

Full disclosure: I am a proud, dyed-in-the-wool, American liberal. I believe I am the keeper of my brother if he is weaker or in need. I believe healthcare is a right, not a privilege. I believe a government’s reach does not extend to a woman’s control over her own body. I believe marriage is not a special right reserved to people of a particular sexual orientation. I unapologetically believe that, to paraphrase Oliver Wendell Holmes, taxes are the price we should happily pay as the cost of a just, safe and civil society. I believe that rights belong to people, not corporations and other abstract legal entities: such entities’ existence and behavior should be regulated at the discretion of our citizens so that the former serve the latter, not the other way around.

Now that I have established my left-wing bona fides, allow me to commit liberal heresy. We liberals should accede to the right wing’s assertion of the preeminence of states’ rights over federal policy in all but the unarguably national arenas. The view of the right wing is best summarized by Texas governor Rick Perry’s recent quote to the effect that people vote with their feet, so if policy is left to the states and their citizens don’t like it, they can move to states whose laws more closely reflect their beliefs. I have recently been won over to this point of view for a number of reasons:

1) As Jared Diamond (the noted ornithologist-cum-anthropologist and a hero of mine) has lamented, our convergence towards an increasingly homogeneous world culture has greatly reduced the variety of ‘experiments’ we can run to find the best outcomes for society. If we in the United States strip away the centralized federal model in favor of 50 different states each conducting its own ‘experiment’ within a looser federal union, we have 50 experiments running all at once, which could lead to new models of government and political economy. This could benefit not just Americans, but the entire world.

2) The Republicans are right about one thing: devolving power to the states does indeed increase freedom in the very real sense that local populations are free to decide on the model that best suits them, instead of bowing to a national “50 percent + 1” majority.* And that leads to the final reason...

3) Americans have lost the ability to reach a true national consensus. Every decision is made by essentially cobbling together agendas that represent just under half of the people’s will, then getting just enough indecisive people in the middle to support those goals, goals that in turn enrage the 49.9% left on the other side. For example, if I get my way on healthcare**, it basically alienates the 49.9% of America that didn’t get its way. That isn’t consensus; it’s imposition.

So I propose we strip the American federal government of everything but defense, foreign affairs and trade policy, the federal courts system, constitutionally-mandated census work, interstate transportation and commerce regulation (including food and drug safety), national parks, national security (e.g. CIA), national law enforcement (e.g. FBI), and exploration (e.g. NASA). Shut down Social Security and Medicare and divvy up and distribute to the states the current assets of those programs based on each state’s prior year contributions to them. Cease all national funding of education and arts. Shut down Medicaid and all federal welfare. Shut down all programs geared towards fostering and subsidizing corporations of all sizes (e.g. Small Business Administration). Shut down Housing and Urban Development. Cease all centrally-planned agricultural policies and subsidies. Then calculate how much money would be needed to fund the few federal departments and programs we’ve left in place and then, on a pay-as-you-go basis, simply require that states turn over that money at the start of each fiscal year, basing their contributions on their share of national GDP. (Also included in the money required of the states should be funds required to pay interest on the debt and enough money to pay off that debt within 20 years.) All federal tax collection at every level (personal and corporate) would cease, leaving only fee-for-service collections for programs like national parks. How the states structure their tax systems to pay their annual federal tab and their own internal expenses, is entirely up to them. Each state decides for itself how and even if it wants to fund such programs as aid to the poor, universal healthcare, pensions, etc.

There will be two overarching consequences to trying this approach. The first will be short-term. The hypocrisy of the right wing will be exposed since conservative states that pretend to hate the federal government even as they benefit from its largesse, will be forced to face the stark reality of life without net inflows of income that many of them receive. (Isn’t it ironic, by the way, that the reddest states that so despise ‘federal waste’ are most often the net recipients of government revenue, while the many blue states that are net contributors seem to mind it the least?)

The second, longer-term consequence (and benefit) is that after a generation or so, we will finally be able to settle the argument of which models offer the most benefits and lead to the maximum happiness of the citizens. As a liberal, I am convinced that after 30 years or so, states like my adopted home state of Massachusetts will have better-educated, wealthier, happier and healthier citizens living in cleaner places. I think red states like my native Tennessee will choke (quite literally) on their ‘freedom’ from such things as healthcare and environmental regulation. Conservatives will of course wager that after a generation, their models will have proven to have produced the best outcomes. Either way, the argument will slowly be won and we can return to a truly national consensus, after which point we may return to a more centralized model based on that new consensus (if the left ‘wins’) or just maintain the devolved approach (if the right ‘wins’).

The one wrinkle may be that both sides may well claim victory because they will ask different questions about their respective models’ successes. Liberals will ask who is happiest, wealthiest (pre-tax) and healthiest. Conservatives will likely ask questions like who is ‘freer’ in the sense of being less obligated to communal needs; how many corporations are headquartered within their borders (since lower corporate taxes will likely attract many companies even if their largest markets are in, and most revenue comes from, the blue states); and how much after-tax income the ‘average’ person has (even if that average masks huge disparities between the richest and poorest as their middle classes are squeezed out). If we are therefore unable to agree on ‘who wins’, there will never be a return to national consensus, in which case I would expect to see the states drift towards regional federations of like-minded states, with the overall union eventually doomed to a slow extinction through obsolescence.

The beauty of this idea lies in its simplicity: no phonebook-size laws, just a simple raft of repeals that undo all the relevant laws, with perhaps a few new laws to govern distribution of assets to the states, collection of states’ contributions to the remaining federal programs, and to replace still-needed sections of repealed laws. Since all funds for Social Security and Medicare would simply be refunded to the states, we wouldn’t even need complicated grandfather clauses to phase out those programs. Citizen pressure should suffice to ensure that the refunded money is used for similar programs at state level. (And if not, vote with your feet!) There would be high transitional unemployment as legions of federal workers are thrown out of work, but their skills would doubtless soon be required at state level and the federal government would send them on their way with one-time payouts in order to liquidate federal employee pension obligations. The relatively few remaining federal workers would keep their benefits and be exempted from any state-level pension contribution requirements.

There is also an historical beauty to this approach in the way it would bring us full circle: The Enlightenment-inspired Jeffersonians to whom we liberals ultimately trace our roots, were the original champions of states’ rights; but in the 20th century we switched places with the Republicans, having decided that our liberal goals were best pursued at the federal level. We could now reclaim the Tenth Amendment as our own.

So let us liberals call the right wing on their bluff. Let’s try it their way and see who comes out on top.***


*Whether or not this 'freedom' to behave foolishly and deprive certain people of rights and even human dignity is a good kind of freedom, is another discussion for another day.

**I didn't. No public single-payer option means this reform was far from complete.

***For the record, I know there's no way this experiment is going to happen. But as a mental exercise, it's worthwhile to consider, if for no other reason than that it reveals the hypocrisy of the right wing. Their attitude and actions seem akin to those of the weakling who says "hold me back or I'll kill 'em"....knowing full well that he is being securely 'held back'.

10 January 2011

Bad Science and/or Bad Science Writing

This humorous article posted by a friend on Facebook got me to thinking about science writing. Poor science reporting is a pet peeve of mine. Sometimes it's difficult to suss out which is really bad: the science writing or the science itself....or both. Many writers reporting on science lack a scientific background themselves*, which makes it challenging enough; add in a reporter's need to sensationalize, and you often get some really horrendous science writing.

This disturbing tendency is particularly damaging when it comes to health-related science. "Research Suggests Consuming X Leads to Cancer" for example. How often do we see headlines like that? I can never tell what to take seriously, because it's difficult to tell who is doing a bad job: writer or researcher....or both...or neither (if in fact the conclusions are sound)? But taking the information at face value, I can't know how seriously to take it. People who eat X get cancer at a rate 1.5x higher than others. OK, but is that cause and effect? Correlation? Coincidence? Is it environmental? Genetic? For example, maybe the problem is that people who eat X have a taste for X because they possess a gene that makes them crave it, and that gene has a dual role, one that leads to a higher tendency towards cancer. Or maybe people who eat X like the taste and the substance has a taste similar to something else that is in fact causing the cancer. Or maybe it is coincidence. (Don't start with the 'no coincidences' thing!) If you take enough data about enough things and draw enough conclusions, you will sooner or later run into coincidences like this. It's not just possible: it's quite probable. Anyway, the point is that the permutations and possibilities are practically endless: genetics, environment, both, neither, coincidence, epigenetics or just plain flight of fancy....or some unimaginable combination of all.
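In fact, you can watch coincidence manufacture scary headlines with nothing but a random number generator. Here's a little Python toy of my own (entirely made-up data, no real study anywhere in sight): give a couple hundred imaginary people a purely random 'cancer' status, then test a couple hundred equally random 'habits' and count how many appear to raise cancer rates 1.5x by sheer chance.

```python
import random

random.seed(42)

N_PEOPLE, N_HABITS = 200, 200

# Everyone's 'cancer' status is assigned at random: no habit has any real effect.
cancer = [random.random() < 0.10 for _ in range(N_PEOPLE)]

spurious = 0
for _ in range(N_HABITS):
    habit = [random.random() < 0.5 for _ in range(N_PEOPLE)]
    with_habit = [c for c, h in zip(cancer, habit) if h]
    without = [c for c, h in zip(cancer, habit) if not h]
    if not with_habit or not without:
        continue  # skip the (very unlikely) degenerate split
    rate_with = sum(with_habit) / len(with_habit)
    rate_without = sum(without) / len(without)
    # Does this random habit *look* like it raises cancer rates 1.5x?
    if rate_without > 0 and rate_with / rate_without >= 1.5:
        spurious += 1

print(f"Random habits that 'cause' cancer at 1.5x the rate: {spurious} of {N_HABITS}")
```

Run it and a healthy handful of perfectly innocent habits come out looking carcinogenic. Multiply that by every dataset ever mined and you see where those headlines come from.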

Occasionally there are more mundane reasons for bad science reporting. I recently read an interesting piece in Discover magazine about taste. The writer said that everything Americans think they know about the 'map of the tongue' is in fact based on a very bad translation of a German study. So forget that bit about sour being on the sides, sweet on the tip or whatever.

And sometimes it is a question of science writers bowing to and passing on received 'wisdom', information that is passed along so many times that we all - including science writers who should know better - just believe it. Take that utter and complete nonsense about needing eight cups (64 oz, roughly 2 liters) of water a day. This ridiculous 'fact' is based on a bad journalist's laziness. In the 1990s, a New York Times reporter mentioned it in a piece. He had gotten it from a study done two generations earlier. Problem is, he didn't bother to read more than the 'eight cups' part: the study's author went on to say that 30-40% of that amount is gotten from our food anyway. Now we have an entire bottled-water industry built on the premise that if your urine isn't 100% clear, you're dying of kidney failure within the hour.

Bottom line: next time you read a science headline, take it with a grain of salt...but no more than a grain, as otherwise you'll die of heart failure immediately, according to a recent study.


*Not that there's anything wrong with that, says the science buff who has a degree in languages and who barely passed high school biology thanks to a distinct squeamishness about frog dissection.

09 January 2011

Changing gears for a moment: Investing Primer

Completely different theme today: real-world investing. This is actually just a copy-paste (with a few edits) from a document I wrote quite a while back and that I updated recently for a friend. Over the years, several people have asked me for my take on investing. I am not sure why: it's not something I usually discuss with people. Perhaps my overall nerdy aura makes people think I am an expert! I am not, and should never be mistaken for one. I can't emphasize that enough. I hope this 'primer' is a useful *starting* point for those interested in understanding investing, but it should never be an *ending* point in such a search. Talk to a professional! But this primer should give you the knowledge you need to ask the right questions of such a professional.


The first rule of investing is straight from Douglas Adams: DON’T PANIC. (Sadly, the second rule is not ‘42’.) The reason many investors lose money is that right when it becomes the least expensive time to invest, they sell. To understand what I mean, imagine the following scenario. You need to buy a new suit. You go to your local department store and the clerk tells you that a big sale just ended, so it’s the most expensive time to buy. So you think to yourself, huh, most expensive time? Cool. Then suits must be worth a lot! I will take two!! The next week, the store has a huge 50% off sale. So your logic is, well, then suits must not be worth as much as I thought they were, so I’d better unload them before they lose even more value! So you return to the store for a refund. The clerk looks at you like you’re crazy: “Sir, you realize that since the sale is on, I have to refund the SALE price, not the price you paid? You will lose half your money!!” To which you reply, “Yeah, but I am nervous prices will fall even further...please give me my money back.”

Now that little scenario may sound silly to the point of inane, but stop and think about the way most people handle their investments. This is exactly how they behave. They get into the stock market because it’s ‘hot’ (meaning prices for stocks are often at their HIGHEST). Then, right when prices get reasonable and smart investors are buying up cheap stocks (i.e. during market corrections), the foolish investors run out of the market in a panic, locking in their losses forever. They do this because they believe they are smarter than the markets, that they can ‘time’ the market. That is a fool's errand. Don't try it.

The key to making money in equities is a very simple strategy using two tools:

1. Dollar-cost averaging. The reason it is called ‘dollar-cost averaging’ is that you are buying steadily through all the market fluctuations, so you average out the cost of ownership of your equities. So it’s just a technical way of saying ‘slow and steady’. Invest a fixed amount every month (or every paycheck or whatever) and do NOT SELL during market corrections. In fact, quite the opposite: if the market is correcting, think of it as that sale at the department store: if you do anything at these times, it should be to buy more when all the panicking masses are 'returning their overpriced suits!'

2. Indexing. The reality is that unless investing actually interests you and you are willing and able to take the time to research stocks thoroughly, your best bet is simply to buy ETFs or mutual funds that are tied to the major indices that reflect your goals and risk-aversion level. DO NOT buy actively-managed funds. These are funds in which the managers actively buy and sell stocks in the belief they can beat the market consistently. But check the facts: the VAST majority of actively-managed mutual funds fail to beat the indices most comparable to their strategies and goals. Many can brag they have done it for, say, a year or two or even five, but in the end, the ‘invisible hand’ will always win. In fact, since they all fall sooner or later in comparison to the indices, any funds that brag they have beaten the market for a few years, are actually the very ones to avoid as it means they will sooner rather than later revert to the mean.
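If the dollar-cost averaging mechanic in tool 1 sounds like hand-waving, the arithmetic fits in a few lines of Python (the prices are invented for illustration): because a fixed dollar amount buys more shares when prices are low, your average cost per share lands below the average market price.

```python
# Invented monthly share prices for some index fund.
prices = [100, 80, 60, 80, 100, 120]
monthly_budget = 300.0  # same dollar amount invested every month

shares = sum(monthly_budget / p for p in prices)   # cheap months buy MORE shares
invested = monthly_budget * len(prices)
avg_cost = invested / shares                       # what you actually paid per share
avg_price = sum(prices) / len(prices)              # simple average of market prices

print(f"average cost per share: {avg_cost:.2f}")
print(f"average market price:   {avg_price:.2f}")
```

With these made-up numbers you pay an average of $85.71 per share against an average market price of $90.00. That gap is the whole point of 'slow and steady'.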

You might say, “Well, that sounds good in theory, Christopher, but look at the reality of the stock market over the past decade. I know a lot of people who lost a fortune in the markets during that time. So how can you defend equities as a sound investment?” The answer is: easily and justifiably. Go back to dollar-cost averaging and apply it to the wild and crazy markets over the past ten years. For example, the NASDAQ is actually lower as of this writing (January 2011) than it was exactly ten years ago today. Scary, huh? So if you were a foolish market-timer who went in when it was hot (right before the bubble burst) and ran out at every scary correction, you would be broke (or close enough anyway). But if you had been a smart investor following this simple strategy, look at what would have happened:

Buy $10,000 last day of every October starting 2000, so ten purchases spread over the decade (2000-2009). (Normally you would buy every month or few weeks, not just once a year, but let’s keep it simple for the sake of example.) Even though the NASDAQ is lower now than ten years ago, you would, as of last trading day in October 2010, have $124,769.24 on that $100,000 investment, thanks to dollar-cost averaging and a disciplined approach to investing, with no panicked withdrawals. That may not look huge – and by historical standards, it isn’t great for equities – but you are still ahead of the game, even after inflation. And compare it to what your ‘safe’ friend did when he ‘wisely’ sold all his stocks after the crash, got out of equities and just put it all in bank certificates of deposit (aka CDs) at, say, 2.5%: even if he escaped with that same starting $100,000, he now has $114,834.66. That barely (if at all) keeps up with inflation, so in real terms, he has lost money on his ‘safe’ approach.
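If you'd rather check that kind of claim yourself than take my word for it, the mechanics are a few lines of Python. The index levels below are invented (they are NOT real NASDAQ closes), but they follow the same shape: a crash, a choppy recovery, and an ending level well below the start.

```python
# Invented year-end index levels with a crash early on and an ending
# level well below the starting point.
levels = [4000, 1800, 1300, 1900, 2000, 2200, 2400, 2700, 1700, 2100]
final_level = 2600            # still 35% below the 4000 starting point

units = sum(10_000 / lvl for lvl in levels)   # buy $10,000 at each level
invested = 10_000 * len(levels)               # $100,000 in total
value = units * final_level

print(f"invested ${invested:,}, final value ${value:,.2f}")
```

Even though this pretend index finishes 35% below where it started, the steady buyer comes out ahead, because most of the $100,000 went in at the depressed levels.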

If it’s so easy, why do so many people lose money? See rule number one. People panic. They buy at inflated values (when stocks are ‘hot’) and sell at deflated values (when it’s ‘time to get out’).

So the bottom line is simple: invest steadily, don’t panic, and stick with it!

The next question is, how aggressively should you invest? After all, there are stocks and then there are stocks...blue chip indices like the Dow tend to be less sexy and maybe not as lucrative, but they are less volatile and offer better dividends (i.e. steady income regardless of price performance); small cap indices are very aggressive, but tend to fluctuate a lot and there is more downside risk that can offset some of that upside reward. And what about overseas indices? To decide the right blend, you must consider two things:

1) What is your risk tolerance? In simplest terms, people fall on a spectrum from ‘risk-averse’ to ‘risk-seeking’. You either want steady but lower returns, or you want (potentially) higher returns but with more downside risk. It’s all fine and good to say ‘Don’t panic’, but if you are the type of person who WILL - despite all advice - panic when he sees an index tank, say, 30%, then you are more on the risk-averse side. The most important move you will ever make as an investor is when you truly ask yourself what kind of person you are and then honestly answer that question. If you are risk-averse, admit that to yourself and proceed accordingly. This isn’t a test of your personal courage or self worth!

2) What is your time horizon? Equities are great investments, as I have shown above, but the shorter the investing time frame, the riskier they can be. Think of it this way: if you need to buy a car next week, you aren’t going to invest the money you need in Apple or Microsoft. Yes, they may go up several percentage points in a week. But they can also tank several points in a week. But if you are buying that car in, say, ten years, then part of your investment to save for it may well be in equities. So you need to blend accordingly and even create different portfolios (one for short term with CDs, bonds, and maybe just a very small amount in equities; another for long term, mostly in equities). For retirement savings, it is all about one portfolio, but one that evolves over time. Personally, I use a very simple approach: every year, roughly one percentage point more of my portfolio goes to safer investments (e.g. high-grade bonds, etc) and one point comes out of stocks. So by the time I retire, only around 30-35% of my money will be in stocks. That might seem high, but remember, retirement isn’t a one-day event: it also has a time horizon of its own, since some of the money will be needed ASAP, but the rest will not be needed til well after the day you retire.
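For the curious, my one-point-a-year glide path is easy to sketch in Python. The 65% starting allocation, 30% floor and 35-year horizon below are illustrative numbers of my own, picked so the result lands near the 30-35% I mentioned:

```python
def stock_allocation(years_to_retirement, start_pct=65, floor_pct=30):
    """Percent of the portfolio in stocks, shifting ~1 point per year to safety.

    start_pct, floor_pct and the 35-year horizon are illustrative, not gospel.
    """
    years_elapsed = max(0, 35 - years_to_retirement)  # years since the glide began
    return max(floor_pct, start_pct - years_elapsed)

for years in (35, 20, 10, 0):
    print(f"{years:>2} years out -> {stock_allocation(years)}% in stocks")
```

So 35 years out you hold 65% stocks, 20 years out 50%, and on retirement day 30%, with everything shifted gradually rather than in one nerve-wracking lump.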

Making mistakes around time horizon is why you hear about people who are shocked to lose their retirements just before they are due to retire. How many stories like that have we heard in recent years? “I was two years from retirement, then I lost 75% of my portfolio...now I have to work til I am 90!” So what two mistakes did this poor fellow make?* First of all, he broke Rule 1: he panicked and sold after the market correction, meaning losses are locked in. But the overarching issue is a mistake related to time horizons. The question you should ask that fellow is, “If you were 63 years old and needed that money at 65, why in the world did you have it all in equities?!?!” At 63, he should have had well over half his money in very safe places like CDs, highly rated bonds, etc. Money exposed to short-term fluctuations should just have been money he needed later in retirement, so there would be time to recoup.

In summary, first decide what kind of investor you are, then look at your time horizon, then make a plan to invest regularly, with a firm commitment to yourself not to panic at market downturns. Do this and you will be fine!

A small post script here: accept the fact that no matter how disciplined you are, you are still human and you will still do stupid and/or highly risky things. Set aside a ‘play portfolio’ with 2%-5% of your investing capital (or a few grand...whatever feels right, but no more than 5% of all capital) and use it to play hunches, buy ‘hot stocks’ your friends tell you about, whatever. Think of it as your investment playground. Silo this from your main portfolio, though: separate account, maybe even separate brokerage. (That’s to avoid the temptation to transfer money from your real portfolio.) This can also be your ‘lab’ where you can learn about investing, feeling safe to make mistakes. But remember not to get cocky: no matter how good you get, you will never beat the markets in the long term, so don’t get any stupid ideas about transferring your major assets here just because you do well for a while.

Some sample portfolios:

Risk-seeking, long-term investor with at least 20 years til retirement:

5% in a safe money market fund or high-grade bond fund

5% in high-yield (lower grade) bond funds

20% in ETFs or index mutual funds tied to major world indices

30% in ETF or index mutual fund tied to Russell 2000 (small caps)

20% in ETF or index mutual fund tied to S&P Mid-Cap 400 (mid-caps)

20% in ETF or index mutual fund tied to S&P500 (large caps)

Mildly risk-averse, long-term investor with at least 20 years til retirement:

10% distributed among safe money market funds, a high-grade bond fund, and maybe even some precious metal shares (e.g. ETFs tied to price of gold)

15% in ETFs or index mutual funds tied to major world indices (ex-USA)

10% in ETF or index mutual fund tied to Russell 2000 (small caps)

10% in ETF or index mutual fund tied to S&P Mid-Cap 400 (mid-caps)

55% in ETF or index mutual fund tied to S&P500 (large caps)


*Well, the FIRST mistake was probably just plain greed, but that's another blog posting for another day. I truly do feel sorry for all the people who lost so much in such a short period of time, but I can't escape the reality that much of it was due to their own foolishness and greed. Even in cases of fraud. Take the Bernie Madoff debacle. Yes, he defrauded all those poor folks and he is a monster. But why weren't those people following the rule of 'if it seems too good to be true, IT IS'? And why were they sinking ALL of their money into his funds and not hedging against the risk by putting at least some of it elsewhere? Sadly, the answer in all cases is 'greed'. And let's be honest with ourselves: Madoff wasn't the only con artist. The SEC played a witting role in allowing his Ponzi scheme to flourish, despite many warnings from a fraud investigator who had been sounding the alarm about Madoff for YEARS.

Fun with Artificial Intelligence

I have been having a ball for the past week playing with Eureqa, an AI I mentioned in an earlier post. Eureqa is the brainchild of Profs. Schmidt and Lipson at Cornell. When it comes to data, my main fascination has always been economic data, so I have been toying with trying to find models that best account for why equities markets move in one direction or another. As I said in that previous post, there are far too many irrational factors involved in the movement of equities markets to come up with the S&P 500's answer to E=mc². Even if you managed to come up with a more or less reasonable predictive model, the theoretical economic characteristic of so-called 'perfect knowledge' eventually becomes not-so-theoretical economic reality; people begin operating under this new spotlight; next thing you know, the model is dead precisely because everyone knows about it and therefore behaves in ways not predictable by the model (since this new knowledge and resulting behavior are themselves major new variables).

Quite aside from the fact that eventual knowledge of a good model would itself make the model obsolete, is the fact that there is a HUGE difference between an equation that explains data and an equation that reveals cause and effect for data. Just ask all the people who have wasted good time and money 'data-mining' the history of equities markets. A perfect example is O'Shaughnessy's 'What Works on Wall Street'. The author dug through decades of data on the stock market and came up with elaborate models showing what would have been extremely effective ways of making money....assuming one had the knowledge of the entire period, but had gained that knowledge at the beginning of the period studied. It's amazing to me that an internet search of this man still pulls up almost universally positive, glowing articles and interviews, despite the fact that the mutual funds that he opened in the 1990s, funds entirely built on his 'research', were abject failures. He managed to spin this somehow, get out of mutual funds, and open a private wealth management company. This allowed him to continue making money and claiming he was right all along, while in fact leaving him free to use completely unrelated methods of investing (since he isn't required to divulge his techniques). So he is undoubtedly a gifted marketer, and obviously even a good money manager...as long as he isn't following his own advice.

If this is all still (quite understandably) rather abstruse, I'll illustrate with a metaphor. Imagine you stand outside on the street corner and observe the weather and the passing of cars and people. Three out of ten days it rains. On those days, you notice people wearing raincoats. You also notice there are no open convertibles. Data-mining your way to 'good' equations tells you that an absence of convertibles and a presence of raincoats cause it to rain. That's an example of mixing up cause and effect. In another example, imagine that on the days it rained, you counted ten percent more people named Jane. Aha! An excess of Janes is causing rain! No, this is just coincidence.* I could go on, but for more examples of this kind of problematic reasoning, there's a far better resource: read Crimes Against Logic by Jamie Whyte. This book is one of my top fifteen all-time favorites, not so much because it taught me anything I didn't already know (though it did that, too), but because Whyte so eloquently and clearly expressed ideas I had known well but had been unable to articulate.
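The Jane 'coincidence' is easy to reproduce with a quick simulation. This sketch uses pure toy numbers (nothing to do with any real dataset): it generates a random rain record, then checks 1,000 equally random yes/no series against it. A fair number will 'track' the rain impressively well by chance alone.

```python
import random

random.seed(42)

# The street-corner record: 100 days, rain on roughly 3 in 10 (as in the metaphor).
days = 100
rain = [random.random() < 0.3 for _ in range(days)]

# Now test 1,000 purely random yes/no series ('were there extra Janes about today?')
# against the rain record. None of them has any causal link to the weather.
suspicious = 0
for _ in range(1000):
    janes = [random.random() < 0.3 for _ in range(days)]
    agreement = sum(r == j for r, j in zip(rain, janes)) / days
    if agreement > 0.65:  # 'tracks the rain 65% of the time -- must mean something!'
        suspicious += 1

print(f"{suspicious} of 1,000 random series 'track' the rain on 65%+ of days")
```

Run enough random series against any fixed record and some will always look meaningful; that's the whole trap.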

However, an inability to find well-performing predictive models for equities markets doesn't mean that feeding such data into Eureqa is itself useless. By watching how Eureqa treats all the different variables, you start to see how they interact and which ones have no correlative relationship at all with equities market performance. For example, Eureqa rarely seems to 'care' much for inflation. There appears to be very little correlation. BUT, it does 'care' quite a lot about the Fed Funds rate, which is essentially the public policy reaction to inflation. It also 'likes' CD rates, which might be a decent stand-in for opportunity costs, though that implies some cause-and-effect (CD rates are low -> opportunity cost of foregoing them in favor of equities is low -> I will buy equities -> everyone does same -> equity prices rise)**, which requires a heavier burden of proof, one that I am far from meeting. And employment? Eureqa almost always tosses it out as irrelevant very quickly. But it 'loves' consumer confidence, which suggests that while the markets don't 'care'*** about how many people are out of work, they care very much about how confident people feel in the economy (which is presumably in turn driven by how many of them have jobs, though not directly). But again, there is no straight cause-and-effect here. You can't say Consumer Confidence = y ergo stock performance will = z as an exact function of y.
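The kind of crude correlation screening I'm describing can be sketched in a few lines. Everything below is synthetic: the 'confidence' and 'inflation' series are random numbers, and the 'index' is built by construction to track one and not the other. So this proves nothing about real markets; it just shows the mechanics of asking which variables even correlate.

```python
import random

random.seed(7)

def correlation(xs, ys):
    """Plain Pearson correlation, no libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# 24 quarters of entirely made-up data (NOT real economic figures).
quarters = 24
confidence = [random.gauss(90, 10) for _ in range(quarters)]
inflation = [random.gauss(3, 1) for _ in range(quarters)]

# A fake 'index' rigged, by construction, to follow confidence but not inflation.
index = [1000 + 5 * c + random.gauss(0, 20) for c in confidence]

for name, series in [("consumer confidence", confidence), ("inflation", inflation)]:
    print(f"{name}: r = {correlation(index, series):+.2f}")
```

A high r here tells you only that two series move together over the window you fed in; it says nothing about which (if either) is driving the other.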

Early days yet, but so far, so fun! After I get bored with this round of experiments, I think I'll move on to GDP.

*Don't even get me started on people who say 'I don't believe in coincidences.' Do you have ANY idea what kind of universe we would live in WITHOUT A WHOLE LOT OF COINCIDENCES? Randomness permeates the very fabric of existence. The 'problem' is that our human brains have evolved with this incessant need to find patterns. I say this facetiously because of course that very same 'problem' is doubtless one of the core elements of our intelligence, not to mention a key explanation for our very survival as a species. But it does have the unfortunate side-effect of making us see Jesus in breakfast food far too often.

**This itself introduces an intriguing interplay. Perhaps the opportunity cost (in the form of CD rates or T-bills) must reach a certain threshold before prompting consideration of equities, but that consideration is in turn colored by the confidence one has in the markets and the overall economy (as measured by the U of M consumer confidence index?), and that interplay in turn drives the degree to which investors commit to equities, thus determining the demand for (and therefore value of) those equities. Add in a dash of price-to-earnings data (i.e., the 'real' cost of 'buying' the earnings behind an equity) and you might just have some soup worth tasting.
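Just to make that hypothesized interplay concrete, here is a toy model of it. Every number below (the 2.0% threshold, the linear scalings) is invented purely for illustration; this is a speculation sketch, not a claim about how investors actually behave.

```python
def equity_appetite(cd_rate, confidence, threshold=2.0):
    """Toy model of the footnote's interplay. The 2.0% threshold and the
    linear scalings are invented for illustration only."""
    if cd_rate >= threshold:
        return 0.0  # safe yields are attractive enough; stay out of equities
    urge = (threshold - cd_rate) / threshold  # grows from 0 to 1 as CD rates fall
    return urge * (confidence / 100.0)  # ...but tempered by confidence (0-100)

# Low CD rates + high confidence -> strong appetite; low confidence damps it.
print(equity_appetite(1.0, 90))  # 0.45
print(equity_appetite(1.0, 40))  # 0.2
print(equity_appetite(3.0, 90))  # 0.0
```

Even a toy like this makes the burden-of-proof point: you can always write down a plausible mechanism, but plausibility is cheap and causation is expensive.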

***Please forgive the anthropomorphic words here. And if you are a lefty like me, do not fall into the temptation of attributing 'feelings' to markets. That a market does not move in reaction to a tragedy like high unemployment does NOT mean that the people who make up that market do not care about unemployed people. The phenomenon is merely an observed outcome of the aggregate behavior of the people acting in the market, not the 'evil' intent of any group of people within it. I enjoy demonizing Wall St fat-cats as much as the next liberal, but I do so for their individual behaviors, not those of the markets in which they act.

07 January 2011

It ain't the stupid people's fault; it's the smart ones who oughta know better, but don't

I can't remember where I read it, who said it (I seem to recall it was a 19th century American) or exactly how the quote goes, but it amounts to "The bad state of the world ain't the stupid people's fault; it's the fault of the smart ones who oughta know better, but don't."

When I was growing up, I labored under a blissful illusion: somewhere, somehow, a bunch of really smart people were out there making the world a better place, and doing so at a steady, methodical pace using rigorous scientific means. To take one example: I always thought medicine was the paragon of such scientific (and beneficent) efficiency. In my imaginary world, when you went to the doctor, the doctor examined each symptom, checked the results against some impressive (but, as it turns out, entirely imaginary) database, went through checklists and arrived at both a diagnosis and a treatment based on the best available data from the world of medicine. In turn, s/he logged the results of this diagnosis, treatment and outcome into this fictional database, strengthening the field of medicine with ever more data. As I reached adulthood, this illusion was slowly destroyed, replaced with the reality that (again, to pick on medicine) doctors basically just muddle through as best they can, relying on their training and whatever attempts they care to make at staying abreast of current trends in their field.

So as an adult, I have slowly come to learn what a rather inefficient, unproductive world it is that we occupy. Even in areas where one expects competence and intelligence, too much digging results in disappointment. Picking on medicine here again, take the example of back pain. For decades (if not centuries), the standard treatment for 'throwing out one's back' has been bed rest. In fact, it has been conventional wisdom for so long, few bother to question it. Doctors to this day routinely tell their prostrated patients to take it easy, stay in bed. There's only one problem. Scientific evidence suggests that not only is bed rest not a good idea, it can actually prolong the problem. Moderate exercise, though initially uncomfortable, is considerably more effective a treatment. I can attest to this through personal experience: after reading this rather astonishing screw-you to conventional wisdom, I acted on it. I throw my back out a couple of times a year. I used to just lie in bed waiting for the pain to pass. Now, I drag myself out the door and, painful as it is at first, I walk. It's a little embarrassing since I am bent over a good 45 degrees or so, and I creep along at around one mile per hour. But sure enough, after 10-20 minutes, I am walking upright and the pain is all but gone. The old 'cure' of lying in bed left me prostrate for at least the whole day.

This is not meant to suggest that doctors are incompetent or even, for that matter, that they universally prescribe bed rest for back pain! The point is, why aren't we as a society doing a better job of simply documenting, compiling and cross-referencing our experiential data and then rationally acting on the results? In medicine, science, public policy and, yes, religion and ethics, we consistently fail to act on the evidence we accumulate so easily. I can forgive religion: it's entirely premised on accepting things blindly on faith. But what's science's excuse? And medicine's? Healthcare workers aren't even utilizing the humble checklist routinely, a simple 'no-brainer' that could save countless lives.* And what is the excuse of public policy 'experts' when they continue to prescribe courses of action that have failed repeatedly, ignoring policies that have proven to be successful? The easy answer is that people are just foolish. The harder but more worthwhile answer is that we have a lot of work to do in educating even the 'smart' people to follow a more rigorous approach to their respective endeavors.


*Yes, checklists. That most basic of things. You'd be surprised how rarely they are used. Imagine if every significant procedure at a hospital required a checklist. 'Did you check for contraindications? y/n; if y then proceed; else, check' 'Did you cross-reference the bed chart with the computer records before performing the procedure? y/n; if y then proceed; else, cross-reference'
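The footnote's pseudo-logic translates into working code almost verbatim. A minimal sketch (the checklist items are invented examples, not a real hospital protocol):

```python
# Invented example items -- not a real hospital protocol.
CHECKLIST = [
    "Checked for contraindications?",
    "Cross-referenced bed chart with computer records?",
    "Confirmed patient identity and procedure site?",
]

def run_checklist(answers):
    """Refuse to proceed until every item is answered 'y'."""
    for item, answer in zip(CHECKLIST, answers):
        if answer != "y":
            return f"STOP: resolve '{item}' before proceeding"
    return "Proceed"

print(run_checklist(["y", "y", "y"]))  # Proceed
print(run_checklist(["y", "n", "y"]))  # stops at the unanswered item
```

That's the entire idea: the value isn't in any cleverness, it's in making the 'else' branch impossible to skip.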

03 January 2011

Yet another step towards the singularity...

A few years back, I remember reading an article about Vernor Vinge and his case for the technological singularity. I found it fascinating, but soon it crept to the back of my mind. I didn't see much evidence that it was approaching anytime soon. But recently, some breakthroughs in science and a wonderful new book by Mark Stevenson have made me revisit the idea. In its simplest expression, it says that at some point in the not-too-distant future, accelerating returns will produce a watershed moment, after which humankind will be so changed that all of our current assumptions about even the most fundamental concepts will be swept away, leaving us in a world so completely different from the one we had known as a species that it will be essentially impossible to predict from this side of the singularity. So when will this point be reached? It is not too surprising that no two people agree, and even less so that many people think the whole concept is bunk. But people like Ray Kurzweil seem to feel that many people alive today will live to see it. I am not sufficiently convinced even of the validity of the idea just yet, never mind having an opinion about timing. But after reading books like Mr. Stevenson's and seeing some of the mind-blowing advances happening in so many fields, it is certainly something that won't be creeping off again to the back of my mind any time soon.

To cite just one (albeit very powerful) example, consider Eureqa, a program developed by Professors M. Schmidt and H. Lipson at Cornell. Mr. Stevenson visited the team at Cornell and discusses it in his book. Basically, this AI 'program' - calling it a program seems akin to calling Mt Everest a 'mound of dirt' - takes your data and derives principles on which such data are built. That may sound rather dry and dull, but consider this: it figured out Newton's Laws of Motion based on data it was fed. In a few hours. So imagine brilliant careers in research distilled down to a few hours. Then consider that of course such minds won't retire after they feed an AI like Eureqa some great data and get some cool new fundamental laws. They will keep going. So imagine Newton figuring out his laws in a day and then going for another 40 years or so. (Well, OK, he DID keep going for another 40 years or so, but you get the point.) Starting to see how accelerating returns might be leading us somewhere unrecognizable?

If you're one of the select few scientists with access, you can feed...oh, wait, what?! Eureqa is a free download. Anyone in the world can use it. I have.* It's laid out like Excel. I'm no scientist and I doubt I will come up with any Earth-shattering theorems with Eureqa. But imagine this tool in the hands of thousands of brilliant researchers around the world.

Hold on to your hats, folks. It's gonna get wild.


*I love playing with economic data, so I fed it 30 years' worth of such data to see what equations it would come up with. This was just to amuse myself, mind you: one must be careful to distinguish between deriving laws from data observed in the natural world v just plain data-mining. My exercise was essentially the latter. In other words, any equation derived from something as erratic as economic data serves just one purpose: more or less accurately predicting a data point within the universe of data already provided to the AI. So if I got an equation for predicting, say, the value of the S&P 500 based on CPI and consumer confidence, all I could be certain of would be that the equation could more or less accurately 'predict' the values for 1987, using the other 29 points of data within its universe. Still, it may serve to give a general sense of trends.
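The in-sample trap is easy to demonstrate. In this toy sketch, the 'market' is literally coin flips, so there is nothing to learn; yet a 'model' that simply memorizes the training period scores perfectly in-sample and can only guess out-of-sample.

```python
import random

random.seed(1)

# Toy 'market': 40 up/down moves generated by coin flips -- no structure at all.
moves = [random.choice([+1, -1]) for _ in range(40)]
train, test = moves[:30], moves[30:]

# 'Data-mining' taken to its extreme: memorize the training period move-by-move.
model = dict(enumerate(train))

# In-sample, the 'model' is perfect -- and perfectly useless.
in_sample_hits = sum(model[i] == m for i, m in enumerate(train))
print(f"In-sample: {in_sample_hits}/{len(train)}")  # always 30/30

# Out-of-sample it can only guess (here: just repeat the last move it saw).
guess = train[-1]
out_hits = sum(guess == m for m in test)
print(f"Out-of-sample: {out_hits}/{len(test)}")  # roughly a coin flip
```

Any backtest that only ever 'predicts' points inside the data it was built from is doing a fancier version of this memorization.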

02 January 2011

The Personal Genome Project

I recently read Mark Stevenson's "An Optimist's Tour of the Future" and among the many new ideas and innovations to which I was introduced was George Church's Personal Genome Project. I am going to cop out here and just quote directly from the project homepage:

"In an unprecedented achievement, the Human Genome Project provided the first drafts of nearly complete human genome sequences in 2001 after more than a decade of effort by scientists worldwide. This information is now being used to advance medicine, human biology, and knowledge of human origins.

We foresee a day when many individuals will want to get their own genome sequenced so that they may use this information to understand such things as their individual risk profiles for disease, their physical and biological characteristics, and their personal ancestries. To get to this point will require a critical mass of interested users, tools for obtaining and interpreting genome information, and supportive policy, research, and service communities. To catalyze these developments, we launched the Personal Genome Project (PGP)."

In other words, let's multiply that historic human genome mapping project by a factor of 100,000, and along the way 1) help drive down the cost of genome mapping; 2) figure out better and faster ways to do it; 3) share massive amounts of data that could one day save millions of lives; 4) see how all this affects society, in ways far beyond just medicine (e.g. legal ramifications, social policy, privacy, etc.); 5) advance research in everything from genetics to medicine to genealogy to forensics to...well, you name it; 6) potentially allow designer medical treatments that could result in miraculous cures using drugs that would be lethal to someone else. So, maybe call the family and let them know you won't be home for dinner til around 2072-ish, Dr. Church.

I applied to be one of the hundred thousand guinea pigs they will need to make this insanely ambitious project work. Last week, I was accepted as a volunteer subject and will hopefully soon be providing the DNA sample and requested information. I would encourage everyone to consider volunteering. I say 'encourage everyone to consider volunteering' and not 'encourage everyone to volunteer' for very good reasons. This undertaking is not without risks and is by no means something to enter into lightly. I do not mean it is risky to your health or physical safety (it's just a DNA sample!), but it has the potential to impact you in just about every other way. First of all, you are agreeing to have your genome sequenced...and then shared with the entire world. So privacy is out the window. Of course, your name isn't attached to the information, but who can guarantee that information will never come out? And no one can be sure how such information would be used. Maybe your insurance company finds out, links your results to you, sees you have a much higher risk of cancer, then dumps you. Imagine someone artificially creating DNA 'evidence' from your genome and leaving it at a crime scene.* Let your imagination run wild and you still won't think of all the potential for harm. This is a cutting edge project, and sometimes cutting edges lead to injury.

But now stop and think of the practically limitless good this project can do for so many different aspects of our daily lives. Even if the study yielded zero short-term medical breakthroughs, the very exercise of processing 100,000 genome sequences will help improve the process and drive down costs. It will teach us (and our lawmakers) some lessons on things like privacy, ethical treatment of people with diverse genetic backgrounds, even civil rights.** And it could potentially open the door to novel treatments using existing drugs that are deemed too dangerous because of their fatal side-effects in some patients. (If you can figure out what genetic markers are associated with such adverse reactions, you could provide such a life-saving drug to one person, but withhold it from someone to whom it would be fatal.)

But perhaps the most compelling reason to help out is the fact that Dr. Church is doing something no private company would do: he's going to release all of the data to the whole world, no strings attached. So instead of some small group of people trying to sift through all that data looking for a handful of profitable potentials, you have the whole scientific world delving into it, everyone from university researchers to pharmaceutical companies looking for the next big drug pay-off to medical researchers at hospitals to biologists to, well, everyone.

So, please consider participating if you are a U.S. citizen*** and can accept the risks. Just go to the project sign-up page to get started. You will need to pass a short 'exam' that covers some very basic, junior high school-level genetics plus areas specific to the project (including risks). It's easy, and they even give you a link to a special site where they basically spell out all the answers in a study guide! And whether or not you decide to participate, please consider donating to the cause. It is a not-for-profit 501(c)(3), so you can even deduct the donation from your taxes!

Meanwhile, yours truly will be publishing the occasional blog post on the experiences of a willing guinea pig in one of humankind's most ambitious endeavors. So stay tuned!


*Nuts, right? Believe it or not, the project sponsors specifically cite this as a potential risk.

**Yes, civil rights. Yesterday's oppression was based on ethnicity or sex. Today's on sexual orientation. Tomorrow's...on genetic propensity for certain traits?

***Yeah, I don't really get that either. But for some reason, the scope is limited to Americans for now.

01 January 2011

Inaugural Post

[Updated 2017]

<tap, tap> Is this thing on? Testing, testing, 1, 2, 3....

Welcome to my second attempt at blogging. My only other attempt at online writing was a rather messy webpage back in the good ol' days of tripod.com. (And apparently, that blog is still out there.) I was living in Paris at the time, but I am originally from Memphis, so, pun-tificating pun-tiff that I am, I selected the name 'Tennesseine' (which is Tennessean + Seine, for the humor-impaired among you). Despite the fact that I am over two decades removed from the muddy Mississip' and by now (2015) 10 years from being nearly in Seine, I am sticking with that moniker for the very simple reason that it's Saturday evening, I am tired, and I am not inclined to exercise my mind enough to come up with something new.

While I am not very experienced with blogging, writing for public consumption isn't entirely new to me. As anyone who knows me can tell you, I am not lacking in opinions, so googling me will turn up many a letter-to-the-editor in several publications, including the Norwegian-language Bergens Tidende (from when I was living in Bergen), New York Times, International Herald Tribune (many letters over quite a number of years living abroad), Time and Newsweek. When living in Norway, I even had a letter published in the Commercial Appeal, the newspaper of my native Memphis; but I made a mental note never to submit anything there again. To make a long story short, the letter was a tongue-in-cheek 'story' obviously not meant to be taken seriously, but the next week the paper printed a furious reply to it, something penned by a woman whose sense of humor had clearly been bleached out by the excessive chlorine in her gene pool. It's not the woman who disturbed me...there are people like her everywhere. But clearly the newspaper editors themselves, these bastions of intellect for that fine city, weren't playing with all their chips, either. So that was the day I realized you truly can never go home again.

[If at this point you are wondering if all my musings will start at one point but end up light years away from that point and yet somehow still in the same paragraph, let me just sum up that answer for you by pointing to the signpost ahead: 'Abandon all hope, ye who enter here'.]

My bio: I was raised in the Southern US (in the aforementioned Memphis). Up til my departure, all my father's family had pretty much stuck to the South since around the American Revolution, mainly in North Carolina at first, later Mississippi, only in my grandfather's generation coming to the 'big city' of Memphis. Similar story on my mother's side. We're of mainly Irish/English stock on Dad's side, Welsh/English on Mom's. Translation: plain ol' vanilla American. The one curious fact about my background is that no matter how far back I go and how many branches of my family I research, all sides have been here since at least the early 18th century, in many cases the 17th, and every single one of them came from the British Isles. That's not so odd for one's direct line (father to father and so on), but it holds true no matter how far afield I go (father's mother's line, mother's father's mother, etc.). You'd think at some point I'd have some interesting ethnicity or an Ellis Island type story to tell.

I left the US first in 1991, when I moved to Norway. I did my BA there, with a semester at Université du Havre in France thrown in for good measure and just that right amount of nasalization. My studies were essentially languages and international studies. Afterwards, I came back to America for a few years (Atlanta and DC), back to Europe in '98 (Barcelona, Malaga, then Windsor), back to the US in 2001 (mainly Chicago), then to Europe again in 2003 (Florence, then Paris), then back again to the US (Boston) in 2005, and finally here in Charlotte, North Carolina, USA as of 2013. (And yes, that transoceanic ping-pong was exactly as exhausting as it sounds. But it was also quite thrilling and I am truly humbled by and grateful for the adventure.) Along the way, I got married and divorced and had a wonderful son, now 21 and living in England, where he attends university. In 2007, I remarried, but we split up in 2017. We remain close friends, though, and are raising two beautiful boys together.

To pay my mortgage, I am a project executive at a software company. To pass my time, I read more than I should but less than I want; I run; I do some volunteer work; and (not surprisingly) I travel. I also torture my loved ones with the same cringe-worthy kinds of puns to which I have already subjected you. And finally, yes, before you ask: I also spend a lot of time silently judging you for your bad grammar. If it's any consolation, I even do that to myself. (It's not easy being me.)

Finally, why am I blogging? Well, stay tuned. A very specific subject led to the birth of this blog (and a rather exceptional person inspired me to start it). So I'll blog on that 'specific subject' when the time comes; but along the way I will ramble on about any number of subjects, from politics to religion to current events to, well, anything else I can think of that might offend you. Stay tuned for updates and, soon enough, some clarification on that cryptic answer I just gave above.

[User alert! If you are easily offended when people challenge your beliefs, steer clear. I've not much use for superstition, unfounded belief, dogma, conventional wisdom or prejudices (except of course my own, which I am sure are perfectly justified and quite likely aimed at you).]