In Search of Stupidity: The Lost Chapters

Read chapters and sections from the first and second editions of In Search of Stupidity: Over 20 Years of High-Tech Marketing Disasters


Welcome to the Korean edition of In Search of Stupidity: Over 20 Years of High-Tech Marketing Disasters! Korea is a country that knows something about stupidity, having suffered since the 1950s from the ongoing effects of what may be the 19th and 20th centuries' stupidest idea, Communism. But we in the US are well aware that Korea is also a first-world country with a first-class, high-tech infrastructure. This accounts for the fact that Koreans are known to be addicted to playing, and dominating the rest of the world in, massive MUD (Multi-User Dungeon) games such as World of Warcraft, Starcraft, and MU. In the US, if your son or daughter looks up from their PC sometime and mutters something about how their shiny World of Warcraft castle has been sacked by rampaging Orcs, evil knights, or pitiless demons, the odds are quite good it's a Korean doing the sacking and rampaging!

This means the time is right for a Korean edition of "Stupidity." As Korea takes its place amongst the world's high-tech elite, it faces an important choice! Will Korean companies A) repeat the mistakes made by others and suffer repeated financial losses, layoffs, and much angst and personal woe as the country's software and hardware industries attempt to grow? Or will they B) gracefully avoid repeating the past disasters made by idiots in other countries as Korea grows to unparalleled high-tech greatness?

Sorry, I'm voting for "A." Human nature is universal, and human stupidity is an incredibly powerful force capable of ignoring history, common sense, and practical experience (if you doubt this, just spend a minute looking north). But you, lucky reader, are equipped with a valuable antidote to high-tech stupidity. You hold in your hands not just a book, but an institutional memory that will pour foresight and advanced knowledge into your brain. Soon, you will be equipped to avoid the mistakes of the past and will be able to march forward into the future secure in the knowledge that if you do indeed screw up, your mistakes will probably be original ones.

And this, at the very least, will put you one up on the industry's biggest software company, which, with its latest release, managed to repeat positioning mistakes made almost a quarter century ago by MicroPro, a firm that has earned its own inglorious chapter in this tome. I'm speaking, of course, of the launch of Windows Vista. Several months after the release of Vista to businesses (the product was released to consumers in January 2007), the consensus of the market is that Vista is a flop. It's a peculiar type of flop. Financially, Vista is making a great deal of money for Microsoft. No surprise there; after all, the company has an almost complete monopoly in the desktop OS and main business applications markets and the dominant position in the server OS segment. OEMs are in complete thrall to Microsoft; if you don't offer Windows on your laptop, you've got an unsellable silicon brick in your warehouse.

But that said, Vista has failed to meet any of the lofty goals originally set for it. It has failed to excite any segment of the market, including key influencer groups such as the press and the gamers. It is not driving sales of new computers. At retail, the pull figures for Windows out of the channel are dreary and show no signs of changing (we're researching these numbers and will be reporting on them in an upcoming issue of Softletter). The blogs are condescending, and even many Microsoft devotees are dismayed by what they see and hear. Legitimate copies of Windows XP (and even 2000!) have just become more valuable, and knuckles will have to be severely cracked before the hands grasping those precious old boxes relax and allow a fresh copy of Vista to be shoved into a reluctant grip. The fact is, few people see any need or have any desire to buy Vista.

In all fairness, some of the problems that accompanied the Vista launch are beyond Microsoft’s control. As the Internet has matured as a development and infrastructure platform, the growth of SaaS and advent of hybrid applications has led to an inevitable challenge to Microsoft’s monopolies in OSs and desktop applications. Over the next several years, Microsoft will need to execute the painful chore of chewing off its own two strong revenue arms (but not too quickly) and hope they regenerate into new revenue and profit builders. It’s not a tasty task, and you can’t really blame the company for avoiding it, necessary though it is.

But paradigm shifts aside, the biggest problem crippling the Vista rollout was Microsoft's complete bungling of the critical task of properly positioning the product. Vista's marketing identity is a dirty smear in the mind's eye of the public; today, it's almost impossible to find anyone (particularly anyone from Microsoft) who can cleanly and quickly describe why Vista is superior to Windows XP. And a confused buyer is a buyer who does not buy (or one who buys something they can understand).

Microsoft’s Positioning Sins

During the Vista rollout, Microsoft committed several positioning sins. Redmond's mistakes begin with the deadly transgression of creating a positioning conflict within its own product lines. It's a surprising mistake. During the history of the Windows 9.X vs. NT product lines, Microsoft was frequently tormented by customers confused about which product to buy, a confusion highlighted by the firm's creation of one of high-tech's dumbest ads, the "two nags racing" piece, which you can see preserved at www.insearchofstupidity.com in the Museum of Stupid High-Tech Marketing. While 9.X and NT both existed, Microsoft frequently had to explain why a customer should buy one product over the other when both were called Windows, looked very much the same, did very much the same thing, and cost pretty much the same. But Microsoft was lucky in that during this period its chief jousting opponent was IBM and OS/2.

But with Vista, Microsoft pointed the lance at its own foot, kicked its marketing war horse into action, and firmly pinned its toes to the ground. There are no fewer than six (actually seven, counting the OEM version. Wait, isn't that eight if you count academic packages? Are we missing some other variants? Probably. Does Windows CE count?) versions of Vista currently available for sale:

• Vista Starter (which you can’t buy unless you live in the Third World, apparently.)
• Vista Home Basic (which, amazingly, does not include the new UI.)
• Vista Home Premium
• Vista Business
• Vista Enterprise
• Vista Ultimate

This plethora of choices naturally leads customers to ask a deadly question: "Which one do I buy?" Before, a consumer only had to choose between Windows XP Home and Professional (Windows Media Edition came along too late in the life cycle of the product to become well-known enough to confuse anyone). To help customers, Microsoft has published a blizzard of collateral, comparison sheets, pricing matrices, etc., etc. Thinking about whether to buy "Vista Home Premium" vs. "Vista Business"? What's "SuperFetch" worth to you? How about "Volume Shadow Copy"? But it's good to know the "Business" version includes "Premium Games." Just what a business person is looking for in their business version of Vista. Why not include applications that have some applicability to business concerns? Maybe stock analysis and reporting? Specialized business calculators? Something? Anything?

And making things ever more interesting is that the EULAs accompanying each version are different. Want to create a virtual machine on your PC and run Vista Home in it? You can’t! How about Vista Business? You can! Why one and not the other? Who knows.

Moving along down the path of positioning damnation is Microsoft's failure to attach any powerful or desirable image to Windows Vista as a product line. Try to imagine what memorable picture or capability is associated with Vista. None comes to mind. The product does have a new interface, but Microsoft made no attempt to build a compelling image of usefulness around the Aero Glass UI. Yes, the icons are bigger and the screen savers are prettier, but so what? Microsoft might have discussed how the new desktop gave users "X-ray vision" like Superman, increasing their day-to-day productivity while working with Windows, but it didn't. Vista is supposed to be far more secure than XP, and perhaps Microsoft could have discussed "Fort Knox," an integrated group of technologies that allowed you to lock up your PC with bank-vault tightness, but it didn't. (Given Microsoft's past security woes, it may have lacked the stomach for this gambit.)

By contrast, when Apple released Tiger (OS X 10.4), the market and the press were bombarded with information spotlighting "Spotlight," an integrated search utility baked into the new release. Desktop search was by no means new on either Macs or PCs, but the Mac campaign succeeded in making people aware of its usefulness and, more importantly, gave them a mental picture of why they might want to give Tiger a look. With Leopard (OS X 10.5), the emphasis was on "Time Machine" (integrated backup).

Another key task in launching Vista was to match features to pricing expectations, and here Microsoft has also failed, particularly in respect to Windows Ultimate. Ultimate is the kitchen sink of Windows, the one with all the toys and whistles, and it's expensive at $450 for a retail version (and pricey at $270 for the upgrade version). But not to worry! With your purchase of Ultimate you're promised all sorts of goodies only you, our ultimate customer, can download. And what are these exciting ultimate premiums? Well, to date, they include fun things like Windows Hold 'Em (a poker game), extra language packs (those squeals of delight are coming from buyers who just unpacked Finnish) for the Windows multi-language user interface, Secure Online Key Backup, and the BitLocker Drive Preparation Tool. (The latter two products are included in other, cheaper versions of Windows.) And oh, let's not forget the new screen saver that lets you use videos. Alas, it's in beta and not due to be finished for a while yet. Ultimate customers are, of course, delighted with all of this.

In its long and storied history, Microsoft has distinguished itself from its competition by its ability to avoid the self-inflicted wound of stupid marketing. With the release of Windows Vista, this has changed. During the Vista launch, Microsoft repeated mistakes made by MicroPro (positioning conflict), Borland (positioning conflict, pricing/feature disparity), Novell (positioning conflict), Ashton-Tate (pricing/feature disparity coupled with inept PR), and itself (Windows 9X vs. NT), proving that the company now suffers from the same institutional memory hole that afflicts much of high-tech. The Vista release now serves as a valuable and contemporary object lesson in how not to position and launch a new software product.

Best of luck!


Foreword
by Joel Spolsky

 

In every high tech company I’ve known, there’s a war going on, between the geeks and the suits.

Before you start reading a book full of propaganda from software marketing wizard and über-suit Rick Chapman, let me take a moment to tell you what the geeks think.

Play along with me for a minute, will you?

Please imagine the most stereotypically pale, Jolt-drinking, Chinese-food-eating, video-game-playing, slashdot-reading Linux-command-line-dwelling dork. Since this is just a stereotype, you should be free to imagine either a runt or a kind of chubby fellow, but in either case this is not the kind of person who plays football with his high school pals when he visits mom for Thanksgiving. Also, since he’s a stereotype, I shall not have to make complicated excuses for making him a him.

This is what our stereotypical programmer thinks: “Microsoft makes inferior products, but they have superior marketing, so everybody buys their stuff.”

Ask him what he thinks about the marketing people in his own company. “They’re really stupid. Yesterday I got into a big argument with this stupid sales chick in the break room and after ten minutes it was totally clear that she had no clue what the difference between 802.11a and 802.11b is. Duh!”

What do marketing people do, young geek? “I don’t know. They play golf with customers or something, when they’re not making me correct their idiot spec sheets. If it was up to me I’d fire ‘em all.”

A nice fellow named Jeffrey Tarter used to publish an annual list of the hundred largest personal computer software publishers called the Soft●letter 100. Here’s what the top ten looked like in 1984[i]:

 

Rank   Company                  Annual Revenues
#1     Micropro International   $60,000,000
#2     Microsoft Corp.          $55,000,000
#3     Lotus                    $53,000,000
#4     Digital Research         $45,000,000
#5     VisiCorp                 $43,000,000
#6     Ashton-Tate              $35,000,000
#7     Peachtree                $21,700,000
#8     MicroFocus               $15,000,000
#9     Software Publishing      $14,000,000
#10    Broderbund               $13,000,000

OK, Microsoft is number two, but it is one of a handful of companies with roughly similar annual revenues.

Now let’s look at the same list for 2001.

 

Rank   Company                  Annual Revenues
#1     Microsoft Corp.          $23,845,000,000
#2     Adobe                    $1,266,378,000
#3     Novell                   $1,103,592,000
#4     Intuit                   $1,076,000,000
#5     Autodesk                 $926,324,000
#6     Symantec                 $790,153,000
#7     Network Associates       $745,692,000
#8     Citrix                   $479,446,000
#9     Macromedia               $295,997,000
#10    Great Plains             $250,231,000

 

Whoa. Notice, if you will, that every single company except Microsoft has disappeared from the top ten. Also notice, please, that Microsoft is so much larger than the next largest player, it’s not even funny. Adobe would double in revenues if they could just get Microsoft’s soda pop budget.

The personal computer software market is Microsoft. Microsoft’s revenues, it turns out, make up 69% of the total revenues of all the top 100 companies combined.

This is what we’re talking about, here.

Is this just superior marketing, as our imaginary geek claims? Or the result of an illegal monopoly? (Which begs the question: how did Microsoft get that monopoly? You can’t have it both ways.)

According to Rick Chapman, the answer is simpler: Microsoft was the only company on the list that never made a fatal, stupid mistake. Whether this was by dint of superior brainpower or just dumb luck, the biggest mistake Microsoft made was the dancing paperclip. And how bad was that, really? We ridiculed them, shut it off, and went back to using Word, Excel, Outlook, and Internet Explorer every minute of every day. But for every other software company that once had market leadership and saw it go down the drain, you can point to one or two giant blunders that steered the boat into an iceberg. Micropro fiddled around rewriting the printer architecture instead of upgrading their flagship product, WordStar. Lotus wasted a year and a half shoehorning 123 to run on 640kb machines; by the time they were done Excel was shipping and 640kb machines were a dim memory. Digital Research wildly overcharged for CP/M-86 and lost a chance to be the de-facto standard for PC operating systems. VisiCorp sued themselves out of existence[ii]. Ashton-Tate never missed an opportunity to piss off dBase developers, poisoning the fragile ecology that is so vital to a platform vendor’s success.

I’m a programmer, of course, so I tend to blame the marketing people for these stupid mistakes. Almost all of them revolve around a failure of non-technical business people to understand basic technology facts. When Pepsi-pusher John Sculley was developing the Apple Newton, he didn’t know something that every computer science major in the country knows: handwriting recognition is not possible. This was at the same time that Bill Gates was hauling programmers into meetings begging them to create a single rich text edit control that could be reused in all their products. Put Jim Manzi (the suit who let the MBAs take over Lotus) in that meeting and he would be staring blankly. “What’s a rich text edit control?” It never would have occurred to him to take technological leadership because he didn’t grok the technology; in fact, the very use of the word grok in that sentence would probably throw him off.

If you ask me, and I’m biased, no software company can succeed unless there is a programmer at the helm[iii]. So far the evidence backs me up. But many of these boneheaded mistakes come from the programmers themselves. Netscape’s monumental decision to rewrite their browser instead of improving the old code base cost them several years of Internet time, during which their market share went from around 90% to about 4%, and this was the programmers’ idea. Of course, the nontechnical and inexperienced management of that company had no idea why this was a bad idea. There are still scads of programmers who defend Netscape’s ground-up rewrite. “The old code really sucked, Joel!” Yeah, uh-huh. Such programmers should be admired for their love of clean code, but they shouldn’t be allowed within 100 feet of any business decisions, since it’s obvious that clean code is more important to them than shipping, uh, software.

So I’ll concede to Rick a bit and say that if you want to be successful in the software business, you have to have a management team that thoroughly understands and loves programming, but they have to understand and love business, too. Finding a leader with strong aptitude in both dimensions is difficult, but it’s the only way to avoid making one of those fatal mistakes that Rick catalogs lovingly in this book. So read it, chuckle a bit, and if there’s a stupidhead running your company, get your resume in shape and start looking for a house in Redmond.

 

 

[i] Source: Soft●letter, Jeffrey Tarter ed., April 30, 2001, 17:11.

[ii] Personal Software, later VisiCorp, and its partner in publishing VisiCalc, Software Arts, were early proponents of the concept that the best way to treat a golden goose is to cook it over a long, slow, open legal fire. Software Arts, headed by Dan Bricklin, the developer of the seminal spreadsheet, entered into an affiliate publishing model with VisiCorp, headed by Dan Fylstra, under which Software Arts received a royalty of 37.5% for every copy of VisiCalc sold at retail and 50% for wholesale copies. It was a great deal for Software Arts but not one in which VisiCorp could flourish. Instead of coming to an intelligent accommodation over the issue, both companies sued each other. While the lawyers were licking blood from each company’s self-inflicted wounds, VisiCalc was allowed to languish and was quickly displaced from the critical IBM PC market by Lotus’ 123 package.


In the first edition of In Search of Stupidity: Over 20 Years of High-Tech Marketing Disasters, I made a deliberate decision to avoid giving specific advice about how companies could avoid being stupid. At the time, I thought the process was fairly obvious: study the mistakes of the past, apply self-observation to your current behavior, and if you see yourself repeating a previous example of idiocy, stop and do something else. As I point out in Chapter 1, the claim that high-tech companies are constantly running into “new” and “unique” situations that they cannot possibly be expected to anticipate and intelligently resolve is demonstrably false (particularly if you read In Search of Stupidity). The truth is that technology companies are constantly repeating the same mistakes with wearying consistency (as this second edition makes even clearer), and many of the stupid things these companies do are completely avoidable.

But despite my fond expectations, many who read the first edition claimed they needed more guidance on avoiding stupid behavior and more detailed instructions on how to pump up the frontal lobes of the collective corporate brain. Thus, I’ve added helpful analyses and, where appropriate, checklists of specific actions you can take to both avoid acting stupidly and transform yourself into a marketing Einstein after suffering a brain hiccup similar to the one that afflicted Eric Schmidt in 2005, when he decided to go to war with the press after a member of the fourth estate demonstrated Google’s potential to invade your privacy by googling “Eric Schmidt” (discussed in greater detail in Chapter 5). Although sometimes written with tongue in cheek, the analyses and the fundamental items in the lists will assist you in your quest to raise your marketing and sales IQ. Follow their sage advice, and you will find they offer you both redemption (good) and foresight (much, much better).

Another critique leveled at the first edition of In Search of Stupidity was its love of hindsight (also sometimes known as “history”). In the opinion of a fairly vocal minority, applying hindsight to the situations I wrote about was unfair; they believe I was picking on a band of dewy-eyed naifs wandering about a primordial high-tech Garden of Eden where original sin was unknown until introduced into paradise by Lucifer. (Prime candidates for the role of “Father of Lies” include Steve Jobs, Bill Gates, Larry Ellison, and a bevy of other industry movers and shakers from the period covered. The winner of the part depends on historical context and your personal opinion.)

For an overview of this viewpoint, I urge you to go to Amazon.com and read the In Search of Stupidity reviews, both good and bad, to see how people expressed themselves on the topic of hindsight. In my humble opinion, the words of Robert Hangsterfer “bob_hangsterfer” from Glendale, Wisconsin (two stars, “Rehashed stories, no guidance,” May 10, 2005) best sum up the disdain of some for learning from the mistakes of the past: “The author berates the ‘losers’ of the PC software wars and laughs at their ‘stupid decisions.’ Yet, how were the executives supposed to know what decisions would lead to success or failure?” Bob calls plaintively from the virtual pages of Amazon.

Now, in all honesty, I don’t regard this criticism as trenchant but rather somewhat tautological: “Nothing do I know; therefore I know nothing. So how can you expect me to act like I know?” But, the question deserves an answer. So, let’s take Chapter 8 of In Search of Stupidity, which deals with Intel’s $500 million+ meltdown over the Pentium’s inability to properly handle floating-point math past four digits. How could Intel possibly have known the consequences of its actions? How could Intel have possibly predicted what would happen when a major brand is besmirched by a major (or at least a perceived major) flaw in a high-profile product? What clues existed that would have possibly informed poor, confused, lost little Intel that its course of attempting to cover up a flaw in its flagship microprocessor, refusing to acknowledge the impact of the problem, and not offering to make customers whole to the extent possible was a stupid path to take?

Well, Intel could have studied the 1982 example of Johnson & Johnson, when some cockroach slipped cyanide into capsules of Extra Strength Tylenol and murdered seven people. The poisoning immediately destroyed sales of the leading brand of acetaminophen, and most observers predicted that Tylenol was doomed. In the first days of the disaster, advertising guru Jerry Della Femina, author of the classic From Those Wonderful Folks Who Gave You Pearl Harbor (Simon & Schuster, 1970) and other tomes about the world of ads and admen, was quoted by the New York Times as saying that “I don’t think they can ever sell another product under that name. There may be an advertising person who thinks he can solve this, and if they find him, I want to hire him, because then I want him to turn our water cooler into a wine cooler.”

I assume that Jerry has drunk a lot of vino in the intervening 24 years because his prediction was dead wrong. Instead of shriveling away, Johnson & Johnson launched a PR campaign that by 1994, the year in which the Intel debacle occurred, had already become a model of what to do when circumstances damage a company’s reputation or brand. The campaign included the following elements:

• An immediate press campaign by Johnson & Johnson informing the public about the poisoned capsules and warning them not to use any Tylenol product. Company executives were instructed to not obfuscate or deny the scope of the problem but instead cooperate with the media in getting the story out so as to ensure everyone heard about the poisoning.

• An immediate recall of all Tylenol capsule products on store shelves (at a cost of more than $100 million to Johnson & Johnson).

• An offer to immediately swap out all Tylenol capsules with Tylenol tablets.

• A series of forthright statements by Tylenol upper management expressing their shock and pain over the deaths.

• After the completion of the recall, an extensive PR announcement of the introduction of new Tylenol products in tamper-proof packaging, coupled with an extensive series of promotional programs offering the new products at reduced prices via price discounts, coupons, and so on.

When you read Chapter 8 of this book, contrast these actions with the ones Intel actually took.

Johnson & Johnson’s classic (and much-studied) campaign rescued the product from the marketing grave. During the crisis, Tylenol had seen its share of the market drop from 37 percent to 0 percent. A few months after the poisonings, Tylenol was back up to 24 percent market share, and today it still reigns as the leading brand of this popular painkiller.

So, there’s your answer, Bob. That’s how Intel could have known what to do. With a little study, a little history (hindsight), and a healthy dollop of common sense, we know how Intel could have saved itself a world of embarrassment and derision as well as a cool $500 million+.

(Oh, how do we know this? Because, Bob, amazingly enough, the second generation of Pentiums also suffered from math problems! But, Intel had learned its $500 million lesson. The company promptly offered to recall the “defective” processors and make dissatisfied customers whole. Since most customers didn’t really know what a floating-point unit [FPU] chip was and even more probably no longer knew how to do long division, the public—offered the security of Intel’s “guarantee blankie”—decided not to bother fixing a problem that wasn’t bothering them, and no one paid any attention to the whole imbroglio except for a small cadre of picky math people and hardware-obsessed geeks who took their new chips home and went away happy.)
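(For the curious, the flaw itself was trivially easy to demonstrate, which made the attempted cover-up all the more untenable. A pair of trigger operands circulated widely in the 1994 coverage; the sketch below is a minimal Python rendition using those commonly reported values, so treat the exact digits as illustrative. The era’s popular one-liner computed x - (x / y) * y, which is zero in exact arithmetic and famously came back as 256 on a flawed chip.)

    # Operands of the widely reported Pentium FDIV test case.
    # Correct result of x / y: approximately 1.333820449136241
    # Result reportedly returned by a flawed Pentium: 1.333739068...
    # (wrong from the fourth decimal place on)
    x, y = 4195835.0, 3145727.0
    print(x / y)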

Now, in all fairness, it’s not just high tech that suffers from a reluctance to learn from the mistakes of the past. For just a moment, let’s step outside high technology and take a look at what is perhaps America’s most seminal business, the automotive industry. As I’ve noted in Chapter 1, by the 1970s the U.S. car industry had raised the practice of building shoddy buggies to an art form. I particularly remember toward the end of the decade an abomination produced by Chrysler called the Cordoba, which was, we were assured by pitchman Ricardo Montalban, clad in “fine Corinthian leather.” It is a virtual certainty that all that survives of these cars are the leather seats; the bodies long ago turned to rust. Chryslers of this era were matched in this respect only by the Chevy Vega, a car that began to disappear in a cloud of iron oxide particles from the moment it was driven out of the showroom. If that wasn’t exciting enough, for more thrills one could always buy the “Immolator,” the Ford Pinto with that amazing, mounted-above-the-rear-axle-so-it-was-guaranteed-to-explode-when-smacked-hard gas tank.

But by the second half of the 1980s, a turnaround seemed to have taken place. Ford, General Motors, and Chrysler all appeared to turn important quality corners. Although no American cars have ever reached the benchmark standards set by Japanese carmakers, the situation definitely improved. Instead of dissolving in a heap of nanoparticles by 60,000 miles, the fate suffered by the hapless Cordoba, American cars started to be put together so well that people began to expect their homegrown tin to hit the 100,000-mile mark. U.S. carmakers latched on to the Japanese discovery that people like to “live” in their cars, and soon American cars had caught up with their overseas rivals in respect to the number of cubbies and cup holders festooning their buggies; Chrysler in particular was so diligent in this regard that some people began referring to their minivans and sedans as Slurpeemobiles. Even more telling, while K cars such as the Dodge Aries and Plymouth Reliant were never state-of-the-art automobiles, 20 years after their introduction versions of each could be seen still limping up the frozen, rust-inducing streets of New York, New England, and eastern Canada (where they were sometimes known as Toronto Taxis).

The cult of quality continued to spread across the American auto landscape during these years; at Ford, quality was “job one”; the sort-of-legendary Lee Iacocca, when he wasn’t driving the development of such abortions as the revived Chrysler Imperial, a 1970s K car with a 1960s design that appealed to people in their 80s, proclaimed that “if you can find a better car, buy it.” Not to be outdone, General Motors started a new division, Saturn, designed to prove that if you stuck a group of Americans in a remote location in the backwoods of Tennessee with nothing better to do, they’d build a small, underpowered, but pretty reliable car just as good as the Japanese were doing in the 1980s.

All this attention to quality and reliability paid off; during the late 1980s and through much of the 1990s, American cars held their own against the Japanese and seriously dented the Europeans. But by the mid-to-late 1990s, American car companies had begun to backslide. Today, Toyota Camrys and Honda Accords routinely reach 150,000 and even 200,000 miles of reliable, trouble-free use while sporting ever more sophisticated designs, increased fuel economy, and more powerful engines. Doors and body panels on Japanese cars align with geometric precision; by contrast, the body panels of Chevys and Pontiacs often look like they’ve been attached to the car by assemblers suffering from problems with both depth perception and basic geometry. On European cars, interior plastic trim usually has a plush feel and pleasing patina; on American cars, the plastic frequently appears as if it were made from recycled polyester double-knit leisure suits left over from the 1970s. I’ve owned both a Pontiac Grand Am and a Bonneville over the past ten years; both suffered from serious electrical problems before they hit 65,000 miles. (To add insult to injury, my 1999 Pontiac Bonneville underwent a transformation from transportation to tin at 69,000 miles when, in the course of two weeks, the car’s AC system ceased working and the engine’s plastic(!) intake manifold cracked in quick succession, drowning the buggy’s innards in antifreeze.) By contrast, every electrical and mechanical component in my dispose-a-car Hyundai Elantra wagon still worked properly before I gave the thing away at 130,000 miles.

A Honda Accord’s high-beam stalk flicks over with a satisfying “snick”; by contrast, the action on a Pontiac Bonneville’s light control stalk is equivalent to yanking on a stale stick of licorice. The Ford Focus, Lincoln LS, Pontiac Aztek, Chrysler 300M, and so on, and so on, have all been plagued with extensive quality complaints upon their initial introduction. Consumer Reports, the gold standard for objective auto ratings (yes, the magazine is not much fun to read, and it has that annoying left-wing, tree-hugger-life-was-better-in-the-19th-and-early-20th-centuries-when-choo-choo-trains-belched-smoke-into-the-air attitude, but it does buy its test vehicles and thus has no need to suck up to Detroit or Tokyo in the manner of publications such as Car and Driver and Road & Track) consistently awards Japanese cars seas of little red bull’s-eyes (top ranked) while American cars are awash in black dots (bottom of the barrel). This after 30 years of multiple opportunities to catch up and adjust to the new reality the Japanese had introduced to the market, namely, that well-engineered and highly reliable cars will be favored by buyers over cars that aren’t.

Quality, reliability, and design issues had become such a problem in the U.S. auto industry that by 2006 General Motors and Ford bonds had been reduced to junk status; both companies were shedding plants, employees, and market share by the bucket load, and Ford scion William Ford had been reduced to making remarkably-just-like-the-ones-made-20-years-ago ads featuring smiling (presumably because they’d not been fired for making astoundingly unreliable first-year Ford Focuses) workers promising that Ford was (again) going to make reliable and well-built cars.

In defense of the indefensible, several auto-industry observers offered the feeble excuse that the reason for the reoccurrence of poor quality in American cars was that Detroit had decided to focus on building big SUVs in its quest for profits and market domination. The problem with this theory is that while the Americans were pouring out tons of GMC Jimmies with front ends that wobbled like tops at about 60,000 miles and Eddie Bauer–version Ford Expeditions with body parts that tended to fly off the car’s exterior at moderate velocities, the Japanese were turning out giant, global-warming-contributing and ice-cap-melting monstrosities that were also highly reliable and well made (as adjudged by Consumer Reports).

What possible justification can American companies (and I blame both the bosses and the workers for their failure to get it) offer to excuse their failure to at least match the Japanese in quality and reliability? Answer: there is no excuse. What we’re dealing with is sheer idiocy and a failure to study the mistakes of the past (hindsight) so as to avoid doing the same stupid thing all over again.

I’m not quite sure why hindsight in general has developed a bad reputation amongst the high-tech cognoscenti, but I have several theories. One revolves around culture, specifically the culture of Silicon Valley, high technology’s principal engine and driver since the late 1970s. Silicon Valley is located in California, a land of self-actualization and narcissism and home to some of the silliest cults to ever plague mankind. Take est, for example. Developed by former used-car salesman (of course!) “Werner Erhardt” (not his actual name, but who cares), est was built around the platitude of “What is, is.” This was translated into a very profitable seminar program of how to arrive at “is” by getting “it.”[1] The program was highlighted by a series of exercises that took the attendees on a trip through a tomato, allowed est trainers to yell rude things at the attendees, encouraged acolytes to ignore most social niceties, and didn’t allow them to go to the bathroom (very often) during the est seminars. The core of the est belief system revolved around a mantra that “your beliefs, feelings, and experiences were the only things that were truly valid.”[2]

est had its competitors, as you’d expect. At MicroPro, for example, many in upper management’s inner circle were “graduates” of something called “The Laughing Men.” This was an offshoot of est that taught, according to what I was told at the time, pretty much the same thing. (I assumed the reason the men were laughing was that they were allowed to go to the bathroom.) At Ashton-Tate, Scientology was very popular, with the company’s founder, George Tate, being a practitioner.

est and its imitators were quite the thing in the 1970s and early 1980s and created large cadres of sociopaths[3] who felt they were immune from such interpersonal obligations as saying they were sorry when they misbehaved and totally focused on gratifying their every last whim and desire (which, come to think of it, characterizes much of upper management at many high-tech companies today). Some est graduates finally snapped out of it (often after receiving the divorce papers or a punch in the nose), and one day Werner Erhardt bailed out of the business and moved to Europe. But an examination of the current zeitgeist indicates est’s solipsistic message of relying only on your experiences to “create your own reality” has taken hold in much of high tech (as well as in other industries), and there’s probably not much of a market for teaching something everyone already believes in. And while you’re busy tending to your own reality, you tend to not have much time for worrying about others’ realities, particularly unsuccessful ones, since your reality will obviously not include their failures.

Another theory focuses on the underlying nature of engineers and programmers, many of whom continue to create new and innovative companies that they often then destroy by repeating past stupidity. The best programmers and engineers are usually “world creators,” people who like to “live” in their work and are happiest when they have complete control over every aspect of the tools, techniques, and technologies they use to create products. The frequently written about “not invented here” (NIH) syndrome is a direct result of this ethos, and the damage it can wreak on a company is illustrated in Chapter 4, which discusses how a key programming group at MicroPro finally destroyed the company over the issue of product control. A corollary to NIH is DTMNBICTCAYD, or “Don’t tell me nothing, because I created this company and you didn’t,” an affliction that also frequently leads to history repeating itself.

Again, it’s not just high tech that suffers from these syndromes. In 1908, Henry Ford created the Model T, the car that allowed Ford to transform the face of America and become for more than 20 years the largest automotive company in the world. By the standards of the day, the T was high tech, well built, easy to maintain, reliable, and cheap. But car technology was rapidly changing, and in 1912, while Ford was away on a trip, several of his engineers built a prototype of a new Model T, an “upgrade” that incorporated improvements such as a smoother ride and more stable wheelbase. Ford’s response to this attempt to improve “his” car without his exercising direct control over the process was to smash and vandalize the prototype while the shocked engineers watched.[4]

Now, in all fairness, some businesses insist on using hindsight to study past failures: the airline industry is a good example of this. After a crash or major flight faux pas, NTSB investigations do not normally follow these lines:

NTSB investigator: “Uh, captain, I note you’ve just flown your airplane straight into that mountain and killed all your passengers.”

Airplane captain: “By Jove, with the benefit of 20/20 hindsight, we can all see you’re right! But, you know, I just had to experience catastrophic aerial failure for myself to truly comprehend it. Having lived through the disaster, I’ve absorbed on a deeply personal level just how bad crashing my plane and killing all my passengers can be and will in the future understand intuitively why it’s an experience to be avoided in the first place!”

Instead, after a crash or serious operating mistake by a flight crew, the circumstances are analyzed, broken into their constituent parts, and then programmed into a flight simulator, which can be thought of as an electronic box stuffed full of hindsight. After this, flight crews from all over the world are periodically summoned to attend simulator classes so they can directly learn from all this hindsight until their instructors are satisfied they are unlikely to repeat another’s mistake.

But, enough preaching. If you’re one of those sturdy types who march to their own drummer, who seek to squeeze the juice of life from sources pure of the carping calls of second-guessing, and who desire to personally experience every emotion directly so as to live a life unadulterated by the pallid personas of those cowards who shrink from the whip of disaster and scourge of financial failure, let me salute you and bid you good luck and Godspeed!

Just do me one favor. At some point, send me your e-mail address and a description of what you’re up to. I’ll need some good material for the third edition of In Search of Stupidity.

[1] For a ribald but also highly informative look at est, I suggest you rent a copy of the 1977 film Semi-Tough. Possibly the best movie ever made by Burt Reynolds, its depiction of est (only thinly disguised in the movie) is both accurate and very funny.

[2] Outrageous Betrayal by Steven Pressman (St. Martin’s Press, 1993)

[3] I speak about this from personal experience, having had close acquaintances who went through the training and who remained unbearable for years.

[4] The Reckoning by David Halberstam (William Morrow, 1986)


As noted in Chapter 2 of this book, the release of the Altair microcomputer in 1975 heralded the beginning of the modern high-tech industry. But observers of the period also believe there was more to the Altair than just chips; the unit seemed to emit a mysterious elixir that entered the body of computer aficionados worldwide and sparked a strange war of the soul that has raged in the body of computer geekdom for more than three decades. The war is between those who advocate for free software and open, patentless technology available to all and those who believe in making substantial sums of money from selling proprietary software and the vigorous protection of intellectual property. It’s the Kumbayahs vs. the Capitalists.

Other influences may be responsible for the ongoing struggle. Perhaps Star Trek bears some of the blame. Few in microcomputing hadn’t watched the series, and as Captain Kirk, Mr. Spock, Bones, Scottie, and their innumerable successors went gallivanting through the galaxy, they seemed to have no visible means of financial support. No one in the Star Trek universe wearing green eye shades ever appeared to worry about the propensity of the various casts to blow up what you’d think were undoubtedly very expensive space ships, given their capabilities of violating the laws of physics, transporting the crew to numerous planets inhabited by women who spent most of their time wearing lingerie and dodging ray gun fire from angry races of aliens who kept screaming “kaplok!” (and who also seemed to have no monetary worries). Perhaps the reason for Captain Kirk’s insouciance lay in the fact that everyone in Star Trek had access to what were called “transporters,” magical devices that could be used to whisk you from the space ship Enterprise to a planet without having to pay a toll. Later in the series’ development, transporters could be used to create chocolate milk shakes, drinks, and even the occasional boyfriend or girlfriend via simple voice commands. And all for free!

Of course, no computer has a Star Trek–like transporter system built into it, but from the standpoint of people interested in obtaining software without forking over monetary compensation, it has something almost as good. That good thing is the “copy” command. And since software, unlike milk shakes, drinks, and boyfriends, is already digitized, just about anyone can execute this wondrous command and enjoy a cornucopia of software in an environment free of the distasteful economic friction of “paying.”
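(To make the economics concrete, here is a minimal sketch of what that wondrous command amounts to, rendered in Python for modern readers; the file names are hypothetical, and period pirates of course used their machine’s built-in copy command rather than a scripting language. The point holds in any era: a perfect duplicate of a digital product is one instruction with zero marginal cost.)

    # Duplicating a digital product: one call, zero marginal cost.
    # The file names here are hypothetical.
    import shutil

    shutil.copyfile("wordstar.com", "wordstar_for_a_friend.com")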

Technology’s interest in the concept of free software was demonstrated almost conterminously with the release of the Altair in the events surrounding the “liberation” of the first BASIC for this pioneering machine. When first available, the Altair had no useful software, and the market was eagerly awaiting the release of Altair BASIC (waiting was something Altairians were very good at doing because Altair maker MITS was legendary for announcing new products it couldn’t deliver, a habit the rest of the industry soon learned to emulate). The product had been developed by a small software firm, Micro-Soft, run by two people no one had ever heard of, Paul Allen and Bill Gates. Micro-Soft had cut a deal with MITS to receive a royalty on every sale of Altair BASIC and was eagerly waiting for a stream of revenue to flow into the tiny firm’s coffers upon the official release of the new product to a market eager to buy it.

Unfortunately for Gates’s and Allen’s short-term plans, someone had appropriated an early version of Micro-Soft’s BASIC, stored on paper tape, at a small MITS trade show held in Palo Alto in 1975. The tape was promptly reproduced and then handed out at such venues as the Homebrew Computer Club, a semilegendary group of computer hackers and enthusiasts who met regularly in Silicon Valley to share information, gossip, advice, and other things, such as “liberated” chips and especially liberated Altair software. Soon, paper tapes containing an early, buggy version of Altair BASIC were in wide use and oddly enough, no one offered to pay Micro-Soft a dime for the product.

In 1975 there was very little that was kumbayah about Bill Gates, and he responded to the purloining of Microsoft BASIC by writing an open letter to the software liberators, published in the Homebrew Computer Club’s newsletter (and in similar publications), chiding them for their thieving ways and asking them to voluntarily pay for the privilege of using his BASIC. His letter made the logical point that if people weren’t recompensed for all their time and hard work spent creating new and better software products, they would have no incentive to do so, and the software industry would wither and die.

Gates’s pleas for financial remuneration went widely unheeded. The very act of releasing the letter generated generous amounts of sneers and opprobrium from software’s kumbayahs, three hundred or four hundred letters addressed to Gates chastising him for his greed, and about three or four voluntary payments for Altair BASIC. Ruined by the premature widespread release of Altair BASIC and the financial loss this entailed, Micro-Soft went out of business, and Gates and Allen were never heard from…aga…errr…no. That’s not what happened.

What actually happened was the widespread release of Altair BASIC established the product as the de facto standard for microcomputers. Despite some idiosyncrasies, Micro-Soft’s BASIC was regarded as an engineering triumph—lean, loaded with features, and, in comparison with the mainframe and mini-computer BASICs most programmers worked with, incredibly fast. Although everyone didn’t want to pay for Altair, which later became Microsoft (with no hyphen) BASIC, everyone wanted to use it. Since Microsoft’s deal allowed the company to license the product to other firms, Microsoft was soon enjoying a tidy business licensing its BASIC to a plethora of other computer companies. In point of fact, it was the industry’s high regard for Microsoft’s BASIC that led IBM to Bill Gates’s door and enabled him to take advantage of the biggest business opportunity of the 20th century.

Nonetheless, as the industry began its rapid development, resentment on the part of software entrepreneurs grew as software piracy spread. And make no mistake, spread it did. Copying a software program worth hundreds, or even thousands, of dollars was as easy as inserting a blank floppy disk into a disk drive and typing in your system’s version of the “copy” command. Games in particular were the target of frequent liberation efforts, with user groups for systems such as the Amiga and Atari ST sponsoring “swap nights” where members were encouraged to bring in their software collections for communal sharing. Many businesses entered into the kumbayah spirit of things as well; it was a common occurrence for a company to buy one copy of a business software package such as WordStar and distribute it to every member of the company.

To counter the practice of software liberation, now usually called “piracy,” a whole host of what were eventually called “copy protection” systems and techniques were developed. Most of these focused on protecting Apple software because this computer system attracted the bulk of new software development until the release of the IBM PC. Some of the techniques employed included things such as forcing a disk drive to write to locations on a floppy nominally off limits to the hardware; “Spiradisk,” a system that wrote data to the disk surface in a big spiral; hardware “dongles,” plastic keys that contained a chip with a software key embedded into it; and so on.

In response to the efforts of one part of the software industry to prevent pirating software, another part promptly launched an effort to thwart the protectors (this had the happy effect of employing more programmers). Anticopy protection systems included software products such as Locksmith, copy-cracking boards that sucked an entire software product into memory and spit it out to disk, products that were capable of reading dongle keys, and so on, and so on, and so on. As soon as one copy protection scheme was introduced, it was immediately under attack by resourceful folks following in the glorious tradition of Altair BASIC and the Homebrew Computer Club.
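(A sketch of why the crackers usually won this arms race: however elaborate the scheme, the protection ultimately funneled through a check in the program’s own code, and a check is just a branch that can be patched out. The toy below is illustrative Python, not any real vendor’s scheme; the key value, helper function, and product name are all invented.)

    import sys

    EXPECTED_KEY = 0x5A  # invented value, standing in for the key burned into a dongle's chip

    def read_dongle_key() -> int:
        # Hypothetical stand-in for querying a hardware dongle on a port.
        return 0x00  # no dongle attached in this toy

    # The entire protection scheme reduces to this one branch; a
    # cracker's patch simply removes or inverts the comparison.
    if read_dongle_key() != EXPECTED_KEY:
        sys.exit("Protection error: key not found.")
    print("Welcome to WonderWord 3.0")  # invented product name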

By the early 1980s, IBM entered the market with its own microcomputer, and the focus of the endless cat-and-mouse game between the Capitalists and Kumbayahs shifted to the PC. The software industry’s reaction to rampant software piracy was the general introduction of copy protection for many of the major software packages. WordStar 2000, Lotus 123, dBase, and other packages incorporated elaborate schemes meant to halt, or at least slow, the piracy tide. For a brief period in the 1980s, almost a dozen software companies were pitching other software companies on the effectiveness of their respective protection systems.

I initially had a great deal of sympathy for the effort. As a field software engineer for MicroPro, I had become quite accustomed to walking into a customer’s location and seeing multiple copies of WordStar (which was not copy protected) installed on every computer in the place but being able to spot only one set of manuals available to the “user” base. Some simple math seemed to indicate a lot of bread was being snatched from my mouth, or at least from the mouth of the company paying my salary.

It was also annoying to find myself spending time providing technical support to people who were clearly flying the software Jolly Roger. One of my responsibilities was to take local technical support calls while in the office from people who were having difficulty with our word processor. A disturbingly high number of my calls went something like this:

Me: Hi! This is MicroPro technical support. How can I help you?

The “customer”: I need help installing my NEC 3550 printer.

Me: No problem! Please pull out your installation manual, and turn to page 256. (This was an age when users were a manly bunch, with thumbs thickly muscled from paging through software documentation similar in size and comprehensiveness to small encyclopedias. Not like the effete perusers of PDFs and HTML you find today.) I’ll be glad to walk you through the process.

The “customer”: Uh, I don’t have a manual in front of me.

Me: No problem. I’ll hold on the phone until you can get it.

The “customer”: Uh, I don’t have a manual.

Me: Can I ask what happened to it?

The “customer”: Uh, the dog ate it. (Other popular claims focused on thieving kids, roaring fires, and torrential flooding).

The computing press (the members of which were used to obtaining all the free software they wanted) was, as you might imagine, generally unsympathetic to the plight of the software firms. Despite giving perfunctory lip service to the idea that software companies had a right to protect their property from theft, the companies were (and are) constantly being lectured on not treating their customers “like thieves,” despite the indisputable fact that large numbers of them were (and are). In 1984, MicroPro estimated that eight pirated copies of WordStar were in use for every one sold. In 2005, estimates put software piracy rates in China at more than 90 percent.

And yet, by the end of the 1980s, practically every software company that had implemented copy protection had dropped it. Several factors were driving this trend. One was that many companies resisted buying copy-protected software because it added complexity and instability to desktop computing systems and strained the resources of IT departments. Another was that copy protection added considerably to the software industry’s support burden because users called up to complain about systems that wouldn’t install because of hardware peculiarities, lost or damaged “key” disks, arguments about the number of “valid” installs, and so on. And, although our feelings undoubtedly weren’t the strongest factor driving corporate decisions, most software firms were hearing whines and groans from their field sales and support personnel about the difficulty of dealing with protected products. WordStar 2000, for example, at one time used a copy protection system that limited users to three installations of the software on different systems. This meant that whenever I or another person had to install WordStar 2000 on a demo system at a remote location, we had to go through a wearying install/deinstall routine while listening to outraged disk drives go AAAHHHHKKKK SKRRRIIIKKK WAAKA WAAKA WAAKA in order to keep our quiver full of demo installs for future use. (Field personnel weren’t initially given non-copy-protected products. When we were, the practical facts we created “on the ground” provided another reason to drop copy protection.)

And finally, despite the theoretical losses software companies were suffering from piracy, it was hard to see in reality how piracy was hurting the companies. As the decade progressed, many software companies did indeed stumble and fall, but in no case was it possible to pin the blame on piracy. Also, it started to become apparent to software firms that piracy had a definite upside, as Microsoft had discovered years ago with the Altair. When the number of people using your software increased, your perception as the market leader increased as well. And pirated software functioned as a sort of marketing kudzu, tending to choke out the competition as use of your product spread throughout the computing populace. Once you had displaced the competition, it was possible to convert X percent of the pirates to paid users via various inducements and offers. Corporations, worried about legal liabilities, were also usually not reluctant to buy purloined software if the price was right.

Becoming the market leader also opened up opportunities for bundling and original equipment manufacturing (OEM) deals. At MicroPro, WordStar’s early ubiquity made it the favored word processing product to include with such systems as the Osborne, Kaypro, and many others. While OEM products were sold at a considerable discount from the software’s retail price, in most cases all the software publisher had to do was provide licenses and serial numbers to its customers; the OEM customer usually was responsible for manufacturing and supporting the product. One MicroPro OEM salesman referred to the firm’s OEM business as a “money-printing operation.” This model worked in the case of such products as WordStar, dBase, WordPerfect, and most notably, Microsoft Windows. Today, Microsoft’s Windows OEM business is the most profitable component of the company’s bottom line.

In the meantime, while the proprietary software companies were garnering all the attention (and making all the money) from the market, the kumbayah forces, led by an interesting fellow by the name of Richard M. Stallman, were keeping the dream of free software alive. Stallman had entered computing by way of MIT in 1971, where he worked as a systems programmer in the university’s AI lab, at that time a hotbed of innovation in such areas as LISP and related languages. Stallman developed a reputation as an ace programmer, and while at MIT developed the legendary program Emacs, a text editor backed up by a powerful and extensible macro system. Stallman was a militant believer in what was then called the “Hacker Ethic,” a belief system that preached that software and the information it represented should be open and available to all users to change and modify as they saw fit. Stallman was fervent in his belief about the evils of charging for software, at one time proclaiming that “the prospect of charging money for software was a crime against humanity.”[1]

Unfortunately for RMS, as his friends called him, by the 1980s the MIT lab was becoming corrupted by the sirens of commerce, who asked why geeks couldn’t also have fancy cars, big homes, and gorgeous girlfriends. Two AI companies (both ultimately unsuccessful) dedicated to building LISP interpreters and dedicated LISP machines spun out of the MIT lab, taking with them many of the lab’s best programmers and all, in the opinion of RMS, of the lab’s kumbayah mojo.

After a period of mourning, Stallman left the lab with a vision fixed firmly in his imagination. He would create a powerful, free, and open software environment that would allow programmers to create new and wondrous products. This environment would be based on the popular (but proprietary) UNIX operating system and, in a display of geek wit, would be called GNU (GNU’s Not UNIX; we’re sure you appreciate the recursion). And to ensure that what had happened at MIT could never happen again, he’d protect this environment with a new and innovative concept, a “copyleft” agreement that required programmers who built new software from his software to make the original GNU code, along with any changes or improvements they had made, freely available to anyone who wanted it under the GNU General Public License (GPL). When the GPL was introduced, Stallman became software’s Dr. Open, the civilized, reasonable, humanitarian advocate of all that was good and pure in the world. (Bill Gates has traditionally played the role of Mr. Proprietary, but since he’s supposed to be leaving Microsoft to cure diseases worldwide, Steve Ballmer will be appearing in the part moving forward.)
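To make the mechanism concrete, here is the sort of notice copyleft asks authors to carry in each source file; the program name and copyright line are invented, while the license wording follows the GPL’s own suggested header:

    # frobnicate.py -- a hypothetical GPL'd utility
    # Copyright (C) 1991 A. Hacker
    #
    # This program is free software; you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation; either version 2 of the License, or
    # (at your option) any later version.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    # GNU General Public License for more details.

Anyone who took this file, improved it, and shipped the result was obliged to pass the same notice, and the same freedoms, along to their own users.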

This was a sharp and revolutionary contrast with the typical end-user license agreement (EULA) that accompanied most proprietary software. Most EULAs allowed “licensees” of software only the right to copy “their” software onto a limited number of computers. In fact, by 2006 the Microsoft retail EULA for Windows allowed you to copy your $100+ copy of Windows XP onto only one computer, regardless of how many computers you owned. And boy oh boy, you’d better make sure you never, ever put a four-core processor in your computer, because that seemed to violate the Microsoft EULA. And if you read the rest of the EULA, it warned of all kinds of other things you couldn’t do, and all the warnings were written in the Scary Lawyer dialect of the English language. In fact, most EULAs are full of scary language and all kinds of implied legal threats. Interestingly enough, despite the fact that software companies have been using EULAs for decades, it is unclear whether they have any legal validity.[2] Fortunately for the industry, no one actually ever reads a EULA; if they did, everyone would probably use only free software.

Given the current excitement over open source software and technology, it would be easy to think that Stallman’s GPL took the industry by storm, but this was not the case. The first GPL was released in 1989, and the second version, the one in current use in high technology, in 1991. At the time of their issuance, few people paid them the least bit of attention. One reason for this may be that while Stallman may have thought charging for software was wrong, almost no one else thought so, especially the many programmers who were making good money selling software and didn’t want to give up their new cars, houses, and girlfriends. Another was that Stallman’s rantings about the evils of for-sale software and his rationale for giving it away sounded a bit too close to Karl Marx’s formulation of “from each according to his ability, to each according to his needs.” In an era when the Soviet dinosaur was noisily clanking and shaking its way to extinction, Stallman’s zeitgeist seemed off to many.

It’s Finally GNU for You

But perhaps the biggest obstacle to the widespread acceptance of Stallman’s credo was that although he was preaching about the glories of free software created with GNU, he hadn’t actually sat down and finished the project. Stallman had built a series of software utilities that could be used to create software (an activity beloved of many coders) but had neglected, years after the proclamation of GNU, to provide the system with its key component, an operating system kernel. Instead, it was left to a 21-year-old Finnish student at the University of Helsinki by the name of Linus Torvalds to create a working implementation of Stallman’s dream. UNIX, Linux’s distinguished father, had slowly been withdrawn from the programming community and had become increasingly proprietary and fragmented. Dozens of companies took their versions of UNIX and built custom extensions and walls around the software. This had the effect of raising UNIX prices (and allowing these companies to do a nice business selling their specialized UNIX versions). Dissatisfied with the UNIX clone he was currently using and unable to afford a proprietary version, Torvalds decided to take a stab at writing his own operating system using the GNU tools.

Linux 0.01 was released in September of 1991. Shortly after its introduction, Torvalds invited anyone interested in the OS to contribute to the development of the next release. Many people did, and the most significant open source project in the industry’s history was born.

Driven by the enthusiasm of what would become known as “the open source community,” Linux made great strides over the next few years, its progress assisted by Torvalds’s decision to release Linux under the GPL. With its growth driven by open source aficionados, by the late 1990s Linux began to do serious financial damage to companies such as SGI, Sun, and SCO, all of which soon saw their business models being ravaged by the upstart.

But while Linux was steadily eating away at the profits of the UNIX firms, the Windows world safely ignored Torvalds and his OS, for the most part. A few hobbyists played with the system,[3] and Microsoft’s behavior toward Netscape and the government’s antitrust case raised the blood pressure of free software advocates worldwide; however, that was about it. After all, Windows was very, very cheap. Most people received the product for “free” with their hardware and ignored the fact that the purchase price reflected the cost of Windows, something that was easy to do when computers cost $2,000 to $3,000. And even if you bought it, once you factored in inflation and the ability to install it on every machine you owned (and a few you didn’t), the cost per computer seemed very reasonable for an operating system that ran a huge amount of software and seemed to support just about every peripheral you owned.

Also, what many have called “the open source paradox” began to rear its ugly economic head (and still does). The paradox was that while GNU, Linux, and other open source software had been written ostensibly to liberate programmers from a world of evil capitalists, ultimately it seemed the evil capitalists were the ones most likely to benefit from the whole movement. After all, while it was nice that car companies, oil companies, lawyers, grocery stores, Burlington Coat Factory, and lots of businesses of all types were saving money on purchases of software, there was no proof that programmers were sharing in the bounty from all these expenditure reductions. And if you looked at some of the companies that espoused the use of Linux the loudest, such as IBM, you couldn’t help but wonder. After all, IBM had become America’s most prominent business colossus by building the most proprietary of proprietary software and hardware. IBM had been driven from its perch of preeminence by tiny start-up Microsoft, which had then gone on to enrich more geeks than any other company in history. Microsoft had created thousands of millionaire programmers; how many millionaire programmers had IBM ever created? For that matter, if Linux was so great, where were all the Linux millionaires?

Some Hot Tunes

In the meantime, while everyone was focusing on software, no one was paying any attention to the music business. There didn’t seem to be any reason to do so. After all, we all knew how the music business basically worked. Every few years the youth of the world generated yet another raft of disaffected grungesters, cute girls, cute boys, some performers of indeterminate sex, ghetto rappers, hip hop blasters, soul throbbers, chanteuses, lounge acts, and so on, and so on, all of whom were signed to contracts by large, institutionally corrupt music companies. These in turn distributed cash, girls (or boys), and cocaine (or the drug of your choice) to the bands while paying off radio stations to play the songs of the performers under contract to the company. When the current crop of crooners aged and lost their appeal or overdosed, they were promptly replaced by a new generation of cute girls, cute boys, and so on, and the cycle continued.

The distribution model was also well understood. Music was sold to the public via albums of records, cassette tapes, and later, almost exclusively on CDs. Most of the music on the album was filler, designed to surround the one or two good songs with enough extra musical noise to justify charging $20 per CD, a price that annoyed people who remembered that before the switch to the new technology in the early 1990s, a record had cost about eight bucks. The companies raised prices because they could but justified the new price tags to the public by talking about the expense of producing CDs (despite the fact that CDs cost less to mass-produce than vinyl records) and to industry insiders by noting that the price of drugs had skyrocketed over the years.[4]

The music industry had known for years that public dissatisfaction with the current state of affairs was high and that people were highly interested in mixing and matching songs to create custom listening sets that matched their interests and moods (I cover this point in greater detail in Chapter 14), but no one in the business cared. The music companies had the entire distribution system, the artists, and the technology under control. In fact, in the early 1990s, the industry was able to strangle a potential threat to its domination, consumer digital audio tape players, by loading them with so many integrated copy restrictions that no one was interested in buying the units. Although some music executives were dimly aware of the problems software companies had with piracy, none thought they had any lessons to learn from high tech’s digital travails.

While the music industry was ignoring both the desires of its customers and the advance of technology, software geeks worldwide were busily working on making the life of the jingle moguls miserable. First came the development of MP3 compression, a technology that allowed software to take any music recording and compress it to about a 12th of its original size with very little loss in sound quality. Work on the MP3 format had begun in 1987, and final specifications for the technology were released to the public in 1994. Once a song had been “MP3’d,” it was small enough to be easily and quickly transmitted electronically. The next step was taken with the spread of cheap read/write optical disk systems in the mid-1990s. This in turn drove the development of software that could “rip” (copy) music from CDs to the new MP3 format. The fourth and final piece of the puzzle dropped into place with the adoption of the Internet by the public. A complete solution to bypassing the music industry’s lock on the distribution system had come into existence.
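The “about a 12th” figure is just arithmetic and worth a quick check. Uncompressed CD audio has fixed, well-known parameters; divide its bit rate by the MP3 bit rates popular at the time and the ratio falls out. A minimal sketch in Python:

    # CD audio: 44,100 samples/sec x 16 bits/sample x 2 stereo channels.
    SAMPLE_RATE = 44_100
    BITS_PER_SAMPLE = 16
    CHANNELS = 2

    raw_bps = SAMPLE_RATE * BITS_PER_SAMPLE * CHANNELS   # 1,411,200 bits/sec

    # Common MP3 bit rates of the era; the exact ratio depends on the choice.
    for mp3_bps in (112_000, 128_000, 192_000):
        print(f"{mp3_bps // 1000} kbps MP3 is {raw_bps / mp3_bps:.1f}x smaller than raw CD audio")

At the widely used 128 kbps setting the ratio is about 11x; at 112 kbps it is closer to 12.6x, which is where the “about a 12th” shorthand comes from.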

The first major company to explore the possibilities the Internet opened up for music distribution was MP3.com. The service was founded in 1998 and offered downloadable music for free (the artists were compensated via a system that gave them a small royalty payment based on the number of times their songs were downloaded). MP3.com was not a music piracy site; a trained staff winnowed through the uploads and stripped out copyrighted material. Everyone thought the site was wonderful, it grew rapidly, and in 1999 MP3.com launched an IPO that netted the company $370 million.

The good times ceased to roll at MP3.com when in January 2000 it launched the My.MP3.com service. This enabled customers to register their personal CDs (you had to actually stick the CD in your PC so that MP3.com could verify it) and then stream digital copies of their music from an online “locker” hosted by the My.MP3.com service to whatever system they happened to be using. At this point, the intelligent thing for the music industry to have done was to have studied MP3.com, partnered with it, and “trained” the public to interact with the site and ones similar to it for the benefit of all concerned. Instead, the music moguls, in an act of classic and far-reaching stupidity worthy of such famous moments in rock star history as Alice Cooper tossing a hapless chicken to its death before a crowd in Toronto or Ozzy Osbourne masticating an innocent bat,[5] sued poor MP3.com for copyright infringement and found a judge dim-witted enough to agree with them. Rather than appeal the case, MP3.com handed over the bulk of its IPO money to the recording industry. Fatally weakened, the service gave up the ghost during the dot-com meltdown, to the music industry’s immense satisfaction.

The smirking and high-fiving came to an abrupt end with the appearance of a new service, Napster. Based on a peer-to-peer network system that allowed computers to directly transfer MP3 files across the Internet, Napster made little effort to prevent piracy, and the site soon became one of the most popular on the planet. The music industry, having learned absolutely nothing from the MP3.com incident, sued Napster as well and eventually was able to shut it down. As already noted in Chapter 11, Napster’s great vulnerability lay in its use of centralized servers to store the names of the files being offered to other Napster users. With Napster out of business, smart programmers quickly developed new software that didn’t require the use of centralized servers but instead relied on individual computer systems located worldwide to manage the task of file coordination. The recording industry’s intelligent response to this development was to sue 19,000 parents, children, dead Vietnam vets,[6] and others for copyright infringement, an act that had absolutely no impact on the widespread practice of downloading free MP3-compressed music. The industry also began suing the individual peer-to-peer networks such as LimeWire and Kazaa, but as soon as one network disappeared, another one promptly appeared. The music industry now existed in a Greek hell of its own creation, doomed, like Sisyphus, to push the rock of copyright litigation up and down a terrain that consisted of endless hills of peer-to-peer networks.
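The architectural difference is worth a quick sketch. In the toy Python below (the peer names and the two-hop query flood are invented for illustration), the Napster-style central index gives the lawyers a single server to subpoena, while the decentralized version leaves nothing central to shut down:

    class CentralIndex:
        """Napster-style: one index, one subpoena target."""
        def __init__(self):
            self.index = {}                        # song -> set of peers offering it
        def register(self, peer, songs):
            for song in songs:
                self.index.setdefault(song, set()).add(peer)
        def search(self, song):
            return self.index.get(song, set())

    class Peer:
        """Gnutella-style: every node indexes only itself and asks its neighbors."""
        def __init__(self, name, songs):
            self.name, self.songs, self.neighbors = name, set(songs), []
        def search(self, song, ttl=2):             # ttl bounds the query flood
            hits = {self.name} if song in self.songs else set()
            if ttl > 0:
                for n in self.neighbors:
                    hits |= n.search(song, ttl - 1)
            return hits

    central = CentralIndex()
    central.register("alice", ["song.mp3"])
    print(central.search("song.mp3"))              # {'alice'} -- one court order from empty

    a, b, c = Peer("a", ["song.mp3"]), Peer("b", []), Peer("c", ["song.mp3"])
    a.neighbors, b.neighbors = [b], [c]
    print(b.search("song.mp3"))                    # {'c'}: found with no central server at all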

Getting to the Root of the Problem

The industry’s stupidity reached a dizzying crescendo with Sony BMG Music Entertainment’s 2005 release to its customers of something that proved to be far more exciting than any music video ever produced: a “rootkit.” A rootkit is perhaps the most dangerous of all malware, a vicious piece of Borgware[7] that absorbs your computer’s operating system into a vast, evil collective over which you have no control. Rootkits integrate themselves so deeply into a computer’s innards that even high-quality antivirus and antispyware products often cannot detect them. The Sony rootkit, targeted primarily at Windows (though it also infected Macs, to a lesser extent), was loaded onto 52 of the company’s music CDs, and when someone put a rootkit-infected CD into their computer, Sony’s malware was surreptitiously installed onto the system. Once hidden on your PC, the rootkit prevented you from copying songs from the CD to another CD or to the MP3 format (though this protection was almost instantly circumvented), and any attempt to remove the rootkit once you detected it resulted in severe damage to Windows and a nonworking computer.

The Sony rootkit spread to more than half a million machines and networks, including those in the Department of Defense and other government agencies, before writer and Windows expert Mark Russinovich discovered its existence in October of 2005. He posted his discovery online, and news of the rootkit spread worldwide in a matter of hours. (Companies such as Symantec and McAfee were heavily criticized for failing to develop software that detected Sony’s malware until Russinovich’s disclosure of its existence.)
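Russinovich’s technique, usually called “cross-view” detection, is easy to sketch: ask the (possibly hooked) operating system for a file listing, scan the disk independently, and flag whatever shows up only in the second view. In the toy Python below the two listing functions are stand-ins, but the “$sys$” name prefix is the one Sony’s rootkit actually cloaked:

    def raw_disk_listing():
        # Stand-in for reading directory entries straight off the volume.
        return {"report.doc", "song.mp3", "$sys$aries.sys", "$sys$DRMServer.exe"}

    def api_listing(all_files):
        # Stand-in for the hooked OS call: the rootkit filters out its own files.
        return {f for f in all_files if not f.lower().startswith("$sys$")}

    everything = raw_disk_listing()
    visible = api_listing(everything)

    for name in sorted(everything - visible):
        print(f"present on disk but hidden from the API: {name}")

The gap between the two views is the rootkit’s shadow; Russinovich’s RootkitRevealer applied the same idea to raw NTFS structures and the Windows registry.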

Sony’s handling of this self-inflicted PR nightmare showed that the company’s collective intelligence was about even with that of the wretched bat publicly decapitated by Ozzy Osbourne. As outrage about the rootkit grew, Sony embarked on a damage control effort that included the following:

    *    Claiming the rootkit didn’t surreptitiously “phone home,” that is, use your Internet connection to contact Sony, when it did just that every time you played a song.

    *    Not realizing that the installation of the rootkit left every computer on which it had been installed with a giant security hole that any hacker with knowledge of the rootkit’s behavior could exploit.

    *    Releasing an update that supposedly fixed the security hole created by the rootkit but required you to provide your name, e-mail address, and other personal information to Sony. After installation, the update continued to send information about your choice of music to Sony, but now the company had a name to match up with your playlist.

    *    Allowing Sony’s president of global digital business, Thomas Hesse, to go on National Public Radio and conduct an interview in which he told the listening audience that “Most people don’t even know what a rootkit is, so why should they care about it?” The hapless Hesse was apparently too stupid to realize that Sony was in the process of educating most of humanity on the dangers of rootkits.

    *    Not knowing that the company supplying its rootkits, software firm First4Internet, was using an open source encoder in the rootkit.[8]

Class action lawsuits against Sony were launched in California, New York, Texas, Italy, and lots of other places. Twelve days after the discovery of the rootkit, Sony announced it would no longer sell the infected CDs. Then it announced it was recalling all of them and replacing them with non-copy-protected disks. Estimates of the eventual financial damage to Sony ran from $50 million to $500 million. (One of the reasons for the uncertainty was that thousands of Sony-infected PCs remained in use and vulnerable; as late as June of 2006, three virus creators were arrested for exploiting the security hole created by the rootkit.[9])

More to the point, the entire fiasco helped convince millions of potential buyers of online music that the easiest, cheapest, and safest thing you could do was to log onto one of those nice peer-to-peer networks where the music selection was wide, the price was zero, and the number of rootkits you could expect to encounter was low.

Back to the Future with WGA

The year 2000, a date that saw most of the world looking forward, saw Microsoft looking back to the 1980s and copy protection. That year Microsoft announced its new “product activation” program. The new copy protection system worked, in theory, by tethering your copy of Microsoft Office 2000 to a key held on Microsoft’s servers. You first installed Office, then allowed the product activator to snoop through your computer, send a profile of your hardware to a Microsoft server, and download a product key that allowed you to actually use the software you had bought. After initial trials, the scheme was extended to Windows XP when it was released in 2001. Soon, the entire copy protection system became known as Windows Product Activation (WPA).
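Microsoft never published the activation algorithm, but the general shape of hardware-tied activation is easy to sketch. In the illustrative Python below, the component list, the hash truncation, and the three-component tolerance are all invented; only the overall flow (profile the hardware, bind the key to the profile, tolerate small changes) follows the description above:

    import hashlib

    def hardware_profile(components):
        # Hash each component ID separately so the server can later tell
        # how many components changed, not just that something changed.
        return {name: hashlib.sha256(ident.encode()).hexdigest()[:8]
                for name, ident in components.items()}

    def still_activated(old, new, tolerance=3):
        changed = sum(1 for part in old if new.get(part) != old[part])
        return changed < tolerance                 # too many swapped parts -> reactivate

    original = hardware_profile({"cpu": "GenuineIntel-686", "nic": "00:0c:29:aa:bb:cc",
                                 "disk": "WD400-serial-1234", "gpu": "nv-geforce2"})
    upgraded = hardware_profile({"cpu": "AuthenticAMD-K7", "nic": "00:0c:29:aa:bb:cc",
                                 "disk": "WD400-serial-1234", "gpu": "ati-radeon"})
    print(still_activated(original, upgraded))     # True: two changes, under the limit

Swap one part too many, though, and you were off to the 800 number to commune with the phone god.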

There were, as you can imagine, some delightful aspects to WPA. If, for instance, you decided to change the motherboard, processor, graphics card, or similar hardware on your system, you ran the risk of waking up WPA and having it nag you to reinstall Windows and your other WPA-protected programs, despite the fact that the copy you were using was perfectly legal. Reinstalling Windows sometimes meant calling up a special 800 number and sitting through a long and wearying session that required you to speak every last number of the CD key that came with your copy of Windows in the hope that the phone god with whom you were communing would deign to give you a new key. If that didn’t work, you could look forward to spending some time with someone named “Ramesh” or “Gupta,” normally sitting in a call center in India or some similarly exotic location, explaining why you needed a new key that allowed you to actually use the software you’d bought…errr…“licensed.”

Freedom from Choice Is What You Want

Most people looked at WPA with the same affection shown a turd dropped in a punch bowl at a wedding, but in the main, Microsoft was able to finesse its introduction. There were several reasons for this. One was that many people received Windows bundled in with their computer and, as already noted, didn’t really think about what they had paid for the product. Another was that, as had happened before, the WPA copy protection scheme was quickly cracked, and many people simply bypassed WPA. A third was that Microsoft had given “universal keys” to many of its corporate customers; these allowed them to do mass installs of Windows at their business locations without having to waste time going through hundreds or thousands of activations. These keys quickly leaked to the general public and were employed by many people to use Windows in pretty much the same way they had for more than a decade. All in all, it turned out that most people could ignore WPA most of the time.

This seemed, to most people, fair. Microsoft now had legally sanctioned monopolies in desktop operating systems and office suites (but no mauling of the competition allowed)! The company seemed on its way to establishing a similar monopoly in network operating systems, had a strong position in the enterprise database market with its SQL Server product, was selling a great deal of Microsoft Exchange, had a nice business in mice, and by 2002 enjoyed the luxury of having approximately $49 billion in cash sitting in the company’s piggy bank. Why would any company in its right mind disturb such a wonderful status quo?

Of course, the open source and free software folks took a great deal of enjoyment in pointing out that Linux, which had steadily increased in functionality and ease of use, was free and never required you to talk to Ramesh when changing a motherboard. And in the meantime, an interesting product, called first StarOffice and then OpenOffice, had appeared on the scene. StarOffice began its life as an OS/2 office suite developed by a German company in the early 1990s. After the collapse of OS/2, the software morphed into a Windows product that was bought by Sun, ostensibly because it was cheaper for the company to buy its own office software than to buy Microsoft’s. The real reason was Sun CEO Scott McNealy’s desire to give Bill Gates and his company a case of heartburn, which he attempted to do by open sourcing most of StarOffice’s code. That code was then transformed into OpenOffice by a series of programmers dedicated to open source ideals (they didn’t become millionaires, though). Sun still sells a version of StarOffice, though there’s little compelling reason to buy it considering that OpenOffice is free.

On the other hand, although Linux was free, installing it was a royal pain that the vast majority of people had no desire to experience. The price of freedom included the privilege of choosing which Linux you would pick from dozens of different packages, called “distros,” and then attempting to install your choice on your hardware. This was made more interesting by the fact that although the core Linux operating system was usually (though not always) the same from distro to distro, the various Linux bundles often used different install procedures, had different user interfaces, looked for key files in different places, included different utilities, and so on, and so on. And, although it was nice that OpenOffice was free and that StarOffice was cheap, once one had copied Microsoft Office to all the computers it needed to be on, the price wasn’t really that bad after all.

All this changed in 2004 when Microsoft introduced, with an Orwellian fanfare of misleading language, its new Windows Genuine Advantage (WGA) program. Windows users were prompted (under threat of losing access to all updates other than ones deemed critical to security) to download a program that checked their product key for authenticity. If Microsoft determined you were indeed “Genuine,” you could continue to receive all Windows XP updates. If you weren’t, well, no updates for you, at least until WGA was cracked by hackers (it took about a week). Everything seemed to continue on much as it had before, though the I-told-you-so cackling from the free software crowd grew louder, and people started becoming a little annoyed with Microsoft. It bordered on terminal chutzpah to threaten people with losing access, via Microsoft’s update system, to such things as the latest version of Internet Explorer, a product that had been allowed to rot for five years after Microsoft dispatched Netscape. It was nice that Internet Explorer 7 would have tabbed browsing and all, but Firefox and Opera had been offering those features for years.
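Mechanically, WGA amounted to a gatekeeper sitting in front of the download servers. Here is a minimal Python sketch of that logic; the key, the blacklist, and the update catalog are all invented for illustration, though the security-patches-only fallback mirrors the policy described above:

    # Hypothetical blacklist of leaked "universal" volume keys.
    LEAKED_KEYS = {"FCKGW-RHQQ2-EXAMPLE"}

    def is_genuine(product_key):
        return product_key not in LEAKED_KEYS

    def available_updates(product_key, catalog):
        if is_genuine(product_key):
            return catalog                                   # "Genuine": everything
        return [u for u in catalog if u["critical"]]         # otherwise: security fixes only

    catalog = [{"name": "Security update KB912345", "critical": True},
               {"name": "Internet Explorer 7", "critical": False},
               {"name": "Windows Media Player 11", "critical": False}]

    print([u["name"] for u in available_updates("FCKGW-RHQQ2-EXAMPLE", catalog)])
    # ['Security update KB912345'] -- no tabbed browsing for you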

The rootkit hit the fan in July 2006 when Microsoft unleashed part deux of WGA, called “WGA Notifications.” WGA Notifications was a nifty bit of code that reminded everyone very much of a certain music company’s recent malware. Making utterly sure that WGA Notifications would be instantly loathed by humanity, Microsoft misled the world by tucking the program onto its servers and transmitting it across the wires in the company of security patches under the appellation of a “critical update” (WGA had nothing to do with security). Once installed, the WGA program revealed the following charming characteristics:

    *    It phoned Microsoft every time you logged into Windows to tattle on you if it thought your install of Windows wasn’t valid (proving that Microsoft had learned absolutely, positively nothing from the Sony rootkit disaster of 2005).

    *    WGA forced Windows to display an unending series of nagware messages urging you to get “Genuine,” that is, to fork over more money to Microsoft’s giant cash hoard.

    *    The EULA that came with WGA Notifications was misleading and didn’t properly request the user’s consent to install the software.

    *    If you wanted to “Get Genuine,” WGA didn’t make it easy for you to see options other than giving $149 to Microsoft. And there were other options. For example, if a repair shop had loaded an invalid copy of Windows onto your system during an overhaul but you had bought a legal copy that was sitting on your bookshelf somewhere, you could restore your legitimate key to your system in a process that appeased WGA. But it was a genuine pain to find information about this process through all the “Genuine” nag screens.

    *    WGA was misidentifying hundreds of thousands, maybe millions, of legitimate installs as “nongenuine.” Exactly how many was somewhat mysterious, since Microsoft was not very forthcoming on the issue. The company did say that of the 60 million validation failures its checks had recorded, 80 percent involved invalid keys. That still left about 12 million “others.” High levels of complaints were coming from a wide spectrum of users, particularly people who’d had Windows preinstalled on their laptops. As one blogger asked, “Is Dell a pirate?”

    *    If you read the EULA that came with WGA Notifications, you realized you were being asked to download a beta product that had the potential to cripple your copy of Windows.

    *    WGA provided no advantages at all to the user (but plenty to Microsoft). The program was simply a copy protection/antipiracy scheme, and people weren’t stupid.

Reaction to the whole WGA mess was exactly what you would expect. Several class action lawsuits were launched against Microsoft claiming the company had violated several states’ laws against spyware. Microsoft promptly replaced the big tattler in WGA with a littler tattler, one that would only “periodically” call home to tell on you. Microsoft also changed the EULA to inform you more clearly about its informant. A French programmer quickly released a program called RemoveWGA that kicked the Jewish mother (WGA Notifications) out of your computer, though the basic WGA system remained intact. Several Windows pundits such as Brian Livingston began to recommend that people not use Windows Update but rely instead on third-party services.[10]

Fresh from its initial success, Microsoft announced that the joys of WGA would soon be extended to all the products in its line. And to ensure that there were no embarrassing ambiguities in the future, WGA in all its glory would be directly integrated into Vista, the designated heir to XP whose father may have been Bill Gates but whose mother was clearly Steve Jobs. In the meantime, the chortles and snickers from the open sourcers turned to guffaws and screams of laughter as they fell to the floor holding their ribs from an excess of merriment.

Rumors then began to quickly spread that part three of Microsoft’s spyware system would introduce a new friend to WGA’s tattler and Jewish mother: an executioner. This would come in the form of a “kill switch” that would allow Microsoft to remotely disable your nongenuine Windows at the behest and whim of Redmond. (Industry wits noted that given the number of security attacks and virus infections afflicting Windows, most people might not notice any difference in operations.) In response to a query from Ziff-Davis columnist Ed Bott, a Microsoft PR representative, speaking in Modern Flack, provided the following chunk of verbiage:

No, Microsoft anti-piracy technologies cannot and will not turn off your computer. In our ongoing fight against piracy, we are constantly finding and closing loopholes pirates use to circumvent established policies. The game is changing for counterfeiters. In Windows Vista we are making it notably harder and less appealing to use counterfeit software, and we will work to make that a consistent experience with older versions of Windows as well. In alignment with our anti-piracy policies we have been continually improving the experience for our genuine customers, while restricting more and more access to ongoing Windows capabilities for those who choose not to pay for their software. Our genuine customers deserve the best experience, and so over time we have made the following services and benefits available only to them: Windows Update service, Download Center, Internet Explorer 7, Windows Defender, and Windows Media Player 11, as well as access to a full range of updates including non-security related benefits. We expect this list to expand considerably as we continue to add value for our genuine customers and deny value to pirates. Microsoft is fully committed to helping any genuine customers who have been victims of counterfeit software, and offer free replacement copies of Windows to those who’ve been duped by high quality counterfeiters. There is more information at our website http://www.microsoft.com/resources/howtotell.

A careful reading of this statement revealed plenty of ambiguities (we didn’t ask whether WGA was going to shut down the computer, but Windows), but Microsoft’s PR people clammed up and refused to talk further. Not making people feel any better was an online article by respected security analyst Bruce Schneier in which he reported that a Microsoft representative had told him that:

In the fall, having the latest WGA will become mandatory and if it’s not installed, Windows will give a 30 day warning and when the 30 days is up and WGA isn’t installed, Windows will stop working, so you might as well install WGA now.[11]

At this point, the open source people were snorting liquids through their noses as they rolled around the floor laughing hysterically, but Windows people were depressed. Forums and blogs exploded with comments from users saying that now was the time to finally take a look at Linux, OpenOffice, and other open source alternatives to Windows.[12] It made sense. While Microsoft was spending time and energy figuring out ways to torture many of its customers, new versions of Linux had just about caught up to Windows in terms of ease of install, functionality, and peripheral support. There were still problems, but at least you could be sure that if anyone in the open source community attempted to put something like WGA into Linux, Richard Stallman would personally throttle them. No one was enthusiastic about the prospect of allowing Bill Gates and Steve Ballmer to hold a loaded pistol to their PCs on a 24/7 basis. Given past experience with WGA, just how could you be sure that some idiot at Microsoft wouldn’t inadvertently do something that crippled your system at just the wrong time? Certainly some people thought the possibility existed. Before finishing this book, I spoke to an acquaintance at Microsoft who told me this:

I recommend to my friends that they always keep a copy of OpenOffice on their systems in the event that MS Office’s activation system locks up the software when they’re not expecting it and they can’t reach a phone or the Internet to reactivate it. Interoperability is excellent and you can usually get something done. It’s good protection against our copy protection.

It appeared that open source had a friend in Redmond, after all!

[1] Free as in Freedom: Richard Stallman’s Crusade for Free Software by Sam Williams (O’Reilly Media, 2002)

[2] http://en.wikipedia.org/wiki/EULA

[3] I purchased a retail copy of Red Hat Linux in the 1990s and attempted to install it on my PC. The install promptly failed when Linux had no idea what to do with my then state-of-the-art Adaptec SCSI interface card. A plaintive inquiry sent to the famed Linux community was answered by a condescending message that since Adaptec wasn’t releasing its drivers under the GPL, I shouldn’t expect Linux to work. I promptly gave up on Red Hat and Linux and continued using and buying Windows.

[4] This sounds like a facetious statement. It’s not. The field sales office I worked in was located in Secaucus, New Jersey. The MicroPro offices were down the hall from the studios of one of the region’s most popular Top 40 radio stations at the time, Z-100, and I became used to seeing a limo periodically drive up to our forsaken location and drop off such music stars as Cyndi Lauper, Bob Geldof, Madonna, and so on, for on-the-air PR appearances. I struck up an acquaintance with one of the DJs who worked there, and he explained in loving detail how the industry worked.

[5] Rock Stars Do the Dumbest Things by Margaret Moser (Renaissance Press, 1998). A long-buried classic worth your time!

[6] “The Shameful Destination of your Music Purchase Dollars” by David Berlind (http://blogs.zdnet.com/BTL/?p=3486), August 14, 2006

[7] The Borg are Star Trek’s baddest bad guys, a race of cyborgs ruled by queens who run around the galaxy in large cube-style ships assimilating other races while announcing “resistance is futile.” In high tech, Bill Gates is usually assumed to be the chief Borg queen. However, given Steve Jobs’s recent penchant for suing everyone, Apple’s increasing monopoly in the music world, and the suspicious design of the Apple Cube and the NeXT computer, many people think Apple’s CEO may be auditioning for the role.

[8] LAME, licensed under the GNU Lesser General Public License (LGPL)

[9] “Virus Suspects Arrested in UK and Finland” by Quentin Reade (Webuser, http://www.webuser.co.uk/news/87558.html?aff=rss), June 27, 2006

[10] Windows Secret Newsletter, issue 78 (http://windowssecrets.com/comp/060629/)

[11] http://www.schneier.com/blog/archives/2006/06/microsoft_windo_1.html

[12] I have. I’m tired of talking to Ramesh every time I swap a motherboard, something I do fairly frequently.


As noted in Chapter 2 of this book, the release of the Altair microcomputer in 1975 heralded the beginning of the modern high-tech industry. But observers of the period also believe there was more to the Altair than just chips; the unit seemed to emit a mysterious elixir that entered the bodies of computer aficionados worldwide and sparked a strange war of the soul that has raged within computer geekdom for more than three decades. The war is between those who advocate free software and open, patentless technology available to all and those who believe in making substantial sums of money from selling proprietary software and in the vigorous protection of intellectual property. It’s the Kumbayahs vs. the Capitalists.

Other influences may be responsible for the ongoing struggle. Perhaps Star Trek bears some of the blame. Few in microcomputing hadn’t watched the series, and as Captain Kirk, Mr. Spock, Bones, Scotty, and their innumerable successors went gallivanting through the galaxy, they seemed to have no visible means of financial support. No one wearing green eyeshades ever appeared in the Star Trek universe to worry about the propensity of the various casts to blow up what you’d think were undoubtedly very expensive spaceships, given their ability to violate the laws of physics while transporting the crew to numerous planets inhabited by women who spent most of their time wearing lingerie and dodging ray gun fire from angry races of aliens who kept screaming “kaplok!” (and who also seemed to have no monetary worries). Perhaps the reason for Captain Kirk’s insouciance lay in the fact that everyone in Star Trek had access to what were called “transporters,” magical devices that could be used to whisk you from the starship Enterprise to a planet without having to pay a toll. Later in the series’ development, related gadgets called “replicators” could be used to create chocolate milkshakes, drinks, and even the occasional boyfriend or girlfriend via simple voice commands. And all for free!

Of course, no computer has a Star Trek–like transporter system built into it, but from the standpoint of people interested in obtaining software without forking over monetary compensation, it has something almost as good. That good thing is the “copy” command. And since software, unlike milk shakes, drinks, and boyfriends, is already digitized, just about anyone can execute this wondrous command and enjoy a cornucopia of software in an environment free of the distasteful economic friction of “paying.”

Technology’s interest in the concept of free software was demonstrated almost simultaneously with the release of the Altair in the events surrounding the “liberation” of the first BASIC for this pioneering machine. When first available, the Altair had no useful software, and the market was eagerly awaiting the release of Altair BASIC (waiting was something Altairians were very good at doing because Altair maker MITS was legendary for announcing new products it couldn’t deliver, a habit the rest of the industry soon learned to emulate). The product had been developed by a small software firm, Micro-Soft, run by two people no one had ever heard of, Paul Allen and Bill Gates. Micro-Soft had cut a deal with MITS to receive a royalty on every sale of Altair BASIC and was eagerly waiting for a stream of revenue to flow into the tiny firm’s coffers upon the official release of the new product to a market eager to buy it.

Unfortunately for Gates’s and Allen’s short-term plans, someone had appropriated an early version of Micro-Soft’s BASIC, stored on paper tape, at a small MITS trade show held in Palo Alto in 1975. The tape was promptly reproduced and then handed out at such venues as the Homebrew Computer Club, a semilegendary group of computer hackers and enthusiasts who met regularly in Silicon Valley to share information, gossip, advice, and other things, such as “liberated” chips and especially liberated Altair software. Soon, paper tapes containing an early, buggy version of Altair BASIC were in wide use, and, oddly enough, no one offered to pay Micro-Soft a dime for the product.

In 1975 there was very little that was kumbayah about Bill Gates, and he responded to the purloining of Micro-Soft’s BASIC by writing an open letter to the software liberators, published in the Homebrew Computer Club’s newsletter (and in similar publications), chiding them for their thieving ways and asking them to voluntarily pay for the privilege of using his BASIC. His letter made the logical point that if people weren’t recompensed for all the time and hard work they spent creating new and better software products, they would have no incentive to do so, and the software industry would wither and die.

Gates’s pleas for financial remuneration went widely unheeded. The very act of releasing the letter generated generous amounts of sneers and opprobrium from software’s kumbayahs, three hundred or four hundred letters addressed to Gates chastising him for his greed, and about three or four voluntary payments for Altair BASIC. Ruined by the premature widespread release of Altair BASIC and the financial loss this entailed, Micro-Soft went out of business, and Gates and Allen were never heard from…aga…errr…no. That’s not what happened.

What actually happened was that the widespread release of Altair BASIC established the product as the de facto standard for microcomputers. Despite some idiosyncrasies, Micro-Soft’s BASIC was regarded as an engineering triumph: lean, loaded with features, and, in comparison with the mainframe and minicomputer BASICs most programmers worked with, incredibly fast. Although no one wanted to pay for Altair BASIC, which later became Microsoft (with no hyphen) BASIC, everyone wanted to use it. Since Micro-Soft’s deal with MITS allowed the company to license the product to other firms, Microsoft was soon enjoying a tidy business licensing its BASIC to a plethora of other computer companies. In point of fact, it was the industry’s high regard for Microsoft’s BASIC that led IBM to Bill Gates’s door and enabled him to take advantage of the biggest business opportunity of the 20th century.

Nonetheless, as the industry began its rapid development, resentment on the part of software entrepreneurs grew as software piracy spread. And make no mistake, spread it did. Copying a software program worth hundreds, or even thousands, of dollars was as easy as inserting a blank floppy disk into a disk drive and typing in your system’s version of the “copy” command. Games in particular were the target of frequent liberation efforts, with user groups for systems such as the Amiga and Atari ST sponsoring “swap nights” where members were encouraged to bring in their software collections for communal sharing. Many businesses entered into the kumbayah spirit of things, and it was a common occurrence for a company to buy one copy of a business software package such as WordStar and distribute it to every member of the company.

To counter the practice of software liberation, now usually called “piracy,” a whole host of what were eventually called “copy protection” systems and techniques were developed. Most of these focused on protecting Apple software because this computer system attracted the bulk of new software development until the release of the IBM PC. Some of the techniques employed included forcing a disk drive to write to locations on a floppy nominally off limits to the hardware; “Spiradisk,” a system that wrote data to the disk surface in a big spiral; hardware “dongles,” plastic keys containing a chip with a software key embedded in it; and so on.
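A dongle check is simple enough to sketch. The toy Python below uses a modern keyed hash as a stand-in for the far simpler logic burned into the chips of the era, and the secret and challenge/response flow are purely illustrative; the point is only that the program refuses to run unless the key plugged into the machine answers correctly:

    import hashlib, hmac, os

    SECRET = b"not-a-real-dongle-secret"          # shared by program and dongle

    def dongle_respond(challenge):
        # Stand-in for the chip inside the plastic key on the printer port.
        return hmac.new(SECRET, challenge, hashlib.sha1).digest()

    def protected_startup():
        challenge = os.urandom(8)                 # fresh challenge defeats simple replay
        expected = hmac.new(SECRET, challenge, hashlib.sha1).digest()
        if hmac.compare_digest(dongle_respond(challenge), expected):
            print("Key present: starting WordStar 2000...")
        else:
            raise SystemExit("Copy protection check failed.")

    protected_startup()

Cracking programs, naturally, simply patched out the check or learned to impersonate the key.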

In response to the efforts of one part of the software industry to prevent the pirating of software, another part promptly launched an effort to thwart the protectors (this had the happy effect of employing more programmers). Anticopy protection systems included software products such as Locksmith, copy-cracking boards that sucked an entire software product into memory and spit it out to disk, products that were capable of reading dongle keys, and so on, and so on, and so on. As soon as one copy protection scheme was introduced, it was immediately under attack by resourceful folks following in the glorious tradition of Altair BASIC and the Homebrew Computer Club.

In the early 1980s, IBM entered the market with its own microcomputer, and the focus of the endless cat-and-mouse game between the Capitalists and Kumbayahs shifted to the PC. The software industry’s reaction to rampant software piracy was the general introduction of copy protection for many of the major software packages. WordStar 2000, Lotus 1-2-3, dBase, and other packages incorporated elaborate schemes meant to halt, or at least slow, the piracy tide. For a brief period in the 1980s, almost a dozen software companies were pitching other software companies on the effectiveness of their respective protection systems.

I initially had a great deal of sympathy for the effort. As a field software engineer for MicroPro, I had become quite accustomed to walking into a customer’s location and seeing multiple copies of WordStar (which was not copy protected) installed on every computer in the place but being able to spot only one set of manuals available to the “user” base. Some simple math seemed to indicate a lot of bread was being snatched from my mouth, or at least from the mouth of the company paying my salary.

It was also annoying to find myself spending time providing technical support to people who were clearly flying the software Jolly Roger. One of my responsibilities was to take local technical support calls while in the office from people who were having difficulty with our word processor. A disturbingly high number of my calls went something like this:

Me: Hi! This is MicroPro technical support. How can I help you?

The “customer”: I need help installing my NEC 3550 printer.

Me: No problem! Please pull your installation manual out and turn to page 256. (This was an age when users were a manly bunch, with thumbs thickly muscled from paging through software documentation similar in size and comprehensiveness to small encyclopedias. Not like the effete perusers of PDFs and HTML you find today.) I’ll be glad to walk you through the process.

The “customer”: Uh, I don’t have a manual in front of me.

Me: No problem. I’ll hold on the phone until you can get it.

The “customer”: Uh, I don’t have a manual.

Me: Can I ask what happened to it?

The “customer”: Uh, the dog ate it. (Other popular claims focused on thieving kids, roaring fires, and torrential flooding.)

The computing press (the members of which were used to obtaining all the free software they wanted) was, as you might imagine, generally unsympathetic to the plight of the software firms. While giving perfunctory lip service to the idea that software companies had a right to protect their property from theft, the press constantly lectured the companies on not “treating their customers like thieves,” despite the indisputable fact that large numbers of those customers were (and are) exactly that. In 1984, MicroPro estimated that eight pirated copies of WordStar were in use for every one sold. In 2005, estimates put software piracy rates in China at more than 90 percent.

And yet, by the end of the 1980s, practically every software that had implemented copy protection dropped it. Several factors were driving this trend. One was that many companies resisted buying copy protected software because it added complexity and instability to desktop computing systems and strained the resources of IT departments. Another was that copy protection added considerably to the software industry’s support burden because users called up to complain about systems that wouldn’t install because of hardware peculiarities, lost or damaged “key” disks, arguments about the number of “valid” installs, and so on. And, although our feelings undoubtedly weren’t the strongest factor driving corporate decisions, most software firms were hearing whines and groans from their field sales and support personnel about the difficulty of dealing with protected products. WordStar 2000, for example, at one time used a copy protection system that limited users to three installations of the software on different systems. This meant that whenever I or another person had to install WordStar 2000 on a demo system at a remote location, we had to go through a wearying install/deinstall routine while listening to outraged disk drives go AAAHHHHKKKK SKRRRIIIKKK WAAKA WAAKA WAAKA in order to keep our quiver full of demo installs for future use. (Field personnel weren’t initially given non-copy-protected products. When we were, the practical facts we created “on the ground” provided another reason to drop copy protection).

And finally, despite the theoretical losses software companies were suffering from piracy, it was hard to see in reality how piracy was hurting the companies. As the decade progressed, many software companies did indeed stumble and fall, but in no case was it possible to pin the blame on piracy. Also, it started to become apparent to software firms that piracy had a definite upside, as Microsoft had discovered years ago with the Altair. When the number of people using your software increased, your perception as the market leader increased as well. And pirated software functioned as a sort of marketing kudzu, tending to choke out the competition as use of your product spread throughout the computing populace. Once you had displaced the competition, it was possible to convert X percent of the pirates to paid users via various inducements and offers. Corporations, worried about legal liabilities, were also usually not reluctant to buy purloined software if the price was right.

Becoming the market leader also opened up opportunities for bundling and original equipment manufacturing (OEM) deals. At MicroPro, WordStar’s early ubiquity made it the favored word processing product to include with such systems as the Osborne, Kaypro, and many others. While OEM products were sold at a considerable discount from the software’s retail price, in most case all the software publisher had to do was provide licenses and serial numbers to its customers; the OEM customer usually was responsible for manufacturing and supporting the product. One MicroPro OEM salesman referred to the firm’s OEM business as a “money-printing operation.” This model worked in the case of such products as WordStar, dBase, WordPerfect, and most notably, Microsoft Windows. Today, Microsoft’s Windows OEM business is the most profitable component in the company’s bottom line.

In the meantime, while the proprietary software companies were garnering all the attention (and making all the money) from the market, the kumbayah forces, led by an interesting fellow by the name of Richard M. Stallman, were keeping the dream of free software alive. Stallman had entered computing by way of MIT in 1971, where he worked as a systems programmer in the university’s AI lab, at that time a hotbed of innovation in such areas as LISP and related languages. Stallman developed a reputation as an ace programmer, and while at MIT developed the legendary program Emacs, a text editor backed up by a powerful and extensible macro system. Stallman was a militant believer in what was then called the “Hacker Ethic,” a belief system that preached that software and the information it represented should be open and available to all users to change and modify as they saw fit. Stallman was fervent in his belief about the evils of charging for software, at one time proclaiming that “the prospect of charging money for software was a crime against humanity.”[1]

Unfortunately for RMS, as his friends called him, by the 1980s the MIT lab was becoming corrupted by the sirens of commerce, who asked why geeks couldn’t also have fancy cars, big homes, and gorgeous girl friends. Two AI companies (both ultimately unsuccessful) dedicated to building LISP interpreters and dedicated LISP machines spun out of the MIT lab, taking with them many of the lab’s best programmers and all, in the opinion of RMS, of the lab’s kumbayah mojo.

After a period of mourning, Stallman left the lab with a vision fixed firmly in his imagination. He would create a powerful, free, and open software environment that would allow programmers to create new and wondrous products. This environment would be based on the popular (but proprietary) UNIX operating system and, in a display of geek wit, would be called GNU (GNUs not UNIX; we’re sure you appreciate the recursion). And to ensure that what had happened at MIT could never happen again, he’d protect this environment with a new and innovative concept, a “copyleft” agreement that required programmers who used his software to build new software to make the original GNU software, and any changes or improvements made to the software they had created, available for free to anyone who wanted it under the GNU General Public License (GPL). When the GPL was introduced, Stallman became software’s Dr. Open, the civilized, reasonable, humanitarian advocate of all that was good and pure in the world. (Bill Gates has traditionally played the role of Mr. Proprietary, but since he’s supposed to be leaving Microsoft to cure diseases worldwide, Steve Ballmer will be appearing in the part moving forward.)

This was a sharp and revolutionary contrast with the typical end-user license agreement (EULA) that accompanied most proprietary software. Most EULAs allowed “licensees” of software only the right to copy “their” software onto a limited number of computers. In fact, by 2006 the Microsoft retail EULA for Windows allowed you to copy your $100+ copy of Windows XP onto only one computer, regardless of how many computers you owned. And boy oh boy, better make sure you never, ever buy a four-core processor in your computer, because that seemed to violate the Microsoft EULA. And if you read the rest of the EULA, it warned of all kinds of other things you couldn’t do, and all the warnings were written in the Scary Lawyer dialect of the English language. In fact, most EULAs are full of scary language and all kinds of implied legal threats. Interestingly enough, despite that software companies have been using EULAs for decades, it is unclear whether they have any legal validity.[2] Fortunately for the industry, no one actually ever reads a EULA; if they did, everyone would probably use only free software.

Given the current excitement over open source software and technology, it would be easy to think that Stallman’s GPL took the industry by storm, but this was not the case. The first GPL was released in 1989, and the second version, the one in current use in high technology, in 1991. At the time of their issuance, few people paid them the least bit of attention. One reason for this may be that while Stallman may have thought charging for software was wrong, almost no one else thought so, especially the many programmers who were making good money selling software and didn’t want to give up their new cars, houses, and girlfriends. Another was that Stallman’s rantings about the evils of for-sale software and rationale for giving it away sounded a bit too close to Karl Marx’s formulation of “from each according to his abilities; to each according to his needs.” In an era when the Soviet dinosaur was noisily clanking and shaking its way to extinction, Stallman’s zeitgeist seemed off to many.

It’s Finally GNU for You

But perhaps the biggest obstacle to the widespread acceptance of Stallman’s credo was that although he was preaching about the glories of free software created with GNU, he hadn’t actually sat down and finished the project. Stallman had built a series of software utilities that could be used to create software (an activity beloved of many coders) but had neglected, years after the proclamation of GNU, to provide the system with its key component, an operating system. Instead, it was left to a 21-year-old Finnish student at the University of Helsinki by the name of Linus Torvalds to create a working implementation of Stallman’s dream. UNIX, Linux’s distinguished father, had slowly been withdrawn from the programming community and had become increasingly proprietary and fragmented. Dozens of companies took their version of UNIX and built custom extensions and walls around the software. This had the effect of raising UNIX prices (and allowing these companies to do a nice business selling their specialized UNIX versions). Dissatisfied with the UNIX clone he was currently using and unable to afford a proprietary version, Torvalds decided to take a stab at writing his own operating system using the GNU tools.

Linux 0.01 was released in September of 1991. Shortly after its introduction, Torvalds invited anyone interested in the OS to contribute to the development of the next release. Many people did, and the most significant open source project in the industry's history was born.

Driven by the enthusiasm of what would become known as "the open source community," Linux made great strides over the next few years, its progress assisted by Torvalds's decision to release Linux under the GPL. With its growth driven by open source aficionados, by the late 1990s Linux began to do serious financial damage to companies such as SGI, Sun, SCO, and others, all of which soon saw their business models being ravaged by the new upstart.

But while Linux was steadily eating away at the profits of the UNIX firms, the Windows world safely ignored Torvalds and his OS, for the most part. A few hobbyists played with the system,[3] and Microsoft's behavior toward Netscape and the government's antitrust case raised the blood pressure of free software advocates worldwide; however, that was about it. After all, Windows was very, very cheap. Most people received the product for "free" with their hardware and ignored the fact that their purchase price reflected the cost of Windows, something that was easy to do when computers cost $2,000 to $3,000. And even if you bought it, once you factored in the cost of inflation and the ability to install it on every machine you owned (and a few you didn't), the cost per computer seemed very reasonable for an operating system that ran a huge amount of software and seemed to support just about every peripheral you owned.

Also, what many have called "the open source paradox" began to rear its ugly economic head (and still does). The paradox was that while GNU, Linux, and other open source software had been written ostensibly to liberate programmers from a world of evil capitalists, ultimately it seemed the evil capitalists were the ones most likely to benefit from the whole movement. After all, while it was nice that car companies, oil companies, lawyers, grocery stores, Burlington Coat Factory, and lots of businesses of all types were saving money on purchases of software, there was no proof that programmers were sharing in the bounty from all these expenditure reductions. And if you looked at some of the companies that espoused the use of Linux the loudest, such as IBM, you couldn't help but wonder. After all, IBM had become America's most prominent business colossus by building the most proprietary of proprietary software and hardware. IBM had been driven from its perch of preeminence by tiny start-up Microsoft, which had then gone on to enrich more geeks than any other company in history. Microsoft had created thousands of millionaire programmers; how many millionaire programmers had IBM ever created? For that matter, if Linux was so great, where were all the Linux millionaires?

Some Hot Tunes

In the meantime, while everyone was focusing on software, no one was paying any attention to the music business. There didn’t seem to be any reason to do so. After all, we all knew how the music business basically worked. Every few years the youth of the world generated yet another raft of disaffected grungesters, cute girls, cute boys, some performers of indeterminate sex, ghetto rappers, hip hop blasters, soul throbbers, chanteuses, lounge acts, and so on, and so on, all of whom were signed to contracts by large, institutionally corrupt music companies. These in turn distributed cash, girls (or boys), and cocaine (or the drug of your choice) to the band while paying off music stations to play the songs of the performers under contract to the company. When the current crop of crooners aged and lost their appeal or overdosed, they were promptly replaced by a new generation of cute girls, cute boys, and so on, and the cycle continued.

The distribution model was also well understood. Music was sold to the public via albums of records, cassette tapes, and later, almost exclusively, CDs. Most of the music on the album was filler, designed to surround the one or two good songs with enough extra musical noise to justify charging $20 per CD, a price that annoyed people who remembered that before the switch to the new technology in the early 1990s, a record had cost about eight bucks. The companies raised prices because they could but justified the new price tags to the public by talking about the expense of producing CDs (despite the fact that CDs cost less to mass-produce than vinyl) and to industry insiders by noting that the price of drugs had skyrocketed over the years.[4]

The music industry had known for years that public dissatisfaction with the current state of affairs was high and that people were highly interested in mixing and matching songs to create custom listening sets that matched their interests and moods (I cover this point in greater detail in Chapter 14), but no one in the business cared. The music companies had the entire distribution system, the artists, and the technology under control. In fact, in the early 1990s, the industry was able to strangle a potential threat to its domination, consumer digital audio tape players, by loading them with so many integrated copy restrictions that no one was interested in buying the units. Although some music executives were dimly aware of the problems software companies had with piracy, none thought they had any lessons to learn from high tech's digital travails.

While the music industry was ignoring both the desires of its customers and the advance of technology, software geeks worldwide were busily working on making the life of the jingle moguls miserable. First came the development of MP3 compression, a technology that allowed software to take any music recording and compress it to about one-twelfth of its original size with very little loss in sound quality. Work on the MP3 format had begun in 1987, and final specifications for the technology were released to the public in 1994. Once a song had been "MP3'd," it was small enough to be easily and quickly transmitted electronically. The next step was taken with the spread of cheap read/write optical disk systems in the mid-1990s. This in turn drove the development of software that could "rip" (copy) music from CDs to the new MP3 format. The fourth and final piece of the puzzle dropped into place with the adoption of the Internet by the public. A complete solution for bypassing the music industry's lock on the distribution system had come into existence.
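
To make "about one-twelfth" concrete, here is a quick back-of-the-envelope sketch; the 4-minute song length is an assumption, and real ratios varied with the encoding bitrate:

```python
# Rough size of one CD-quality song, before and after MP3 compression.
# Assumes standard CD audio (44.1 kHz, 16-bit samples, stereo) and the
# roughly 12:1 ratio cited above; actual ratios depended on the bitrate.
SAMPLE_RATE = 44_100        # samples per second
BYTES_PER_SAMPLE = 2        # 16-bit audio
CHANNELS = 2                # stereo
SONG_SECONDS = 4 * 60       # a typical 4-minute track

raw_bytes = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS * SONG_SECONDS
mp3_bytes = raw_bytes // 12  # the ~12:1 compression cited above

print(f"Uncompressed: {raw_bytes / 1e6:.1f} MB")  # ~42.3 MB
print(f"As MP3:       {mp3_bytes / 1e6:.1f} MB")  # ~3.5 MB
```

On a dial-up modem of the era, the uncompressed track was an hours-long download; the MP3 took minutes. That difference is the entire story of what happened next.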

The first major company to explore the possibilities the Internet opened up for music distribution was MP3.com. The service was founded in 1998 and offered downloadable music for free (the artists were compensated via a system that gave them a small royalty payment based on the number of times their songs were downloaded). MP3.com was not a music piracy site; a trained staff winnowed through the uploads and stripped out copyrighted material. Everyone thought the site was wonderful, it grew rapidly, and in 1999 MP3.com launched an IPO that netted the company $370 million.

The good times ceased to roll at MP3.com when in January 2000 it launched the My.MP3.com service. This enabled customers to securely register their personal CDs (you had to actually stick the CD in your PC so that MP3.com could scan it) and then stream a digital copy from your system to an online music "locker room" hosted by the My.MP3.com service. At this point, the intelligent thing for the music industry to have done was to have studied MP3.com, partnered with it, and "trained" the public to interact with the site and ones similar to it for the benefit of all concerned. Instead, the music moguls, in an act of classic and far-reaching stupidity worthy of such famous moments in rock star history as Alice Cooper tossing a hapless chicken to its death to a crowd in Toronto or Ozzy Osbourne masticating an innocent bat,[5] sued poor MP3.com for copyright infringement and found a judge dim-witted enough to agree with them. Rather than appeal the case, MP3.com handed over the bulk of its IPO money to the recording industry. Fatally weakened, the service gave up the ghost during the dot-com meltdown, to the music industry's immense satisfaction.

The smirking and high-fiving came to an abrupt end with the appearance of a new service, Napster. Based on a peer-to-peer network system that allowed computers to directly transfer MP3 files across the Internet, Napster made little effort to prevent music piracy, and the site soon became one of the most popular on the planet. The music industry, having learned absolutely nothing from the MP3.com incident, sued Napster as well and eventually was able to shut it down. As already noted in Chapter 11, Napster's great vulnerability lay in its use of centralized servers to store the names of the files being offered to other Napster users. With Napster out of business, smart programmers quickly developed new software that didn't require the use of centralized servers but instead relied on individual computer systems located worldwide to manage the task of file coordination. The recording industry's intelligent response to this development was to sue 19,000 parents, children, dead Vietnam vets,[6] and others for copyright infringement, an act that had absolutely no impact on the widespread practice of downloading free MP3-compressed music. The industry also began suing the individual peer-to-peer networks such as LimeWire and Kazaa, but as soon as one network disappeared, another one promptly appeared. The music industry now existed in a Greek hell of its own creation, doomed, like Sisyphus, to push the rock of copyright litigation up and down a terrain of endless hills of peer-to-peer networks.
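
That centralized index is worth a quick sketch, because it explains both why Napster could be killed and why its successors could not. In rough outline (an illustration, not Napster's actual protocol), the servers held nothing but a directory of who had which file, while the files themselves moved directly between peers:

```python
# Minimal sketch of a Napster-style centralized index (illustrative only).
# The server answers "who has this file?"; the download itself then runs
# peer to peer. Shut down the index, though, and the whole service dies,
# which is exactly what the courts were able to do.
index: dict[str, list[str]] = {}   # filename -> addresses of sharing peers

def register(peer_addr: str, filenames: list[str]) -> None:
    """A peer announces the files it is willing to share."""
    for name in filenames:
        index.setdefault(name, []).append(peer_addr)

def search(filename: str) -> list[str]:
    """Return the peers claiming to have the file."""
    return index.get(filename, [])

register("192.0.2.10", ["freebird.mp3"])
print(search("freebird.mp3"))   # -> ['192.0.2.10']
```

The post-Napster networks replaced that single, suable dictionary with lookups spread across the peers themselves, which is why shutting down any one operator no longer worked.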

Getting to the Root of the Problem

The industry's stupidity reached a dizzying crescendo with Sony BMG Music Entertainment's 2005 release to its customers of something that proved to be far more exciting than any music video ever produced—a "rootkit." A rootkit is perhaps the most dangerous of all malware, a vicious piece of Borgware[7] that absorbs your computer's operating system into a vast, evil collective over which you have no control. Rootkits integrate themselves so deeply into a computer's innards that even high-quality antivirus and antispyware products often cannot detect them. The Sony rootkit, targeted primarily at Windows (though it also infected Macs, to a lesser extent), was loaded onto 52 of its music CDs, and when someone put a rootkit-infected CD into their computer, Sony's malware was surreptitiously installed onto the system. Once hidden on your PC, the rootkit prevented you from copying songs from the CD to another CD or to the MP3 format (though this protection was almost instantly circumvented), and if you did detect it, any attempt to remove it resulted in severe damage to Windows and a nonworking computer.

The Sony rootkit spread to more than half a million machines and networks, including those in the Department of Defense and other government agencies, before writer and Windows expert Mark Russinovich discovered its existence in October of 2005. He posted his discovery online, and news of the rootkit spread worldwide in a matter of hours. (Companies such as Symantec and McAfee were heavily criticized for failing to develop software that detected Sony’s malware until Russinovich’s disclosure of its existence.)
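
How do you catch software designed to be invisible? Detectors of the sort Russinovich built rely on what is often called a "cross-view diff": enumerate the system once through the high-level API a rootkit can filter and once through a lower-level path it cannot easily reach, then compare the results. The sets below are simulated stand-ins for those two views, not a real scanner, though the Sony rootkit really did cloak anything whose name began with "$sys$":

```python
# Cross-view rootkit detection in miniature (simulated data, not a real
# scanner). A rootkit filters what the high-level OS API reports, but a
# raw scan of the disk is much harder to doctor; anything present in the
# raw view and missing from the API view is being deliberately cloaked.
api_view = {"track01.cda", "player.exe"}                      # what the OS shows
raw_view = {"track01.cda", "player.exe", "$sys$cloaked.dll"}  # what is on disk

hidden = raw_view - api_view
print("Cloaked entries:", hidden)   # -> {'$sys$cloaked.dll'}
```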

Sony's handling of its self-inflicted PR nightmare showed that the company's collective intelligence was on par with that of the wretched bat publicly decapitated by Ozzy Osbourne. As outrage about the rootkit grew, Sony embarked on a damage control effort that included the following:

    *    Claiming the rootkit didn’t surreptitiously “phone home,” that is, use your Internet connection to contact Sony, when it did just that every time you played a song.

    *    Not realizing that the installation of the rootkit left every computer on which it had been installed with a giant security hole any hacker with knowledge of the rootkit’s behavior could exploit.

    *    Releasing an update that supposedly fixed the security hole created by the rootkit but that required you to provide your name, e-mail address, and other personal information to Sony. After installation, the update continued to send information about your choice of music to Sony, but now Sony had a name to match up with your playlist.

    *    Allowing Sony’s president of global digital business, Thomas Hesse, to go on National Public Radio and conduct an interview in which he told the listening audience that “Most people don’t even know what a rootkit is, so why should they care about it?” The hapless Hesse was apparently too stupid to realize that Sony was in the process of educating most of humanity on the dangers of rootkits.

    *    Not knowing that the company supplying its rootkits, software firm First4Internet, was using an open source encoder in the rootkit.[8]

Class action lawsuits against Sony were launched in California, New York, Texas, Italy, and lots of other places. Twelve days after the discovery of the rootkit, Sony announced it would no longer sell its self-infected CDs. Then it announced it was recalling all of the infected CDs and replacing them with non-copy-protected disks. Estimates of the eventual financial damage to Sony ran from $50 to $500 million; one of the reasons for the uncertainty was that thousands of Sony-infected PCs remained in use and vulnerable. As late as June of 2006, three virus creators were arrested for exploiting the security vulnerability created by the rootkit.[9]

More to the point, the entire fiasco helped convince millions of potential buyers of online music that the easiest, cheapest, and safest thing you could do was log onto one of those nice peer-to-peer networks where the music selection was wide, the price was zero, and the number of rootkits you could expect to encounter was low.

Back to the Future with WGA

The year 2000, a date that saw most of the world looking forward, saw Microsoft looking back to the 1980s and copy protection. That year Microsoft announced its new "product activation" program. The new copy protection system worked, in theory, by tethering your copy of Microsoft Office 2000 to a key held on Microsoft's servers. You first installed Office, then allowed the product activator to snoop through your computer, send a profile of your hardware to the Microsoft server, and download a product key that would allow you to actually use the software you had bought. After initial trials, the scheme was extended to Windows XP when it was released in 2001. Soon, the entire copy protection system became known as Windows Product Activation (WPA).
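
In rough outline, activation tied your product key to a fingerprint of your machine. Here is a toy model of the idea; it is my illustration, not Microsoft's actual algorithm, which scored individual hardware components with some tolerance rather than using an all-or-nothing hash:

```python
# Toy hardware-tied activation, illustrative only. The point is simply to
# show why a motherboard swap could wake the system up and demand that
# you prove, all over again, that you had paid for your software.
import hashlib

def hardware_hash(components: dict[str, str]) -> str:
    profile = "|".join(f"{k}={v}" for k, v in sorted(components.items()))
    return hashlib.sha256(profile.encode()).hexdigest()[:16]

at_activation = {"cpu": "P4 2.0GHz", "mobo": "i845", "nic": "3Com 905"}
stored_hash = hardware_hash(at_activation)   # recorded during activation

# Years later you swap the motherboard...
current = dict(at_activation, mobo="i865")
if hardware_hash(current) != stored_hash:
    print("Please reactivate Windows.")      # cue the 800 number
```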

There were, as you can imagine, some delightful aspects to WPA. If, for instance, you decided to change the motherboard, processor, graphics card, or similar hardware on your system, you ran the risk of waking up WPA and having it nag you to reinstall Windows and your other WPA-protected programs, despite the fact that the copy you were using was perfectly legal. Reinstalling Windows sometimes meant calling up a special 800 number and sitting through a long and wearying session that required you to speak every last number of the CD key that came with your copy of Windows in the hope that the phone god with whom you were communing would deign to give you a new key. If that didn't work, you could look forward to spending some time with someone named "Ramesh" or "Gupta," normally sitting in a call center in India or a similarly exotic location, explaining why you needed a new key that allowed you to actually use the software you'd bought…errr…"licensed."

Freedom from Choice Is What You Want

Most people looked at WPA with the same affection shown a turd dropped in a punch bowl at a wedding, but in the main, Microsoft was able to finesse its introduction. There were several reasons for this. One was that many people received Windows bundled with their computer and, as already noted, didn't really think about what they had paid for the product. Another was that, as had happened before, the WPA copy scheme was quickly cracked, and many people simply bypassed WPA. A third was that Microsoft had given "universal keys" to many of its corporate customers; these allowed them to do mass installs of Windows at their business locations without having to waste time going through hundreds or thousands of activations. These keys had quickly leaked to the general public and were employed by many people to use Windows in pretty much the same way they had for more than a decade. All in all, it turned out that most people could ignore WPA most of the time.

This seemed, to most people, fair. Microsoft now had legally sanctioned monopolies in desktop operating systems and office suites (but no mauling of the competition allowed)! The company seemed on its way to establishing a similar monopoly in network operating systems, had a strong position in the enterprise database market with its SQL product, was selling a great deal of Microsoft Exchange, had a nice business in mice, and by 2002 enjoyed the luxury of having approximately $49 billion in cash sitting in the company's piggy bank. Why would any company in its right mind disturb such a wonderful status quo?

Of course, the open source and free software folks took a great deal of enjoyment in pointing out that Linux, which had steadily increased in functionality and ease of use, was free and never required you to talk to Ramesh when changing a motherboard. And in the meantime, an interesting product, called first StarOffice and then OpenOffice, had appeared on the scene. StarOffice began its life as an OS/2 office suite developed by a German company in the early 1990s. After the collapse of OS/2, the software morphed into a Windows product that was bought by Sun, ostensibly because it was cheaper for the company to buy its own office software than to buy Microsoft's. The real reason was the desire of Sun CEO Scott McNealy to give Bill Gates and his company a case of heartburn, which he attempted to do by open sourcing most of StarOffice's code; this code was then transformed into OpenOffice by a series of programmers dedicated to open source ideals (they didn't become millionaires, though). Sun still sells a version of StarOffice, though there's little compelling reason to buy it considering the price, free, of OpenOffice.

On the other hand, although Linux was free, installing it was a royal pain that the vast majority of people had no desire to experience. The price of freedom included the privilege of choosing your Linux from dozens of different packages, called "distros," and then attempting to install your choice on your hardware. This was made more interesting by the fact that although the core Linux operating system was usually (though not always) the same from distro to distro, the various Linux bundles often used different install procedures, had different user interfaces, looked for key files in different places, included different utilities, and so on, and so on. And although it was nice that OpenOffice was free and that StarOffice was cheap, once one had copied Microsoft Office to all the computers it needed to be on, the price wasn't really that bad after all.

All this changed in 2004 when Microsoft introduced, with an Orwellian fanfare of misleading language, its new Windows Genuine Advantage (WGA) program. Windows users were prompted (under threat of losing access to updates other than those deemed critical to security) to download a program that checked their product key for authenticity. If Microsoft determined you were indeed "Genuine," you could continue to receive all Windows XP updates. If you weren't, well, no updates for you, at least until WGA was cracked by hackers (it took about a week). Everything seemed to continue on much as it had before, though the I-told-you-so cackling from the free software crowd grew louder, and people started becoming a little annoyed with Microsoft. It bordered on terminal chutzpah to threaten people with losing access, via Microsoft's update system, to such things as the latest version of Internet Explorer, a product that had been allowed to rot for five years after Microsoft dispatched Netscape. It was nice that Internet Explorer 7 would have tabbed browsing and all, but Firefox and Opera had been offering those features for years.

The rootkit hit the fan in July 2006 when Microsoft unleashed part deux of WGA, called “WGA notifications.” WGA notifications was a nifty bit of code that reminded everyone very much of a recent music company’s malware. Making utterly sure that WGA notifications would be instantly loathed by humanity, Microsoft misled the world by tucking the program onto its servers and transmitting it across the wires in the company of security patches with the appellation of a “critical update.” (WGA had nothing to do with security.) Once installed, the WGA program revealed the following charming characteristics:

    *    It phoned Microsoft every time you logged into Windows to tattle on you if it thought your install of Windows wasn't valid (proving that Microsoft had learned absolutely, positively nothing from the Sony rootkit disaster of 2005).

    *    WGA now forced Windows to display an unending series of nagware messages urging you to get "Genuine," that is, fork over more money to Microsoft's giant cash hoard.

    *    The EULA that came with WGA notifications was misleading and didn’t properly request the user’s consent to install the software.

    *    If you wanted to "Get Genuine," WGA didn't make it easy for you to see options other than giving $149 to Microsoft. And there were other options. For example, if a repair shop had loaded an invalid copy of Windows onto your system during an overhaul but you had bought a legal copy that was sitting on your bookshelf somewhere, you could restore your legitimate key to your system in a process that appeased WGA. But it was a genuine pain to find information about this process through all the "Genuine" nag screens.

    *    WGA was misidentifying hundreds of thousands, maybe millions, of legitimate installs as “nongenuine.” Exactly how many was somewhat mysterious, since Microsoft was not very forthcoming on the issue. The company did say that of the 60 million checks it had run, 80 percent of the machines tattled on by WGA were using invalid keys. That left about 12 million “others.” High levels of complaints were coming from a wide spectrum of users, particularly people who’d had Windows preinstalled on their laptops. As one blogger asked, “Is Dell a pirate?”

    *    If you read the EULA that came with WGA notifications, you realized you were being asked to download a beta product that had the potential to cripple your copy of Windows.

    *    WGA provided no advantages at all to the user (but plenty to Microsoft). The program was simply a copy protection/antipiracy scheme, and people weren’t stupid.

Reaction to the whole WGA mess was exactly what you would expect. Several class action lawsuits were launched against Microsoft claiming the company had violated laws against spyware in several states. Microsoft promptly replaced the big tattler in WGA with a littler tattler, one that would only "periodically" call home to tell on you. Microsoft also changed the EULA to inform you more clearly about its informant. A French company quickly released a program called RemoveWGA that kicked the Jewish mother (WGA notifications) out of your computer, though the basic WGA system remained intact. Several Windows pundits, such as Brian Livingston, began to recommend that people not use Windows Update but instead rely on third-party services.[10]

Fresh from its initial success, Microsoft announced that the joys of WGA would soon be extended to all the products in its line. And to ensure that there were no embarrassing ambiguities in the future, WGA in all its glory would be directly integrated into Vista, the designated heir to XP whose father may have been Bill Gates but whose mother was clearly Steve Jobs. In the meantime, the chortles and snickers from the open sourcers turned to guffaws and screams of laughter as they fell to the floor holding their ribs from an excess of merriment.

Rumors then began to quickly spread that part three of Microsoft’s spyware system would introduce a new friend to WGA’s tattler and Jewish mother: an executioner. This would come in the form of a “kill switch” that would allow Microsoft to remotely disable your nongenuine Windows at the behest and whim of Redmond. (Industry wits noted that given the number of security attacks and virus infections afflicting Windows, most people might not notice any difference in operations.) In response to a query from Ziff-Davis columnist Ed Bott, a Microsoft PR representative, speaking in Modern Flack, provided the following chunk of verbiage:

No, Microsoft anti-piracy technologies cannot and will not turn off your computer. In our ongoing fight against piracy, we are constantly finding and closing loopholes pirates use to circumvent established policies. The game is changing for counterfeiters. In Windows Vista we are making it notably harder and less appealing to use counterfeit software, and we will work to make that a consistent experience with older versions of Windows as well. In alignment with our anti-piracy policies we have been continually improving the experience for our genuine customers, while restricting more and more access to ongoing Windows capabilities for those who choose not to pay for their software. Our genuine customers deserve the best experience, and so over time we have made the following services and benefits available only to them: Windows Update service, Download Center, Internet Explorer 7, Windows Defender, and Windows Media Player 11, as well as access to a full range of updates including non-security related benefits. We expect this list to expand considerably as we continue to add value for our genuine customers and deny value to pirates. Microsoft is fully committed to helping any genuine customers who have been victims of counterfeit software, and offer free replacement copies of Windows to those who’ve been duped by high quality counterfeiters. There is more information at our website http://www.microsoft.com/resources/howtotell.

A careful reading of this statement revealed plenty of ambiguities (we didn't ask whether WGA was going to shut down the computer, but Windows), but Microsoft's PR people clammed up and refused to talk further. Not making people feel any better was an online article by respected security analyst Bruce Schneier in which he reported that a Microsoft representative had told him that:

In the fall, having the latest WGA will become mandatory and if it’s not installed, Windows will give a 30 day warning and when the 30 days is up and WGA isn’t installed, Windows will stop working, so you might as well install WGA now.[11]

At this point, the open source people were snorting liquids through their noses as they rolled around the floor laughing hysterically, but Windows people were depressed. Forums and blogs exploded with comments from users saying that now was the time to finally take a look at Linux, OpenOffice, and other open source alternatives to Windows.[12] It made sense. While Microsoft was spending time and energy figuring out ways to torture many of its customers, new versions of Linux had just about caught up to Windows in terms of ease of install, functionality, and peripheral support. There were still problems, but at least you could be sure that if anyone in the open source community attempted to put something like WGA into Linux, Richard Stallman would personally throttle them. No one was enthusiastic about the prospect of allowing Bill Gates and Steve Ballmer to hold a loaded pistol to their PCs on a 24/7 basis. Given the past experiences with WGA, just how could you be sure that some idiot at Microsoft wouldn't inadvertently do something that crippled your system at just the wrong time? Certainly some people thought the possibility existed. Before finishing this book, I spoke to an acquaintance at Microsoft who told me this:

I recommend to my friends that they always keep a copy of OpenOffice on their systems in the event that MS Office's activation system locks up the software when they're not expecting it and they can't reach a phone or the Internet to reactivate it. Interoperability is excellent and you can usually get something done. It's good protection against our copy protection.

It appeared that open source had a friend in Redmond, after all!

[1] Free as in Freedom: Richard Stallman’s Crusade for Free Software by Sam Williams (O’Reilly Media, 2002)

[2] http://en.wikipedia.org/wiki/EULA

[3] I purchased a retail copy of Red Hat Linux in the 1990s and attempted to install it on my PC. The install promptly failed when Linux had no idea what to do with my then state-of-the-art Adaptec SCSI interface card. A plaintive inquiry sent to the famed Linux community was answered by a condescending message that since Adaptec wasn't releasing its drivers under the GPL, I shouldn't expect Linux to work. I promptly gave up on Red Hat and Linux and continued using and buying Windows.

[4] This sounds like a facetious statement. It’s not. The field sales office I worked in was located in Secaucus, New Jersey. The MicroPro offices were down the hall from the studios of one of the region’s most popular Top 40 radio stations at the time, Z-100, and I became used to seeing a limo periodically drive up to our forsaken location and drop off such music stars as Cyndi Lauper, Bob Geldof, Madonna, and so on, for on-the-air PR appearances. I struck up an acquaintance with one of the DJs who worked there, and he explained in loving detail how the industry worked.

[5] Rock Stars Do the Dumbest Things by Margaret Moser (Renaissance Press, 1998). A long-buried classic worth your time!

[6] “The Shameful Destination of your Music Purchase Dollars” by David Berlind (http://blogs.zdnet.com/BTL/?p=3486), August 14, 2006

[7] The Borg are Star Trek's baddest bad guys, a race of cyborgs ruled by queens who run around the galaxy in large cube-style ships assimilating other races while announcing "resistance is futile." In high tech, Bill Gates is usually assumed to be the chief Borg queen. However, given Steve Jobs's recent penchant for suing everyone, Apple's increasing monopoly in the music world, and the suspicious design of the Apple Cube and the NeXT computer, many people think Apple's CEO may be auditioning for the role.

[8] LAME, licensed under the GNU Lesser General Public License (LGPL)

[9] "Virus Suspects Arrested in UK and Finland" by Quentin Reade (Webuser, http://www.webuser.co.uk/news/87558.html?aff=rss), June 27, 2006

[10] Windows Secret Newsletter, issue 78 (http://windowssecrets.com/comp/060629/)

[11] http://www.schneier.com/blog/archives/2006/06/microsoft_windo_1.html

[12] I have. I’m tired of talking to Ramesh every time I swap a motherboard, something I do fairly frequently.


THE COMPLETE TITLE of In Search of Stupidity includes the phrase "High-Tech Marketing Disasters," and from these words you might conclude that it's a firm's marketers who usually bear the chief responsibility for major corporate catastrophes. This isn't true. To be worthy of mention in this book, it took the combined efforts of personnel in upper management, development, sales, and marketing, all fiercely dedicated to ignoring common sense, the blatantly obvious, and the lessons of the past. Major failure doesn't just happen: To achieve it, everyone must pull together as a team.

Chapter 4 of In Search of Stupidity helps drive this point home. For MicroPro to plummet from the software industry’s pinnacle to permanent oblivion took a) upper management’s mishandling of development and market timing, b) the marketing department’s idiotic decision to create a fatal product-positioning conflict, and c) the development team’s dimwitted decision to rewrite perfectly good code at a critical time because it wanted to write even better code that no one really needed. A magnificent example of different groups within a company all cooperating to ensure disaster.

In this spirit, I've decided to include selected portions of an interview with Joel Spolsky that ran in Softletter. (By the way, this interview was "picked up" by Slashdot (http://www.slashdot.org), a website dedicated to technology, coding, open source, and all things nerd. It generated a considerable amount of comment and controversy. You can search the Slashdot archives to read what other people thought and gain further insight into Joel's opinions.)

I regard Joel Spolsky, president and one of the founders of Fog Creek Software (http://www.fogcreek.com), as one of the industry's most fascinating personalities. He worked at Microsoft from 1991 to 1994 and has more than 10 years of experience managing the software development process. As a program manager on the Microsoft Excel team, Joel designed Excel Basic and drove Microsoft's Visual Basic for Applications (VBA) strategy. His website, Joel on Software (http://www.JoelonSoftware.com), is visited by thousands of developers worldwide every day. I reviewed his first book, User Interface Design for Programmers (Apress, 2001), and regard it as a must-have for anyone involved in developing and marketing software.

Why this interview? If you’ve ever worked on the software side of high technology, you’ve probably experienced the following: After a careful analysis of your product’s capabilities, the competition, and the current state of the market, a development and marketing plan is created. Release time frames are discussed and agreed upon. Elaborate project management templates are built, and milestones are set. You post the ship date up on a wall where everyone in your group can see it, and your team begins to work like crazed beavers to meet your target.

Then, as the magic day looms nearer, ominous sounds emit from development. Whispers of “crufty code” and “bad architecture” are overheard. Talk of “hard decisions” that “need to be made” starts to wend its way through the company grapevine. People, especially the programmers, walk by the wall on which you’ve mounted the ship date, pause, shake their heads, and keep walking.

Finally, the grim truth is disgorged. At a solemn meeting, development tells everyone the bad news. The code base of the current product is a mess. Despite the best and heroic efforts of the programmers, they’ve been unable to fix the ancient, bug-ridden, fly-bespeckled piece of trash foisted on them by an unfeeling management. No other option remains. The bullet must be bitten. The gut must be sucked up. The Rubicon must be crossed. And as that sinking feeling gathers in your stomach and gains momentum as it plunges toward your bowels, you realize that you already know what you’re about to hear. And you already know that, after hearing it, you’ll be groping blindly back to your cubicle, your vision impeded by the flow of tears coursing down your face, your eyes reddened by the sharp sting of saline. And you’ve already accepted it’s time to get your resume out and polished, because the next few financial quarters are going to be very, very ugly.

And then they say it. The product requires a ground-up rewrite. No other option exists.

Oh, you haven’t been through this yet? Well, just wait. You will. However, as you’ll learn, what you’re going to be told may very well not be true. After reading this interview, you’ll be in a better position to protect your vision and your career in the wonderful world of high tech.

And now . . .

An Interview with Joel Spolsky

Rick Chapman: Joel, what, in your opinion, is the single greatest development sin a software company can commit?

Joel Spolsky: Deciding to completely rewrite your product from scratch, on the theory that all your code is messy and bug-prone and is bloated and needs to be completely rethought and rebuilt from ground zero.

Uh, what’s wrong with that?

Because it’s almost never true. It’s not like code rusts if it’s not used. The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. There’s nothing wrong with it.

Well, why do programmers constantly go charging into management’s offices claiming the existing code base is junk and has to be replaced?

My theory is that this happens because it’s harder to read code than to write it. A programmer will whine about a function that he thinks is messy. It’s supposed to be a simple function to display a window or something, but for some reason it takes up two pages and has all these ugly little hairs and stuff on it and nobody knows why. OK. I’ll tell you why. Those are bug fixes. One of them fixes that bug that Jill had when she tried to install the thing on a computer that didn’t have Internet Explorer. Another one fixes a bug that occurs in low-memory conditions. Another one fixes some bug that occurred when the file is on a floppy disk and the user yanks out the diskette in the middle. That LoadLibrary call is sure ugly, but it makes the code work on old versions of Windows 95.

When you throw that function away and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work.

Well, let’s assume some of your top programmers walked in the door and said, “We absolutely have to rewrite this thing from scratch, top to bottom.” What’s the right response?

What I learned from Charles Ferguson’s great book (High St@kes, No Prisoners [Crown, 1999]) is that you need to hire programmers who can understand the business goals. People who can answer questions like “What does it really cost the company if we rewrite?” “How many months will it delay shipping the product?” “Will we sell enough marginal copies to justify the lost time and market share?” If your programmers insist on a rewrite, they probably don’t understand the financials of the company, or the competitive situation. Explain this to them. Then get an honest estimate for the rewrite effort and insist on a financial spreadsheet showing a detailed cost/benefit analysis for the rewrite.

Yeah, great, but, believe it or not, programmers have been known to, uh, “shave the truth” when it comes to such matters.

What you’re seeing is the famous programmer tactic: All features that I want take 1 hour, all features that I don’t want take 99 years. If you suspect you are being lied to, just drill down. Get a schedule with granularity measured in hours, not months. Insist that each task have an estimate that is 2 days or less. If it’s longer than that, you need to break it down into subtasks or the schedule can’t be realistic.

Are there any circumstances where a complete code rewrite is justified?

Probably not. The most extreme circumstance I can think of would be if you are simultaneously moving to a new platform and changing the architecture of the code dramatically. Even in this case you are probably better off looking at the old code as you develop the new code.

Hmm. Let’s take a look at your theory and compare it to some real-world software meltdowns. For instance, what happened at Netscape?

Way back in April 2000, I wrote on my website that Netscape made the single worst strategic mistake that any software company can make by deciding to rewrite their code from scratch. Lou Montulli, one of the five programming superstars who did the original version of Navigator, e-mailed me to say, "I agree completely; it's one of the major reasons I resigned from Netscape." This one decision cost Netscape 3 years. That's 3 years they spent with their prize aircraft carrier in 200,000 pieces in dry dock. They couldn't add new features, couldn't respond to the competitive threats from IE, and had to sit on their hands while Microsoft completely ate their lunch.

OK, how about Borland? Another famous meltdown. Any ideas?

Borland also got into the habit of throwing away perfectly good code and starting from scratch. Even after the purchase of Ashton-Tate, Borland bought Arago and tried to make that into dBASE for Windows, a doomed project that took so long that Microsoft Access ate their lunch. With Paradox, they jumped into a huge rewrite effort with C++ and took forever to release the Windows version of the product. And it was buggy and slow where Paradox for DOS was solid and fast. Then they did it all over again with Quattro Pro, rewriting it from scratch and astonishing the world with how little new functionality it had.

Yeah, and their pricing strategy didn’t help.

While I was on the Excel team, Borland cut the MSRP on Quattro Pro from around $500.00 to around $100.00. Clueless newbie that I was, I thought this was the beginning of a bloody price war. Lewis Levin,[1] Excel BUM (business unit manager), was ecstatic. "Don't you see, Joel, once they have to cut prices, they've lost." He had no plan to respond to the lower price. And he didn't need to.

Having worked at Ashton-Tate, I have to tell you the dBASE IV code base was no thing of beauty. But I take your point. Actually, I saw this syndrome at work in Ashton-Tate's word-processing division. After they bought MultiMate, they spent about 2 years planning a complete rewrite of the product and wasted months evaluating new "engines" for the next version. Nothing ever happened. When a new version of the product was released, it was based on the same "clunky" engine everyone had been moaning about. Of course, in those 2 years WordPerfect and Microsoft ate Ashton-Tate's word-processing lunch.

Ashton-Tate had a word processor?

Yes, but nothing as good as WordStar, mind you!

Hmm. That reminds me that Microsoft learned the “no rewrite” lesson the hard way. They tried to rewrite Word for Windows from scratch in a doomed project called “Pyramid,” which was shut down, thrown away, and swept under the rug. Fortunately for Microsoft, they did this with parallel teams and had never stopped working on the old code base, so they had something to ship, making it merely a financial disaster, not a strategic one.

OK, Lotus?

Too many MBAs at all levels and not enough people with a technical understanding of what could and needed to be built.

And I suppose building a brand-new product called "Jazz"[2] instead of getting 1-2-3 over to the Mac as quickly as possible, thus staking Microsoft to a 2-year lead with Excel, is an example of the same thing?

Actually, they made a worse mistake: They spent something like 18 months trying to squeeze 1-2-3/3.0 into 640KB. By the time the 18 months were up, they hadn't succeeded, and in the meantime, everybody had bought 386s with 4 megs of RAM. Microsoft always figured that it's better to let the hardware catch up with the software rather than spend time writing code for old computers owned by people who aren't buying much software anymore.

WordPerfect?

That's an interesting case and leads to another development sin software companies often make: using the wrong level of tools for the job. At WordPerfect, everything, and I mean everything, had to be written in assembler. Company policy. If a programmer needed a little one-off utility, it had to be hand-coded and hand-optimized in assembler. They were the only people on Earth writing all-assembler apps for Windows. Insane. It's like making your ballerinas wear balls and chains and taping their arms to their sides.

What should they have been coding in?

In those days? C. Or maybe Pascal. Programmers should only use lower-level tools for those parts of the product where they are adding the most value. For example, if you're writing a game where the 3D effects are your major selling point, you can't use an off-the-shelf 3D engine; you have to roll your own. But if the major selling point of your game is the story, don't waste time getting great 3D graphics—just use a library. But WordPerfect was writing UI code that operates in "user time" and doesn't need to be particularly fast. Hand-coded assembler is insane and adds no value.

Yes, but isn't such code tight and small? Don't products built this way avoid the dreaded "bloatware" label?

Don’t get me started! If you’re a software company, there are lots of great business reasons to love bloatware. For one, if programmers don’t have to worry about how large their code is, they can ship it sooner. And that means you get more features, and features make users’ lives better (if they use them) and don’t usually hurt (if they don’t). As a user, if your software vendor stops, before shipping, and spends 2 months squeezing the code down to make it 50 percent smaller, the net benefit to you is going to be imperceptible, but you went for 2 months without new features that you needed, and that hurt.

Could this possibly account for the fact that no one uses WordStar version 3.3 anymore despite the fact it can fit on one 1.4 meg floppy?

That and Control-K. But seriously, Moore's law makes much of the whining about bloatware ridiculous. In 1993, Microsoft Excel 5.0 took up about $36.00 worth of hard drive space. In 2000, Microsoft Excel 2000 took up about $1.03 in hard drive space. All adjusted for inflation. So stop whining about how bloated it is.
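
Joel's dollar figures are easy to reconstruct with round numbers. The install sizes and per-megabyte disk prices below are my assumptions rather than his sources, but they land on his amounts:

```python
# Back-of-the-envelope reconstruction of Joel's disk-cost point. Sizes
# and prices are assumed round numbers: ~$1.00/MB of hard disk in 1993,
# ~$0.01/MB (about $10/GB) in 2000.
excel5_mb, price_1993 = 36, 1.00      # ~36 MB install in 1993
excel2000_mb, price_2000 = 103, 0.01  # ~103 MB install in 2000

print(f"Excel 5.0  (1993): ${excel5_mb * price_1993:.2f}")     # $36.00
print(f"Excel 2000 (2000): ${excel2000_mb * price_2000:.2f}")  # $1.03
```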

Well, we’ve had much personal experience with the press slamming a product we were managing. For example, for years reviewers gave MicroPro hell over the fact it didn’t support columns and tables. Somehow the fact that the product would fit on a 360KB floppy just didn’t seem to mean as much as the idea that the reviewer couldn’t use our product to write his or her resume.

There’s a famous fallacy that people learn in business school called the 80/20 rule. It’s false, but it seduces a lot of dumb software start-ups. It seems to make sense. Eighty percent of the people use 20 percent of the features. So you convince yourself that you only need to implement 20 percent of the features, and you can still sell 80 percent as many copies. The trouble here, of course, is that it’s never the same 20 percent. Everybody uses a different set of features. When you start marketing your “lite” product and you tell people, “Hey, it’s lite, only 1MB,” they tend to be very happy, then they ask you if it has word counts, or spell checking, or little rubber feet, or whatever obscure thing they can’t live without, and it doesn’t, so they don’t buy your product.

Let’s talk about product marketing and development at Microsoft. How did these two groups work together?

Well, in theory, the marketing group (called "product management") was supposed to give the development team feedback on what customers wanted. Feature requests from the field. That kind of stuff. In reality, they never did.

Really?

Really. Yes, we listened to customers, but not through product management—they were never very good at channeling this information. So the program management (design) teams just went out and talked to customers ourselves. One thing I noticed pretty quickly is that you don’t actually learn all that much from asking customers what features they want. Sure, they’ll tell you, but it’s all stuff you knew anyway.

You paint a picture of the programmer almost as a semideity. But in my experience, I've seen powerful technical personalities take down major companies. For instance, in The Product Marketing Handbook for Software (Aegis Resources, 2006), I describe how the MicroPro development staff's refusal to add the aforementioned columns and table features to WordStar badly hurt the product's sales.[3] How do you manage situations like these?

This is a hard problem. I’ve seen plenty of companies with prima donna programmers who literally drive their companies into the ground. If the management of the company is technical (think Bill Gates), management isn’t afraid to argue with them and win—or fire the programmer and get someone new in. If the management of the company is not technical enough (think John Sculley), they act like scared rabbits, strangely believing that this one person is the only person on the planet who can write code, and it’s not a long way from there to the failure of the company.

If you’re a nontechnical CEO with programmers who aren’t getting with the program, you have to bite the bullet and fire them. This is your only hope. And it means you’re going to have to find new technical talent, so your chances aren’t great. That’s why I don’t think technology companies that don’t have engineers at the very top have much of a chance.

Joel, thank you very much.


The value of a business book is in its ability to provide you guidance to the future. If you've read In Search of Stupidity: Over 20 Years of High-Tech Marketing Disasters, any edition, you understand I am highly skeptical of most business tomes that claim to have unlocked the formula for sure business success. But the best proof is in the telling, and in 2005, I wrote this analysis of what was happening at Apple for the second edition of Stupidity. I leave it to your judgment as to my ability to analyze the present so as to predict the future.


In the third edition, Apple now substitutes for Microsoft in the corresponding section.

Excerpted from the “On Avoiding Stupidity” chapter 

For instance, let's take the success of Microsoft Windows, to date high-tech's most dizzying product triumph. Overcoming its humble roots as a clumsy imitation of the far more sophisticated Macintosh operating system, Windows succeeded so thoroughly from 1990 onward that by 2005 it had driven Microsoft to more than $40 billion in revenue and 60,000 employees, with 2005 profits exceeding $3 billion. Windows was first announced in 1983, when the GUI wars were taking shape in the wake of Xerox's pioneering work in the field, and the first version was released in 1985. Over the years Windows bested GEM, VisiOn, GeoWorks, the Mac OS, and, most notably, OS/2 in the war for supremacy. What clearer example could exist of a company having a strategic vision for a product and then pursuing that vision to ultimate success?

But for Windows to achieve its current monopoly position, the following events had to occur:

  • Xerox, the original inventor of what we now call the graphical user interface, had to never develop a clue about how to commercialize most of the ground-breaking developments that came out of its PARC labs.
  • Digital Research had to blow off IBM when it came calling for an operating system for the original IBM PC.
  • IBM, which during the early years of its relationship with Microsoft could have crushed the company like a bug, had to behave as if prefrontally lobotomized from 1985 to 1995 as the gruesome OS/2 saga ground on.
  • Apple had to decide to not license the Macintosh operating system, a decision that led to the company going from approximately 30 percent market share in the early 1980s to 4 percent market share by 2006.

Other events that contributed to the eventual success of Windows included the following:

  • The failure of industry pioneer VisiCorp to release a successful version of VisiOn, an early graphical OS for the PC that scared Bill Gates into almost shampooing his hair.
  • Apple suing Digital Research over its DOS shell, GEM, shortly after the product's release. GEM was a direct Windows competitor and far more sophisticated than early releases of Windows in its look and feel (it looked and felt like a Mac). Before the Apple suit crippled the product, GEM was on the verge of achieving widespread adoption in the PC market.
  • An unexpected run-up in the cost of memory chips (and temporary violation of Moore’s law), which helped cripple the release of OS/2 1.0.

Now, how does one fashion a credible strategic plan that assumes your competition will agree to collectively shoot itself in the forebrain while unpredictable market forces break in such a way as to help ensure your eventual success?

The answer is that you can't. Microsoft's success with Windows, worth somewhere between $60 and $100 billion depending on how you count these things (and still counting!), is as much a result of good luck and stupidity on the part of its competition as of any vision on the part of Microsoft. No strategic plan that anyone would take seriously could include the actual events as they unfolded over the decades. And whatever strategic plans Microsoft had for Windows in 1983 were obsolete by the product's release in 1985. And whatever plans Microsoft had in 1985 were obsolete by 1987, the year of OS/2's release. And certainly by 1990 everyone's plans for Windows were obsolete as a technically inferior but useful DOS shell swept to market supremacy over far more sophisticated and feature-rich rivals that, for all their polish, couldn't run much actual software.

But in the meantime, as I've already pointed out, while Microsoft's competition was engaged in various sorts of self-immolation, the company was continually executing business basics effectively. From the early 1980s through the 1990s the company entered the word processing, spreadsheet, and business presentation markets with good products that sold well and received generally favorable reviews. During this same period, Microsoft was creating a PR campaign that effectively developed a pleasing persona around Bill Gates, one that supported Microsoft's marketing and sales efforts. The company also continuously improved and refined its development products, releasing new IDEs, languages, and tools that were well received by developers. In 1993 the company fortuitously stumbled onto the Office concept and rode its success to even larger profits. It also figured out how to make profits during the Internet bubble by selling products such as FrontPage. In the aggregate, all these events contributed to Microsoft's success, and little strategic planning was involved. Microsoft simply gravitated to good opportunities, executed well (or at least better than its competitors), and reaped the rewards.

You’re not convinced? OK, let’s look at another seminal company in the industry, one undergoing a seemingly miraculous rebirth in high tech. Let’s look at Apple, a company I had quite a bit of fun with in the first edition of Stupidity.

Now, before we go further, I'm going to give you a test. Let's imagine, for a few minutes, that you have gone down to the mall to visit your local Apple store in order to peruse its wares and decide whether you're going to buy a sleek, dazzling new Apple Intel-based Powerbook or save a few hundred bucks and buy a boring but decent Dell laptop. As you fight your way into the place past hordes of crazed shoppers battling to scarf up the latest iPod, a dazzling light suddenly appears from nowhere in the middle of the store's ceiling. The light grows brighter and more intense, and everyone in the place, except you, falls into a deep sleep and slumps gently to the store's floor, still clutching their iPod boxes. As you watch in amazement, the light contracts into a glowing orb that descends to the floor and coalesces into a beautiful girl. (I feel these Disney trappings most appropriate in light of Steve Jobs's ascension to the Disney board of directors as a result of the Pixar buyout.) This dazzling apparition is dressed in a gown of diaphanous gold filigree and wafts a wand so white it almost hurts to look at it. As you gape in amazement, the wand glows and shimmers while emitting magical sparks that seem to distort reality itself! You reach out in delight to touch this marvelous instrument, but the vision in front of you quickly yanks it away with a warning that the thing scratches like heck. Tucking the wand safely away in a silicone rubber holster, the magical lady explains that she is your Apple Fairy Godmother and that she has come to ask you to develop an enchanted strategic business plan.

You are, she explains as you listen with rapt attention, to help Good King Steve Jobs come up with a wondrous way to help Apple return to the Glory Days of the late 1970s and early 1980s, when Apple was the predominant player in the nascent microcomputer industry. It shouldn't be too difficult, she says, for someone as brave and handsome as you. And, after all, she says with a lustrous smile on her face, Apple has exquisitely designed and colored computers on which resides the industry's slickest and most intuitive GUI, OS X, version Panther, or Tiger, or KittyKat, or something. All of this runs on top of a rock-solid, open source foundation called Darwin, a derivative of the widely praised FreeBSD. OS X Server, OS X's bigger, brawnier brother, is a snap to set up and maintain. And the incredible success of the iPod has put Apple's name on every consumer's tongue and in just about every music lover's pocket.

Now, what’s your plan? How will you succor Good King Jobs? We’ll stop the book for a bit and give you some time to think through what you’re going to do.

OK, time is up.

What you do, of course, is smile regretfully and explain to the hallucination in front of you that you intend to quickly recover from the slight concussion you suffered when a shopping-hardened yuppie sprinting up the aisle in pursuit of the last white 6 gig Nano accidentally hit you upside the head with a purse loaded with a PDA, a cell phone, and her current 4th-generation 60 gig iPod. You shake your head vigorously, the fairy disappears with a *POOF*, and the shoppers resume their mad scramble. Then, after browsing quickly through the software displayed on the shelves and spending some time on the store’s web kiosk, you bail out of the place. You see, you’re a finance guy with an accounting degree working on your CPA, and one day you plan to be a CFO somewhere. You’re looking for a specialized package that can roll up budgets across different company divisions and business units and create a unified financial model of the entire company, something you really can’t do with plain old Microsoft Excel. No one offers such a program for the Mac, so it will have to be the Dell.

Now, why didn’t you let the magic linger a little longer? Why not take a stab at a plan to put Apple back on the throne from which it reigned over microcomputing 25 years ago? After all, everyone is bored with Windows and hates its copy protection. Linux, the only other possible competitor, has all the computing charm of a diesel truck and requires a degree in computer science to install. And everything the Apple Fairy Godmother said is true, and she left out some hard revenue facts besides. In 2003, Apple’s annual revenue hovered around $6 billion. In 2005, Apple sold more than 32 million iPods, and more than one billion songs had been downloaded from its iTunes service by the winter of 2006. Revenues for 2005 were almost $14 billion, with more than a billion of that being profit.

Because such a plan is as impossible to write as was a 1983 strategic plan for Windows that possessed any credibility. In 2003, when writing the first edition of In Search of Stupidity, I noted that Apple had about 3 percent to 4 percent market share of new computers sold worldwide (an observation that carries over to the Apple OS, which still runs only—officially—on Apple boxes). Actually, I was generous; by the time the book went to print, Apple’s share had slipped to less than 3 percent in some analyses. And today, after the iPod’s stunning success, Apple’s worldwide market share of PCs and operating systems is now about…3 percent to 4 percent.

It isn’t as if Apple hasn’t tried to change this. Since Steve Jobs returned to Apple, the company has launched several “switch to the Mac” campaigns, all of which have had little impact on the market. (Apple doesn’t even pretend to try hard in the server market, despite its product’s excellent performance.) Apple has been able to hold onto its installed base, but little more. People seem quite content to connect their Apple iPods to their Wintel machines. Teenagers, always harbingers of new trends and fads, seem happy to rely primarily on Windows-based peer-to-peer networks to “liberate” music via the Internet and break the RIAA’s heart. And many I speak to seem quite put out by iTunes’s digital rights management (DRM) schemes. Apple’s growth is coming from consumer electronics, not computers, and no one on this planet has ever figured out how to take a company from 4 percent market share to industry dominance in the face of an entrenched competitor determined to defend its turf. Apple came close to industry dominance in the late 1970s and early 1980s, but this was before IBM woke up. And despite Microsoft’s creeping development of the senescence that inevitably afflicts all megasized corporations, unless a big meteor hits Redmond and Bellevue, Apple cannot hope that Steve Ballmer and Bill Gates will stand idly by while it lops off significant amounts of market share and money from Microsoft.

Does this mean Apple will eventually leave the PC business? Maybe. One possible scenario is that the company focuses on building more consumer devices, using the Apple OS as an embedded operating system to run ever more sleek and scratch-prone proprietary gadgets. Perhaps Apple eventually merges with Sony or another major consumer electronics giant and folds its technology into the new company. Apple has already provided its Intel-based computers with an easy way to run Windows, so the company could gracefully exit the market with a solution that doesn’t leave its customers stuck running only soon-to-be-obsolete software. Given the pace of hardware advancement and evolution, the entire affair would take only two to three years.

Or maybe the market is changing under Microsoft, and Apple is in a position to take advantage of the chaos that will ensue. The iPod’s success is ushering in a new era of content in which music, film, and, eventually, literature cast off their ties to the physical. Say a permanent good-bye to liner notes and beautiful album covers (two institutions already wounded by the move to CDs). Today’s new music consumers expect to take their music with them, be it on an airplane, in the car, or in their hotel room. iPods are just way stations, disposable transmitters that facilitate the job of providing personalized content 24/7/365 to consumers. And if you want cover art with that music, well, that’s what websites and screen savers are for. And isn’t it nice that those pretty images are also available anytime from anywhere?

In this milieu, what’s needed is a beautifully designed and easy-to-use system that seamlessly handles the job of creating, providing, and managing content for both professionals and the masses, a task that calls for a hardware platform with plenty of oomph. It’s called convergence, and high tech has been waiting years for it to occur. For Microsoft, the problem is that Windows doesn’t seem suited to the task; the system is feature laden but hard to use, loaded with extrusions and encrustations that make the heads of people already defeated by the remote control ache. But anyone who has used an iPod knows Apple can build lean, elegant, easy-to-learn interfaces people like. And its computers are certainly powerful enough to handle content management and transmission. So perhaps it’s Apple that dominates this new world, leaving Windows to its fate as a backroom grease monkey that does the grimy, dirty work of chugging through spreadsheets and grinding out yet more business memos. The consumer market is where it’s at now, after all, with COMDEX replaced by CES as high tech’s major show. And now that Steve Jobs is on the board of Disney, where obviously he plans to sit quietly in the background and provide some helpful advice to the new CEO, we can hope the video iPod and its successors will at least provide us with a steady diet of nice cartoons and the latest Pixar/Disney movies.

There are many other possible scenarios. Perhaps Microsoft buys into several key markets and stitches together a convergence solution that, although not as elegant as Apple’s, offers enough functionality, price advantage, and nonproprietary appeal to succeed in extending Windows into the living room. After all, who wants to bet against Microsoft and all those billions? And Microsoft has already executed such a strategy, with considerable success.

Of course, if you write enough business plans, I suppose one of them will be the right one. But this smacks of hiring a room full of chimps to sit in front of a group of terminals and hack randomly at a business plan software package in the hopes they’ll crank out the next Netscape IPO. The last time this worked was during the Internet bubble, and I think you’ll have to wait a few more years before you can get away with this.

A further paradox awaits strategic plans and planners: as a company grows larger, its ability to plan strategically withers away. IBM and Microsoft are both excellent exemplars of this principle. In the early 1980s IBM ruled the mainframe world and stood toe to toe with rivals DEC and Data General in midsized systems, and the story of the PC’s success doesn’t need repeating. IBM was also the largest software company in the world, with its business products in use in practically every industry on the globe. The company even introduced several desktop software titles, such as an editor, that were initially well received. IBM was in a position to buy any company it needed to help ensure its continued supremacy and indeed, at one time or another, was rumored or known to be interested in buying Intel (in which it held a significant minority stake), MicroPro, Microsoft, Novell, Apple, and many others. Yet today IBM is out of the PC business. Microsoft dominates software. The mainframe market is still profitable, but static. Minicomputers are gone. IBM’s most successful business is now consulting, telling other businesses how to use technology that in many cases IBM no longer produces.