Futurism and Deathwatches
Long-time readers may think I despise futurism. That’s not quite true. Those who believe I despise deathwatches—assertions that “X is dead”—are closer to the truth. In both related cases (after all, any deathwatch is an assertion about the future), there’s a complexity of motives and methods, and I only despise some of them.
We need to think about the future. We need to plan for a plausible range of futures—that’s what planning is all about. Some self-labeled futurists specialize in building ranges of desired futures and seeing what it would take to improve the chances of reaching those futures. That’s admirable, useful, necessary. Some futurists specialize in building sets of possible futures, not necessarily desired ones. If that’s done in order to explain how things could play out, what might help to move toward one possibility rather than another, it’s also admirable.
Problems arise with the kind of futurism that gets the publicity and yields the books I love to make fun of 10 or 15 years after they’re published. This kind of futurism asserts the ability to predict the future, and it’s “the future,” not “one of many possible futures.” It’s almost always a less complex future than the present and the predictions frequently include the magic word “inevitable.” Sure, it’s possible to make some narrow, partial, reasonable-probability short-term and medium-term projections—but if it were plausible for the best experts to make broader predictions, you could point to some 20-year-old sets of predictions that actually worked out. I haven’t seen any such sets, even if you define “worked out” as “appreciably better than flipping a coin.”
That sort of futurism, which is most of what we see in the press, bothers me a lot—the more so because futurists are rarely, if ever, held accountable for their manifest failings. Did Being Digital work out the way Negroponte asserted? Not at all—but Negroponte is still regarded as a guru whose words are always worth following.
I’ve come to despise futurism when it’s used as a handy way to dismiss opinions and preferences, when it’s used to dumb down the real world into claptrap clichés such as “the digital future” (where there’s an implicit “all-” before “digital”), when it becomes a world-weary way to stop thinking.
Deathwatches are a particularly noxious form of dumbed-down futurism. When you proclaim that “X is dead” or “X is dying” you are explicitly telling people who prefer X that they’re wrong and irrelevant, that the future is some great monolithic steamroller and their preferences are doomed to be part of the pavement. Far too frequently, deathwatches are expressions of arrogance and an ugly form of claimed superiority, especially when phrased as “X must die” or with the addition “and good riddance.”
But that’s not always the case…although when it’s not, thoughtful writers will substitute some nuanced alternative to “X is dead.” Tell me “X appears to be diminishing” and I’ll ask for evidence. Tell me “X may, in the future, not be viable for these reasons” and I’ll read the reasons carefully and thoughtfully. In both cases, you’re offering an argument and presumably providing evidence—and that’s quite different from dismissing X as “dead” because you say so. Or, perhaps worse, giving us lists of things that “must die” because you, and the truly important people who are exactly like you, think that alternatives should be used.
Commentaries that X appears to be replacing Y? That’s quite a different thing, at least if it’s done without an “…and Y really should disappear” undertone. If there are specific reasons that Y should disappear, other than “it’s not NEW” or “it’s not what I like” or “it’s not sufficiently digital,” that’s a different issue—but those reasons should be stated. I do some of that myself, and it’s an important part of journalism and nonfiction writing in general. I’m not arguing that people shouldn’t draw comparisons and note when things seem to be declining and why. I’m arguing that people shouldn’t oversimplify, gloat, and make assumptions based on universalizing from their own preferences or deciding that the underprivileged or those with limited discretionary funds simply don’t matter.
That’s enough overall philosophizing (or ranting, if you prefer). In February 2010 (Cites & Insights 10:2), I devoted half an issue to a T&QT Perspective, Trends and Forecasts. It might be interesting to go back to that issue in a few years, since it’s just chock full of offensive deathwatching and “everyone else is like me” futures. I omitted some pieces back then because there wasn’t room. This essay picks up those items and adds newer items—although I’ve been avoiding tagging most deathwatch and futurist items because I find them so aggravating and because commenting on them may be a waste of time. In that issue, I noted the final “disContent” column about deathwatches and that I couldn’t reprint the column at that point. The period of exclusivity for EContent has long since passed and the final “disContent” column appears at the end of this Perspective. (Expect to see other “disContent” columns in some future issues—and, as announced on Page 1, there’s now a limited edition collection of them.)
Items that follow are in no particular order other than (generally) chronological. They are mostly items that had been flagged for the February 2010 perspective and didn’t fit.
The subtitle on this June 25, 2009 Chronicle of Higher Education piece by Jeffrey R. Young is “Scholarly E-Mail Lists, Once Vibrant, Fight for Relevance.” Young quotes T. Mills Kelly at George Mason University saying “the time of scholarly e-mail lists has passed, meaningful posts slowing to a trickle as professors migrate to blogs, wikis, Twitter, and social networks like Facebook.”
Maybe I should mention that Kelly is “associate director at the Center for History and New Media” and made that argument on his blog—mentioning it again on “the technology podcast he hosts with two colleagues.”
When Young looks at some large mail lists (Listserv® is still a trademark for one particular list system) and says they show “signs of enduring life and adaptation to the modern world,” Kelly is “not swayed.” He used to participate in some mail lists, but one shut down and others aren’t doing well. Here comes the bad futurism, the “I don’t, so nobody should” argument:
“As more and more people become comfortable with blogs and Twitter, e-mail lists will become increasingly irrelevant,” he said. “They’re just a much less dynamic form of communication.”
Blogs and Twitter. Blogs as the future of scholarly discussion? Really?
When Young asked about this on his Twitter feed, he got lots of agreement. Thesis: Those who love Twitter will tend to dislike email. Sounds right to me. And CHE editors were ready to title the piece “Death of the E-Mail List,” deathwatch at its finest and most typical.
But then a surprising thing happened. I started to hear passionate defenses of listservs from other people in my digital network, even those who are just as plugged in to the latest trends.
Young was still asking the question only of social networking participants, but even by broadening it to Facebook he got lots of responses saying lists are still useful, some even growing. Some have fewer messages, and that’s not necessarily a bad thing. Some have disappeared: That’s going to happen, no matter what the medium. (Seen any blogs go silent lately? How are your ever-growing networks of Second Lifers doing?)
Lists have changed. They’re not as dominant today as they were in, say, 1999: How could they be? They’re used for different purposes. Much of the ephemeral traffic has moved to Twitter, particularly for topics where 140 characters is all there is to say. But “less dominant” is one thing; “irrelevant” is quite another. I regard email lists as a lot less useful and central than they were a decade ago; that doesn’t make them irrelevant or dying. I like Young’s closing paragraph—except that the “did” in the first sentence should be a “do,” since radio hasn’t gone anywhere:
Perhaps e-mail lists will occupy a space like radios did in the television age, sticking around but fading to the background. Although people are fond of declaring the death of e-mail in general, it remains a key tool that just about everyone opens every day. As long as that’s true, the trusty e-mail list will be valuable to scholars of all stripes.
Ah, the sixties, what memories. No, this was “How is America Going to End?” by Josh Levin, appearing August 3, 2009 in Slate—subtitled “The world’s leading futurologists have four theories.” The world’s leading futurologists? Based on what—track records?
The Global Business Network “answers the same question for all its corporate and government clients: What happens next?” Wow. You hire GBN and you get the answer? I’m impressed. The article goes on to quote one GBN hotshot, Peter Schwartz, saying scenario planning “brings rigor to the inevitably imprecise art of forecasting.” Except that if you’re providing a range of scenarios, you’re not answering the question, you’re providing a range of speculative possibilities.
This article is based on a session during which these forecasters plotted scenarios in which the U.S. could end in the next century. Well, OK: A set of possible scenarios is interesting and almost certainly worthwhile. Indeed, in a 1991 book about scenario planning, Schwartz says professional forecasters (futurists) are not oracles—that they do not predict the future. (So they tell you “what happens next” without predicting the future? That’s a neat trick.)
What do these futurists come up with for the end of the U.S. by 2109? Schwartz offers racial war as one idea. The group comes up with four scenarios they consider plausible:
· Collapse: The country falls apart after a series of catastrophes—so far apart that the national government becomes irrelevant.
· Friendly breakup: We decide that the U.S. is unmanageable and break it up into smaller parts—you know, like the USSR?
· Global governance: That’s right, world government—and another Slate contributor believes that we’ll either have global governance or chaos.
· Global conquest: Some nation conquers the U.S. and the rest of the world.
The caveat: Schwartz and, apparently, the other futurists believe that the most likely scenario is “that the city of Washington will still be a capital of a nation-state on this continent.” In other words, all four scenarios are improbable, with Global Conquest the least likely of all.
The article cites a December 2008 item in the Wall Street Journal where a Russian academic (Igor Panarin) is quoted as believing that the U.S. would dissolve in 2010: A future he’s been predicting for more than a decade and the Russian state media were apparently taking seriously in December 2008. Here’s what this futurist actually predicted: mass immigration, economic decline and “moral degradation” would trigger a civil war in the fall of 2009, the collapse of the dollar, and the breakup of the U.S. in June or July 2010 into six pieces, with Alaska reverting to Russian control. According to this professor, using classified data, California (and Oregon, Washington, Arizona, etc.) should now be part of China, Texas and the South part of Mexico, New York and New England part of the European Union and the Midwest and Plains states part of Canada. Maybe this all happened a few months ago and we just missed it?
Panarin has the right attitude for a Bad Futurist. When somebody asked White House spokesperson Dana Perino about his prediction, at a December 2008 news conference, she responded “I’ll have to decline to comment”—as the article says, “amid much laughter.” Panarin’s reaction? “The way the answer was phrased was an indication that my views are being listened to very carefully.” Sure it was.
Most of this article is about Good Futurism—preparing possible scenarios, providing the argumentation for them and considering implications about steps to be taken. The article links to a “Choose Your Own Apocalypse” tool—including Levin’s collection of “144 potential causes of America’s future death.” Levin posted another “How Is America Going to End” story (August 7, 2009: www.slate.com/id/2224425/device/html40/workarea/3/) with “the apocalypse you chose.” More than 60,000 readers selected their Fave Five, dystopian futures that seemed most likely—call it crowdfuturism, if you like. The five most popular paths to our demise?
· Loose Nukes: Insurgents take control of nuclear weapons in Pakistan or Russia and wipe out the U.S.
· Peak Oil: As oil production declines, alternative energy sources can’t maintain our lifestyle (and so we dissolve the country? really?)—a long-time “hobbyhorse” of various prophets.
· Antibiotic Resistance: Superbacteria wipe us out.
· China Unloads U.S. Treasurys: Thus bankrupting the country and wiping out the national government.
· Israel-Arab War: Erupts and becomes so major that it destroys the U.S.
The most popular cluster (since people could choose up to five scenarios) combined the first four above with “peak water,” in which we no longer have adequate water. Just to keep life interesting, Slate started a Choose Your Own Apocalypse social network—but that link yields an empty page on my browser. I guess it dissolved along with the U.S.?
Here’s an example of a different sort of near-term futurism, one that does not cause my bile to rise. I picked this item up from David Booker on The Centered Librarian, posted January 7, 2010, but it’s based on a TechCrunch article by Eric Schonfeld—with, unfortunately, a link to TechCrunch as a whole rather than the individual item.
Based only on Booker’s summary and comments, this appears to be a list of things that could be important in 2010—which is quite different from Stating The Future. I might argue with details, but that’s argument—I don’t see many flat-out predictions here. Maybe I would if I were ready to page through search results to find the original article, but I’m too lazy to do that (it wasn’t on the first page and I don’t see date-organized archives at TechCrunch).
The ten? Tablets, geolocation, realtime search, Chrome OS, HTML5, mobile video, augmented reality, mobile transactions, Android, and “social CRM”—that is, co-opting social networks for business purposes. This late in the year, it’s easy to suggest that Chrome OS isn’t going to be that important in 2010, but overall, I suspect these are mostly reasonable suggestions.
This, on the other hand, is nonsense—from a source I expect better of, namely ars technica (written by Jon Stokes on January 12, 2010). Stokes notes that the big push for 3DTV at CES in January 2010 was met with yawns. But, Stokes says, it doesn’t matter. “Because of the specific approach that the industry has settled on, consumers don’t have to be bowled over for 3D TV to wind up in every living room. Here’s a look at the current state of 3D TV, and at why it’s coming to a screen near you whether you like it or not.”
First, Stokes says, “Everyone bet on the same 240Hz LCD horse.” That is, not only are all the 3DTVs using active shutter-based glasses, but they’re all LCD sets. Whoops—turns out that 3DTV seems to work a lot better on plasma sets. It’s not just Panasonic (noted in the article); Samsung’s also pushing 3D plasma. Oh, there are AMOLED TVs, but they’re nowhere near commercial introduction at large-screen sizes. Basically, Stokes is saying the least viewable of the current options for 3DTV—where his thought was “I’m going to have a headache if I don’t get these glasses off immediately”—is the winner we’re all going to buy even if we don’t want it. Then there’s a bunch of blather about the desirability of sets that aren’t going to make it to the market.
Here’s where it gets truly strange. Stokes describes plasma sets—not as good as AMOLED, but better than LCD—and notes that there’s nothing here that would encourage most “cash-strapped consumers” to go 3D. But, he says, it doesn’t matter. “We won’t have a choice.” Why? Because 3DTVs can also be used as 2DTVs:
Eventually, when all of the TV panels produced by the panel-makers are 3D-capable due to economies of scale, you’ll have as hard a time finding a non-3D-capable display as you do finding a non-HD display today.
This involves a whole bunch of assumptions, among them that 3D will have long-term legs. That’s not a given. In any case, even if some future panel has 3D capability built in, it’s not a 3DTV unless it includes both the emitter and 3D glasses—and if Stokes is claiming these devices will become universal even if people don’t want them, he’s talking out of his hat. Or some other region.
Who’s Richard Nash? He was a publisher at Soft Skull Press (which I’d never heard of) and became a consultant and, I guess, entrepreneur, pushing a “portfolio of niche social publishing communities” called Cursor. (With a description like that, how can it lose?) He believes in “long-form edited narrative texts” (in other words, books) and the “future of connecting writers and readers, in a Web 3.0 that’s about the filters.” (I’d comment on that, but first I’d have to understand it.) Cursor has the tagline “Transforming the social contract of publishing by restoring the writer-reader relationship to its true equilibrium.” I honestly just don’t get a lot of this—I guess I don’t see book-length writing as always inherently or desirably involving membership communities combining authors and readers.
But that’s me, and in my elder years I may be a bear of little brain. Nash seems to favor long-form text, whether called books or something else, and so do I. He has a blog that uses black sans type on a medium-gray background (difficult to read) for long essays, which surprises me for a supposed publishing expert, and his writing seems…well, far be it from me to criticize other writers.
Back to the item cited above, which actually appeared on January 5, 2010 at GalleyCat. Nash is in prediction mode and a strange set of predictions they are. Take the first: “Most predictions for 2020 that are not actually wrong will happen by 2015 or sooner.” You can’t fault a statement like that—for one thing, sensible futurists give themselves some leeway and, for another, “that are not actually wrong” is a loophole big enough to drive any truck through. #2: “Most predictions for 2020 ungrounded in history will be inadequate.” Huh? For one thing, precious few predictions are “ungrounded in history”—most futurists don’t blow them entirely out of their nether parts—and “inadequate” is a conclusive term whose meaning is, um, inadequate. I felt as though I should eat the fortune cookie at this point.
The rest? #3 seems to be a very long and involved way of saying “Big Publishers and Big Bookstore Chains won’t dominate the landscape in 2020,” and that’s probably right (but I may be misunderstanding the paragraph)—the blockbuster-book syndrome and the “return unsold copies for full credit” model are both economically absurd in the long run. #4 is a remarkably obtuse way of saying “Text-only books will survive; multimedia doesn’t kill the book” (or “long-form text-only narrative,” to avoid the nasty b-word). I think #5 is saying letterpress books may be treasured, but maybe he’s just saying physical books will still be desirable—damned if I can be sure. #6 speaks of “the last days of publishing,” and this seems to mean New York-style Big Publishing House publishing. #7 is one of these “the Golden Age wasn’t so golden” comments that’s true enough, easy to say and rather pointless—and #8 (developing nations are going to produce boatloads of novels) seems almost certain.
I think I may agree with much of what he’s saying. I’m not sure. I’m not at all surprised that one commenter, “howardwhite,” seems to think long-form narrative and novels are only around because of the way printed books work. Oh really?
Did I say something earlier about crowdsourcing futurism? That’s what Pew Internet does with its panel of “experts”—895 “technology stakeholders and critics.” I’m generally steering clear of Pew Internet these days for various reasons, and maybe I should steer clear here as well. Indeed, I’m deliberately choosing not to click through to the detailed responses, but you might find it interesting to do so. For what it’s worth, I tend to agree with the majority on the five issues—that is, I don’t believe Google is making us stupid; I do believe that, in general, the internet is and will be a positive factor for reading and writing; I’m certain successful innovation won’t generally be predictable; it’s likely that most information will flow freely over most of the internet in most nations (with some big exceptions); and it will still be possible, but not easy, to be anonymous online in 2020.
I don’t want to pile on Nicholas Carr, who’s apparently making a career of being shallow and stupid, but generalizing from yourself is never a good thing. Here’s what he said to Pew with regard to the “stupid” question:
What the Net does is shift the emphasis of our intelligence, away from what might be called a meditative or contemplative intelligence and more toward what might be called a utilitarian intelligence. The price of zipping among lots of bits of information is a loss of depth in our thinking.
Bull. Nobody forces you to spend all your time on the net. If you’re incapable of turning off the damn computer and contemplating, don’t blame the computer. In this case, I’ll agree with Peter Norvig of Google, as paraphrased in a February 22, 2010 piece on Discover’s Discoblog:
Because Google makes so much information available instantly, it’s a good strategy for a knowledge-seeker to skim through many offerings first to get an overview. Then the user can settle down with the best sources for a deeper read. He added that skimming and concentrating can and should coexist.
I tend to use Bing rather than Google, but the point’s the same. Skimming and concentrating can and should coexist. They always have and I believe they always will.
This one’s mostly for fun, from What’s Next: Top Trends on January 12, 2010—that blog (by Richard Watson) being an ongoing source of interesting ideas from a “supposed futurist.” It’s just a list of ten items, some silly and some way too true. For example:
4. Say things that are very difficult to substantiate.
5. Be hazy about when things will happen.
7. If any prediction ever comes true make a lot of noise about it.
8. If anything doesn’t come true keep really quiet about it.
Maybe this is time to note “On Futurists” from the same blog (dated April 1, 2010). Watson quotes somebody in an audience: “I love listening to futurists, they are always interesting. And they are always wrong.” He nails one reason for this: “Part of the problem is that futurists seem to believe in only one future. The one they have picked.” In fact, Watson says, there must be more than one future—and, noted later, “we have the power to invent the future we want.” Not wholly, to be sure, but we sure can influence it.
Then there’s this:
The other problem futurists seem to suffer from is that they get ahead of themselves. Quite often their ‘what’ is quite accurate but their ‘when’ is usually way off. Their timing stinks and once again I think that’s because they assume a singular future. They assume, for example, that all newspapers will be e-papers in the future or that all music will be digital. But the world rarely works like that. It’s a marginal world out there and hardly anything is ever 100%...
Yep, although I’d disagree with “Quite often their ‘what’ is quite accurate.”
That’s the title of this March 2, 2010 Slate article by Farhad Manjoo—if you look at the page itself. If you look at the browser header, though, it’s “How to suss out bad tech predictions,” which appears as the tease on the page itself. In any case, Manjoo begins by noting Clifford Stoll’s Silicon Snake Oil and the Newsweek essay based on it. As you may remember, Stoll was (is?) even more of an internet skeptic than I am—to the point of being a knowledgeable denier. On the other hand, he was partly right. “No CD-ROM can take the place of a competent teacher” sounds right to me, and while “no online database will replace your daily newspaper” is effectively wrong, I’m not sure this is a good thing. On the other hand, Stoll had far too many “nevers”—and, by the way, Stoll is one of those rare experts who will admit to being wrong. (On February 26, he said of this essay “Of my many mistakes, flubs, and howlers, few have been as public as my 1995 howler.”) Stoll also said no computer network would change the way government operates—and he may be right about that one. Realistically, Stoll was saying the internet as the “information superhighway” was being overhyped in 1995, and in that he was right—at the time. (Remember when we were all going to have our groceries and pet food delivered, with supermarkets doomed?)
Getting past Stoll, Manjoo offers two sentences that maybe should have ended the article:
Given how wrong they tend to be, it’s generally a good idea to ignore all predictions. The future is unknowable—especially in the digital age, when we’re constantly barraged with new technologies.
But we can’t have that, can we? So Manjoo offers some rules for separating good predictions from bad predictions. “Good predictions are based on current trends.” Well, sure, except that bad predictions take current trends and do linear projections (or, worse, geometric projections) that become laughable. “Don’t underestimate people’s capacity for change.” Maybe, but bad predictions commonly underestimate people’s desire for choice and frequent preference for continuity. “New stuff sometimes comes out of the blue.” That’s true enough…as long as it’s coupled with “but it generally doesn’t sweep away old stuff.” Here’s an odd one: “These days it’s best to err on the side of optimism.” Yep, that’s why the house we purchased last year in Livermore was worth 50% more in 2009 than in 2005 and why the Dow is at 30,000. Oh, wait… In this case, Manjoo’s telling us something about himself, not about good predictions. Indeed, he seems to think that Raymond Kurzweil’s “singularity” predictions (and projected immortality) are “based on current trends, and nothing about them seems really impossible.” Sure.
Some future-related commentaries from a library or librarian’s perspective—or leftovers from the June 2010 The Zeitgeist: There is No Future.
Andy Woodworth posted this on February 24, 2010 at Agnostic, Maybe (still one of the best liblog names I’ve ever encountered). He found himself pondering this question:
Where will information content be in five years? Ten years?
Woodworth decided he couldn’t come up with an answer, doesn’t believe anybody else has a non-speculative answer, and that—if you took “the answers” from a bunch of people, sealed them up, and looked at them in five or ten years, “they would be mostly (if not completely) wrong.” I might respond that there is no (single) answer for the same reason there is no future: There are many answers, and most of them will be partly true, partly false.
Woodworth decides to look at the past—specifically the websites he uses now and where they were five years ago. It’s an interesting list, although it might be even more interesting to get a list of “clear game-changer” sites from five years ago and see which of them are still important or even around.
It’s unfortunate that Woodworth feels the need to add a comment about “the general decline in printed newspaper and periodical readership that has trended during this time period”—since that “general decline” in periodical readership is neither clear nor necessarily true. (Even for printed newspapers, it’s not a general decline; it’s mostly a decline in afternoon newspapers and large metropolitan newspapers.)
There are simply a lot of things going on; too much, I believe, for anyone to grasp in terms of the big picture. And I think it’s time that the librarian community admits that we really don’t know where exactly information content is going to end up in that time. Sure, we can say where it will be in the short short scale of maybe a year, perhaps two, but beyond that is lost to us.
I’d put that differently. There will always be a lot going on—and the big picture is likely to be made up of a lot of little pictures, not well suited to grand statements or generalizations. In fact, the “general decline in printed newspaper and periodical readership” is one of those generalizations better avoided.
This is a reaction to Seth Godin’s deeply ignorant post about libraries (discussed in June) that I missed—it’s from Erin Downey Howerton at schooling.us and appeared January 9, 2010. Some of what Howerton has to say (it’s a reasonably short post, and maybe you should go to schoolingdotus.blogspot.com and read it yourself), noting that text in quoted italics comes from Godin’s post:
“They can’t survive as community-funded repositories for books that individuals don’t want to own (or for reference books we can’t afford to own.)” I have yet to see the person able to afford all the books they will ever need in their lifetime. Or a personal subscription to all the magazines they might want to read, or all the databases they might need to consult… I’m not sure I’d want to live in a world where we only had access to the ideas we could afford to buy.
“The information is free now.” Information is never free. Libraries and librarians work to provide access (using your tax dollars) to hugely diverse, authoritative sources of information in many formats. Yes, there is more access to information than ever before but access is not equal for all…
My last thought: in many communities, the public library is the last truly democratic place. Anyone can come in, anyone can read for free, anyone can meet freely. There needs to be at least one place that is open to all in every community, and the library is as much a place as it is a collection.
I’ve stopped taking Seth Godin seriously, particularly as his blog seems to be turning into a series of fortune cookies, but other people do take him seriously. It’s good that there are thoughtful people like Howerton responding. It’s unfortunate that her blog probably has a small fraction of Godin’s audience.
I’ve now read this ACRL publication (33 pages, published June 2010, available at www.ala.org/ala/mgrps/divs/acrl/issues/value/futures2025.pdf) twice. And thought about what I might say about it. And concluded that this is what I should say:
This is an interesting set of more than two dozen scenarios—not a future but a varied set of possible changes in the future, with informed comments on both the probability (and timing) of each and the impact on academic libraries. It’s well worth reading and thinking about if you’re in academic libraries or care about academia. The price is right and I believe the approach is sensible. My own opinions on the 26 scenarios? Even if I have them, they’re really irrelevant. Go read it and think about it.
A much smaller group of the largest academic libraries did something vaguely similar a few months later, yielding The ARL 2030 Scenarios: A User’s Guide for Research Libraries (www.arl.org/bm~doc/arl-2030-scenarios-users-guide.pdf). The differences? We get four grand scenarios instead of 26 smaller scenarios; the timeline is five years further out; the publication is much longer (92 pages)…and, frankly, I didn’t read that one in full. Nor will I comment on it—or the October 19, 2010 Chronicle of Higher Education piece about it and some of the comments that piece received.
With the preface “Guest Post:”, this appeared on July 1, 2010 on Steve Lawson’s See Also…, perhaps accidentally posted three months late—or perhaps not. It’s set as email from “a person whom I don’t know” with an attachment that might be parody.
The “attached article” begins “Within the next 25 years, libraries will become wholly unnecessary. This is a good thing, not a tragedy” and goes from there. It is…well…it is what it is. There’s a lot of Technological Inevitability here and some first-rate snark. There are a handful of direct comments—and a copy of a much larger discussion from FriendFeed.
Do I regard this as serious library futurism? Probably not. Do I believe the post—and more, I think, the FF discussion—make some interesting points? Probably so. As for my own thoughts, well, I’m part of that FF discussion and will stand by what I said there: Wholly imaginary scenarios aren’t terribly instructive. But I could be wrong.
This one’s by John Dupuis, posted July 27, 2010 at Confessions of a Science Librarian—and it’s mostly about “good futurism,” thinking about future possibilities and looking for surprising implications rather than trying to predict simplistic futures. Dupuis quotes futurist Jamais Cascio—and any futurist who says (in advocating that people craft multiple futures) “Whatever you come up with, you’ll be wrong” has disarmed my snarky instincts right off the bat.
Good futurism is about considering possibilities and thinking through implications—and seeing to what extent we can or should try to create the futures we prefer. The post discusses good futurism; it’s a version of the start of a book Dupuis is working on. I think it’s likely to be worth following, just as this post is worth reading.
No particular order, and too many of these to give extended commentary—except that, as I go through two dozen items, I now find that most of them aren’t even worth mentioning. Deathwatches are as much lazy writing as anything: Say something extreme to get the reader’s attention, whether you have evidence or not. The most absurd may be the one that occupied a full magazine cover: Wired’s “The Web is dead”—but that’s Wired, which specializes in hyperbole.
A word about hyperbole: I’ve had one valued colleague in the library field defend hyperbole as his approach to speaking. I don’t buy it, mostly because too many in the audience won’t be aware that it is hyperbole. Tell people “apps will probably be less important in 2011 than they are in 2010,” and I want to know more. Tell people “apps are dead” and you are, of course, dead wrong—except that a fair number of listeners, who don’t recognize that you’re fond of hyperbole, will go back and shut down their modest app efforts because that’s what you said and you’re apparently worth listening to.
I don’t buy the need for hyperbole. I believe it does more harm than good. This may be one reason I’m not getting speaking invitations.
Here’s a piece by AnnaMaria Andriotis from SmartMoney, appearing on Yahoo! Finance, that might be a lot less annoying without the deliberate advice in the title and the introductory paragraphs. That is, these aren’t things that might have smaller market shares this year than in the past; they’re things you should actively avoid buying. Why? Because they “appear poised for a dip in sales, which could be a prelude to obsolescence.”
Look at the reasoning here: Because we think X might suffer falling sales, which could mean that it’s nearing obsolescence, therefore you shouldn’t buy X. I’ll say this for that logic: A neater summary of self-fulfilling predictions could hardly be stated. “If all of you do what I say, then my predictions will be correct. Therefore, you should do what I say.” Bleh.
It is, of course, also a “the new is always better than the old” piece and touts “revolutionary products” that will replace “old mainstays.” It offers the flat statement that “DVDs, books, newspapers and magazines will continue to lose ground to services like in-home movie rentals and gadgets like the Amazon Kindle”—and urges readers to be part of that shift.
Here’s the list, with my comments—noting that the issue here is not whether some things mentioned may have a declining market share, but whether it’s sensible to tell people to avoid them in 2010.
· DVDs. You shouldn’t buy DVDs because Blockbuster’s in trouble and DVDs (can) cost more than on-demand rentals. What? You want to see a movie or TV series several times? Nobody does that! My own situation: We’ll be buying fewer DVDs in the future…because if we’re going to buy something, it’s likely to be a Blu-ray Disc. I’m guessing this writer thinks we should avoid BD as well.
· Home Telephone Service (that is, landlines). You should avoid them now because “it will probably take a while, but home landlines could become as archaic as the rotary phone.” You get better call clarity on landlines? Doesn’t matter.
· External Hard Drives. What? Even as they’re getting absurdly cheap? Nope. “An up-and-coming alternative might be simpler and save you another transition down the road.” It’s the cloud, of course—even though it’s more expensive (as stated in the article, which overstates the starting price for an external hard disk). This one makes no sense to me at all, except on the basis that “more digital is even better than some digital.”
· Smartphone Also-Rans. By which the writer apparently means anything other than iPhones and BlackBerry units. Oh, and Android phones. I don’t know what to say here, other than that the claimed market shares don’t match what I’ve seen elsewhere for installed bases of phones.
· Compact Digital Cameras. Really? That’s right: You should not buy a compact digital camera—you should buy a digital SLR instead. Even though it will cost several times as much and be considerably bulkier. This is “everybody should have the same preferences” nonsense at its worst.
· Newspaper Subscriptions. “The morning newspaper has been replaced by a growing online media presence.” That’s it: Newspapers are dead. Oh, and 360 magazines shut down in 2009 (as another few hundred began), therefore they’re dead too. And, you know, ebook readers “could increasingly become one-stop sources to access newspapers, magazines and books.” Therefore, you should stop buying newspapers even if you prefer them.
· CDs. “When was the last time you bought a CD or even walked into a record store?” Within the last six months for the first, a while longer for the second. But so what? If I don’t buy them, then you don’t need to tell me not to buy them; if I do, then you have no business telling me not to.
· New College Textbooks. Hey, if I was in college and could legitimately get by with used texts or downloadable books, great.
· Gas-Guzzling Cars. I’m all for telling people they shouldn’t buy gas-guzzling cars because they’re bad for the environment and use up a limited resource. But the pitch here is that gas hogs may become less popular, therefore you should avoid them.
· Energy-Inefficient Homes and Appliances. There are excellent reasons not to buy these things. Popularity isn’t one of them.
The last two? Probably good advice, but for the wrong reasons. The rest? The worst kind of deathwatch: Don’t buy these because we think they might become obsolescent. You know how long something can be obsolescent before it becomes either obsolete or useless? Decades. You know what they call people who don’t buy things that meet their needs or preferences because they’re informed that those things could become obsolescent? Fools.
That’s from Tony Hirst’s blog OUseful.Info, posted August 9, 2009—and it’s a classic “it isn’t working for as many people as we’d like, therefore it’s dead” case. The post says “RSS subscription hasn’t worked in the browser, or on the Windows desktop” and very little more. The first comment notes that the RSS icon is nearly universal in browser address bars (so you don’t need an explicit RSS link)—and Hirst’s response clarifies the problem: “I think you’re wrong: for most people, I’d be willing to wager the feed icon in the browser address is invisible to them…” So what’s really being said is that most people don’t subscribe to RSS feeds. That’s probably true. So what? (The suggestion that people would use RSS more if it was called “follow” or something…I’m doubtful.)
RSS is a classic case of a technology that doesn’t suit everybody but works extremely well for those who want it. Similarly for delicious (which I was late to adopt): It astonishes me when I’m told that people (apparently, all people) mark something they want to read later by bookmarking it in their browser, which I regard as a cumbersome way to do it. Most people don’t use delicious or any of its competitors: That neither makes them dead nor useless.
Back to the comments, “harrym” may have it right here:
It’s…not that surprising that not many people use RSS. It’s a feature for heavy users—which, by definition, most people aren’t.
But, as harrym also says, RSS isn’t dead. It’s just not universal. When Hirst and others talk of how successful social networks are at this sort of thing…well, you know, Twitter isn’t used by most people, FriendFeed by a lot fewer. I’d bet that active Facebook users who participate and follow—let’s say at least once a day—represent a small minority of web users.
Classic deathwatch by Farhad Manjoo, who should (but clearly does not) know better, posted September 10, 2009 at Slate. He’s saying the “days of the dedicated music player have come and gone.” It’s nonsense—particularly when he extends it to assert that all special-purpose digital devices are headed towards being general-purpose portable computers.
As is typical with this sort of thing, Manjoo gets some facts wrong—e.g., the assertion that it’s “now impossible” to get a cell phone that doesn’t have a camera. There’s a reason the Jitterbug is popular with millions of people; lack of extraneous features is part of that reason. More to the point, adding secondary features needn’t distract from a primary feature: The iPod won’t be dead until and unless people stop wanting things that are primarily music players. My cute little Sansa Express was technically not a dedicated MP3 player: Like almost every non-Apple MP3 player that’s ever been produced, it included FM radio and voice recording. (Yes, Apple finally turned these on, but they’re late to the game.) So what? I didn’t use them, they didn’t affect the overall design, they were largely hidden frills. My even cuter and slightly larger 8GB Sansa Fuze can show video and also has that FM radio and voice recording, and I tested just enough to know that the FM radio works extremely well—but for me, it’s a dedicated MP3 player. Period.
Maybe Manjoo isn’t really talking about dedicated devices. Maybe he’s talking about Apple’s apparent need to keep soaking its dedicated followers for new versions of whatever they have. But no, he flatly says all players “will morph into computers,” that specialized devices always turn into general-purpose devices.
Maybe I shouldn’t be surprised that, 10 months later, there have been exactly zero comments on this story. Maybe people are just yawning and turning the page—er—following another link.
That’s not the actual headline on this April 27, 2010 piece from Bloomberg Businessweek, but it’s what Sumner Redstone of Viacom seems to have said, attacking Rupert Murdoch for investing in newspapers. A direct quote: “there won’t be any newspapers in two years.” But then, Redstone is an 86-year-old who says he plans “to live forever” and that “movies and television will be here forever, like me.”
I don’t place serious bets, ever, but if I did, I would gladly bet $10,000 that there will be newspapers in 2013—indeed, I’d bet that more than 80% of the newspapers publishing in 2010 (which is more than 80% of those publishing in 2005) will still be publishing in 2013. Since Redstone’s so sure, I wonder whether he’d give me odds?
A classic. This astonishing screed from Daniel Eran Dilger on RoughlyDrafted Magazine (like a blog but with pretensions) was posted a day too late: April 2, 2010. In 3,000 words, this Apple enthusiast tells us that Jobs “likes to kill old things” (and somehow seems to assert that Apple was the USB leader, an interesting rewrite of history) and Dilger seems to think killing things is a great idea. Oh, and in Dilger’s mind, Apple is more successful than any other company—because, you know, it’s Apple.
Maybe we get enough before his list: “TV killed off the radio” and a string of other nonsense statements. Anyway, he offers a paragraph on each of 19 things that are dead: DVDs, eReaders, “stacks of papers in office meetings,” textbooks, netbooks (a discussion in which he says netbooks have “already killed off the desktop PC”), handheld game devices, brochures, single-purpose industrial gadgets, other tablets, “the credibility of haters” (you need to understand the code: if you say anything negative about Apple, you’re a “hater,” whereas if you denounce everybody except Apple, you’re an informed commentator), Flash et al, Office, TiVo and set-top boxes, idle moments, Chrome OS, Android, Windows Phone 7, in-flight entertainment, Google’s ad monopoly.
In amongst the explanations, you learn that everybody else rips off Apple (the only true originator), you understand that we’re all going to have iPads right away and use them all the time…you get a sense of the mind of a reasonably literate fanboi. (The Office discussion? I’m sure it’s written in English, but I won’t even attempt to make sense of it.) As you might expect, most comments are from people who read this, um, magazine regularly, so they’re mostly supportive. We get the all-too-predictable “hard disks are dead too” item (since, you know, flashram gets cheaper by 50% every two years, while hard disks only get cheaper by…well, by about 50% every year, but never mind). A few people call out the extreme fanboi attitude—but you know how blog audiences are. Oh, and “idle moments” being dead…well, for those who crave constant interruptions, that’s been true for a very long time. For those of us who understand balance, not so much. (Some “dissenters” let me know just how much this is a specific audience, such as one who says Office would only die if Microsoft stopped developing Mac-specific versions. Which probably account for about 5% of Office sales…not that Microsoft has any intention of that, as evidenced by Office 2011.) Encountering the writer’s snide responses to one of the legitimate dissents, in which it becomes clear that “dead” can mean anything from “no longer a monopoly” to “I don’t like them,” is also informative. The man is simply vicious about anyone who disagrees with him, throwing out personal insults as though he’s an untalented version of Don Rickles.
That’s from Jeffrey Pomerantz on June 1, 2010 at PomeRantz. He’s even less ambiguous in the text than in the title. Here’s what he says about the announcement from Facebook of a beta question-answering service:
I say, this is the death of library reference. Not that this Facebook service specifically will kill reference. But the fact that Facebook has jumped on the Q&A bandwagon is a signal that the last nail on the coffin of library reference was put in place some time ago.
There’s more to it than that, but that’s the gist—or maybe it’s that moving from “one player” (when, exactly, were libraries the only way to get questions answered?) to a market means the “one player” can’t rely on that business. In explaining that, he mentions that “IBM is no longer in the hardware business,” which must come as one hell of a surprise to IBM (it’s not in the PC business, but it’s the world’s largest server manufacturer, among other things). Later, Pomerantz says—in boldface—”Libraries need to give up the notion that question answering is a core service of the library.” He thinks libraries should only offer reference services on issues that “only the library can deal with.” Further, he seems to be saying that, in general, libraries can only exist to the extent that they do something nobody else does, or more generally that a business must be a monopoly to succeed. (I may be overinterpreting here, but not by a lot.)
I’ve rarely used reference services at public libraries. Does that make them useless? Well, I don’t use story hours or adult programming or DVDs or romance novels or how-to-do-it nonfiction either, so I guess libraries should stop all those irrelevant things. All of which have competitors or alternate sources.
In practice, good librarians have research skills and resources that most patrons don’t. I’ve seen my wife at work on various projects; her librarian skills make her superior at digging out real answers to tough questions. Crowdsourcing may work for some of that, but not for all of it—and since there have been crowdsourced Q&A services for many, many years, adding FB to the mix is hardly tantamount to pushing libraries out of it.
Comments range from the mysterious to the thoughtful—and Pomerantz returns to say “This was a rant” and he’d rather not engage in a thoughtful conversation. His followup also makes an important distinction: “I’m mostly referring to academic library reference services.” Whoops! Another academic librarian who simply ignores public libraries. He calls it a “hazard of the trade,” and that’s a nice way to put it. In the end, he essentially says he’s right, so there’s no point in discussing it. OK, then.
Oh, sorry, it’s from Wired Magazine (posted July 28, 2010, and appearing in the August 2010 issue), and I shouldn’t be shooting fish in a barrel. Thompson finds that he’s making a lot fewer phone calls, and of course (hey, he writes for Wired) moves directly to “the death of the telephone call.” ‘Cuz, you know, The New Generation Doesn’t Make Phone Calls. At all. Period. Full stop. End of story.
The role of phone calls has changed, thanks in part to email (over the past 20 years), messaging, etc., etc. That’s true. It’s generally a good thing. Heck, I hate phone calls. I make and get very few of them.
But dead? And The Digital Generation Doesn’t Make Them, Ever? Give me a break. Retitle this “Clive Thompson needed a column topic” and you’ve said just as much.
Turns out I was using the “deathwatch” tag in delicious for two kinds of commentary: Those that engaged in deathwatching, and those that comment on deathwatches.
Steve Lawson’s essentially given up on See also… and that may be a shame. Some of his infrequent posts have been wonderfully thought provoking (or just provoking, and provocateurs have their place), such as this one, posted April 16, 2010.
He begins by noting a general problem—one that I’ve struggled with: What to do with posts and tweets about conference presentations when you weren’t actually there.
It’s too tempting to take quick conference blog posts (or worse, Twitter posts) at face value, and assume that
· what was reported is actually what was said;
· the person who said it believes it; and
· the person who reported it approves of the sentiment.
None of that is necessarily true. So it’s tempting to decide simply not to comment at all. I know that Walt Crawford tries to do that.
I try to do that because I’ve gotten too much grief for not doing it. Steve, who’d just finished reading Pierre Bayard’s How to Talk About Books You Haven’t Read (hmm: I should read that), concludes that he doesn’t need to be so circumspect—and takes off from the Dead Technology session at the Computers in Libraries conference, which he didn’t attend. “The mere existence of such a panel prompted people to create their own lists of dead tech and have their own arguments online, and it also prompted people to second-guess the technologies that were reported by eyewitnesses.” There were FriendFeed threads; there were hashtags. I participated in the FriendFeed thread Lawson links to (how would I not?).
And there are interesting points. One participant said Velcro® was dead—and as a technology, that might be true. It’s used all over the place and is likely to continue for decades—but it’s not viewed as a technology any more than print books are viewed as technological devices. It is, of course, as are they. It’s just established technology. Or, as Lawson puts it, “It’s no longer technology; it is lint.”
He discusses truly almost-dead technologies, where the question is why we should care. He uses microcards as an example. I’m not sure they’re actually dead, but they’re pretty close. He also suggests you could fruitfully discuss technologies that you believe to carry the seeds of their own destruction, and thinks the ludicrous “the iPad is dead” might be one such case.
What he’s saying, I think, is that the useful version of “X is dead” is this (taken directly from the most recent comment, which happens to be by Lawson):
For a “dead media” topic to be interesting, it would probably get you to not only think differently about the medium in question, but to think differently about what it means to be “dead.”
Generalize that to deathwatches in general, and I’m inclined to agree. “X is dead” can be simplistic, arrogant or just wrong—but the other questions are interesting. Do they need the “dead” moniker? Only to bring in the crowds.
When you’re a publishing conglomerate like Condé Nast, you can have a mix of extremist and more nuanced sites and publications; for example, ars technica may be the saner cousin of Wired.com. Chris Foresman posted this item “five months ago” (as of October 22, 2010) at ars technica—and Foresman does something sensible.
Yes, the growth in netbook sales has declined considerably. Some analysts claim this is because of the iPad (and, in extreme cases, that “netbooks are dead”). Foresman looks at the historic record and makes a far more probable conclusion: Netbooks have started to saturate their market niche. It’s also important to note that it is simply not possible to keep up 300% annual growth rates over more than a year or two and that netbook sales have not started falling: They’ve just stopped growing rapidly.
Here’s that pesky ars technica again—this time Matthew Lasar “2 months ago” (love the site, hate the dating methodology). It’s all indirect: Because of an agreement on artist royalties for various media, tech bloggers are referring to FM radio as “dying” and “obsolete.” A writer who should know better proposes shutting down FM entirely.
And yet…Arbitron says that radio listening is growing and currently reaches more than 93% of those 12 and over at least once a week. Arbitron may be off, but not by such an extent that “FM is dying” is anything but gratuitous nonsense. The more general point:
We sometimes brand things “obsolete” or “dying” based not on their actual use, but on the fact that something else has come along that we think is (or will be) better.
Hard to argue with that. Oh, but commenters do—somehow believing that FM is being kept alive through various conspiracies. (The background issue is a proposed FCC rule that would force mobile devices to include FM radios—which might be “cumbersome,” but nearly every non-Apple MP3 player has always had them, they’re already part of the chipsets in many cell phones, including iPhones and Symbian-based devices…in other words, it might add $1 or so to the production cost of some devices. Should it be mandated? I’d be opposed—I agree that it’s a ridiculous mandate.) Do I listen to the radio? Only when in the car or when there’s an emergency. Does that count as “no”? Oh, wait…
Here’s Harry McCracken at Technologizer on August 18, 2010, with the teaser “Microsoft, Firefox, Facebook, the Mac—they live on in our hearts.” He’s commenting on Wired’s asinine “Web is dead” story:
I’m not sure what the controversy is. For years, once-vibrant technologies, products, and companies have been dropping like teenagers in a Freddy Krueger movie. Thank heavens that tech journalists have done such a good job of documenting the carnage as it happened. Without their diligent reporting, we might not be aware that the industry is pretty much an unrelenting bloodbath.
Following which he provides a bunch of historical image captures on the death of, well, “practically everything.” Internet Explorer died in 2004. The Mac? June 6, 2005. Linux: 2006—the same year as TV. Office didn’t die until 2007, but Microsoft itself died that year—as did Email. Facebook? 2008, along with BlackBerry, while Firefox and the desktop might have died in 2009. The iPod? McCracken cites the same Manjoo story I did. 2010 hasn’t been much better. The Wii died in February. The netbook in April. OpenOffice in May.
All of these illustrated with segments of actual stories, mostly from semi-reputable sources. The list could go on almost forever, couldn’t it?
This one, from James Ledbetter on September 1, 2010 at Slate, is narrow: “Why is everyone always writing off Netflix?” The lead sentence is a magnificent example of counterhyperbole or drastic understatement: “People who think and write about technology companies for a living are prone to be wrong now and again.” You think?
As Ledbetter notes, Netflix has been killed off or regarded as obsolete more often than most—including the stock analysts who’ve called it worthless. (Really: One analyst called Netflix a “worthless piece of crap” in 2005, and others continue to claim that it’s doomed.) Good old crazy-man Jim Cramer told viewers to sell Netflix when it was $19 a share—and later ate a piece of a hat with Netflix’s stock symbol on it. (When Ledbetter’s piece appeared, Netflix was trading at $130. As I’m editing this, it’s at $168.)
The story offers some useful thoughts on why analysts consistently get Netflix wrong, although I think Ledbetter misses one key element: Netflix grows loyal customers by treating us well. He does understand that “keeps its customers happy” is key to Netflix success—but maybe not just how good it is at that job. Every time I get a sandwich at Subway, I walk by one of the remaining Blockbuster stores with a huge poster telling me why Blockbuster’s DVD-by-mail plan is so superior to Netflix. Which is presumably why Blockbuster is bankrupt. (Comments are interesting. As usual, the few who’ve left Netflix seem to think they should be able to get every movie the day it’s released and don’t care about anything older—which means they shouldn’t be Netflix customers. And, of course, there’s one True Capitalist who doesn’t give a damn about Netflix as a company—only whether the stock price will go up or down.)
“End-ism” is another word for what I’ve called deathwatches or deathspotting: The labeling of things as dead or ended or over. This post is by Simon Waldman on September 4, 2010 at Creative Disruption. Noting some books and magazine articles—the end of work, the end of history, the death of advertising and, to be sure, the death of the Web—Waldman also notes the tendency to pronounce something dead without a lick of evidence, as in Read/Write Web’s pronouncement that Blockbuster’s bankruptcy might mean the end of the DVD. He calls it all End-ism.
There’s two things at play here. The first is simple editorial flourish. After all ‘The Web is Dead’ is much more enticing than the more accurate ‘There’s a big shift in the nature of online behaviour’. ‘The End of Work’ is better than ‘Structural change in employment patterns in the 21st Century, and its consequences.’ [it’s still worth a read, by the way].
But, there is also an underlying thought process going on–what I’ll call ‘End-ism’–which is a dangerously reductive way of viewing the impact of structural and disruptive change within a sector. Whenever a business, a medium or a way of doing things that has been dominant for decades faces a profound challenge, perhaps the most significant in its existence, End-ists will automatically declare it ‘dead’ or ‘over.’
As Waldman notes, “End-ists are also normally rampant neophiliacs” unable to comprehend that “the rest of the world is still devotedly wedded to the old.”
The problem with this thinking is that the existence and growth of the shiny and new doesn’t automatically mean the end of the old.
Preach it, brother—I’ve been saying this for, what, fifteen years now? Not that anybody’s listening. Waldman notes that “end-ism” is “no problem” for blogging (well…), but gets dangerous when it influences actual thinking within business. I’ll go farther: it’s dangerous in speaking and writing and blogging because people listen and believe.
There’s a lot more to Waldman’s piece, more an article than a blog post. Worth reading.
An unwieldy title for an October 16, 2010 post at Samir Husni’s MrMagazine.com. He’s quoting David Granger from the November 2010 Esquire “Letter from the Editor.” I’ll repeat most of the quote—if it’s fair use for Husni, it’s fair use for me:
I lose patience with pundits who prophesy and lobby for the demise of all traditional media in favor of newer forms… [T]he reality is that all of these forms of expression—new and old, digital and analog—are going to continue, and they are going to continue to prosper. The things we create in print and in digital are so completely different from each other that they appeal to fundamentally distinct needs. The war between old and new is a false construct. Nothing goes away. The human need to create is too great, and the human desire to be entertained is too intense to allow any form, whether books or oil painting or even blogging, to disappear.
Emphasis added. I’m inclined to agree—but you already knew that.
The Web is Dead? I’ve read the articles (two of them running in parallel, both annoying and absurd). But why bother? Wired will be Wired, for what it’s worth—which, for me, is the $0 worth of about-to-expire airline miles I “paid” for it. That subscription would have expired by now, except that I also subscribed to a really good publication by Condé Nast, Portfolio—a new business magazine almost as well written as Fortune. That, unfortunately, didn’t make it; instead, I’m blessed, er, stuck with Wired for a while longer. I can assure you that the subscription will not be renewed.
Let’s end this with the first (in five years—I did a few of these in 2001-2005) of an ongoing series of “disContent” columns that originally appeared in EContent Magazine—in this case, the last of a decade’s worth of columns, first published in December 2009. The column appears exactly as it did in the magazine, followed by a brief postscript. Note that this column is now available as the last essay in a limited-edition casebound book, disContent: The Complete Collection, described on Page 1 of this issue.
When was the last time you read some piece of econtent (or print content) proclaiming “X is dead”—where X is something other than a person who’s recently deceased? Five minutes ago? An hour ago? Yesterday?
Unless you’re luckier than most or read only in rarefied circles, I’ll bet it’s been less than a week—probably a lot less. I’ll also bet that X is not dead.
It’s gone beyond cliché to the point that it weakens stories to which it’s attached. Many stories that use it are sloppy futurism, equating “weaker than it was a year ago” with “dead or about to be,” which isn’t the way most things work. Others just ache for attention, such as articles that explicitly say “OK, so X is not dead” after a paragraph or two of sensationalism. I believe the usage itself should die.
There’s an alternative formulation—”X is dead; long live X!” I’m afraid that’s also become a cliché. The first title for this piece was “‘Is Dead’ Is Dead, Or At Least It Should Be,” but that was an “X is dead” in itself—and I’m determined to avoid those in the future. (Mea culpa: I’ve used the alternative formulation.)
How prevalent is this nonsense? OCLC WorldCat shows thousands of occurrences. More than 200 titles using the “X is dead; long live X” cliché include cases where X equals advertising, the book, affirmative action, the revolution, photography, DEC, the church, teaching, economic income, the military, the career, marriage, the party, the sitcom and more—usually, but not always, with that “!” at the end to make it extra-special. Do you want to write a book or ebook that uses exactly the same title formulation as more than 200 others? Really?
But of course, “X is dead; long live X!” is self-negating. Others, where the claim of death is meant to be serious (at least judging from the title), include cases where X equals print, school, good, the NBA, grunge, theater, relativity—and, in a rare double-cliché, Get Over It! Educational Reform Is Dead, Now What?
Still, for all these books (and hundreds more), the real problem is with articles, posts and other econtent…and a profusion of declarations that X is dead even as X is just starting to emerge. More than one article has said “Ebooks are dead” or “The ebook is dead” or “The Kindle is dead.”
More indicative of the pure failure of the “X is dead” theme are proclamations that this or that social medium or social network is dead. Wired (isn’t that dead yet?) has announced that blogging is dead. So have many others (including Dan Lyons, better known as Fake Steve Jobs)—sometimes in blog posts. Someone finds a hotter technology for them—and that means blogs are dead. (EContent used the alternative cliché about blogs in the January/February 2009 issue—few of us are immune from this tired usage.)
Andrew Baron declares Twitter is dead because Tumblr’s better. A number of people have pronounced that Google is dead—or that PageRank is dead. Facebook? Yep, as pronounced on a Wall Street site in December 2008; with the qualifier “in US” earlier in 2008 (we all fled our Facebook accounts, remember?); and of course elsewhere. MySpace, of course, is dead, as are user-generated content and content itself. (Content was killed by community—and community is dead.)
What about TV—or network TV, broadcast TV, or scripted TV? Dead for years now, pronounced deceased almost as frequently as print books (remember print books?). Newspapers? They died years ago. Why, US papers will probably have a mere $36 billion in ad revenue this year. Apple? Dead for years: Look it up. (So is Microsoft. So is Intel. So is the CPU…) Bing yields the improbable “208 million results” for “email is dead” as a phrase search—but even Google’s 63,100 results are a sign that, well, precise search results are dead. Bing, of course, has been declared dead more than once. So has Ning. So have lists. So have short message services and texting.
Mostly, it’s nonsense. Vinyl isn’t dead—turntables and albums represent small but apparently profitable and growing businesses. Magazines aren’t dead or even close. Books? $40 billion net revenue to publishers last year in the US, according to the Book Industry Study Group. Sounds dead to me!
Even AOL isn’t quite dead yet. According to Alexa (as of Aug. 10, 2009), it’s still running 34 million daily visitors and 130 million daily page views.
“AOL is declining” isn’t a snappy headline. It has the virtues of being accurate and making a good lead for an explanation of why that’s so and what it means. Isn’t that better? Or is nuance dead?
By the time I wrote this, I think I already knew that disContent was dead—and as final columns go, this one’s not bad.
Comments should be sent to firstname.lastname@example.org. Cites & Insights: Crawford at Large is copyright © 2010 by Walt Crawford: Some rights reserved.
All original material in this work is licensed under the Creative Commons Attribution-NonCommercial License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/1.0 or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.