It’s been more than a year since the last Old Media/New Media roundup. While some of these items may seem a little dated, I think they’re still relevant. For the first few, you can make up your own narrative.
Yet another Pew—this time the Pew Research Center for the People and the Press—studied that question in a 2007 survey, comparing responses to a similar study in 1989. As the summary points out, we’ve seen the emergence of 24-hour cable “news” as a dominant news source (I added the scare quotes) and the “explosive growth of the internet” should give us all better access to news and current affairs.
You can probably guess the results: “On average, today’s citizens are about as able to name their leaders, and are about as aware of major events, as was the public nearly 20 years ago.” Specifically? A lower percentage could name the current vice president, their state’s governor or the president of Russia. People do much better naming the Speaker of the House and a little better knowing which party controls the House—but, astonishingly, only 68% knew that America has a trade deficit (down from 81% in 1989). (How’s this: 69% knew our VP’s name and only 66% knew who their governor was. Only 37% could identify the Chief Justice as being conservative, but that’s up from 30%.)
Unfortunately, the survey also “provides further evidence that changing news formats are not having a great deal of impact on how much the public knows about national and international affairs.” You may already have heard this one: Looking at news sources, two groups tied as having the highest percentage of knowledgeable people. One group views major newspaper websites. The other watches The Daily Show or the Colbert Report. Nearly tied for least knowledgeable: Those who get their news from network morning shows, local TV news—and Fox News.
There are always items about the (inevitable) death of this medium or that as it’s inexorably replaced by digital equivalents. Annalee Newitz writes “the future of paper” at San Francisco Bay Guardian Online starting with this simple statement: “Twenty years from now, paper will no longer be a tool for mass communication.” There it is: By 2028, all the large-circulation magazines will be gone, all the newspapers will have died, there will be no best-selling print books. Gone, all gone. Newitz reads a press release from a Finnish paper company looking for new uses for paper and concludes, “Print communication is dying out, and with it goes the paper industry.” And she’s unhappy—not because print is in a “fast decline” (clearly, she thinks that’s great and inevitable and doesn’t need facts about that so-called fast decline) but because journalists will disappear with print journalism.
Given the inevitable fast decline of all print media, it might be worth noting a recent National Newspaper Network study, as reported July 22, 2008 at Media Life (www.medialifemagazine.com): “Newspaper readership is up for the second straight year, rising 2.5 percent this spring over last, to 80.5 million readers.” One reason is that newspapers are emphasizing local news—and that’s something they simply do better than anybody else. (Most newspapers aren’t big metro dailies, and the big metro dailies have suffered most of the declining circulation. Truly local papers have, by and large, been doing just fine all along.)
A March 3, 2008 piece by Gene Ely in Media Life gets it right—“They’re back, the snake oil sales folks.” He understands how new media work:
The internet is not going to kill magazines or radio or the local daily newspaper. In so many ways, they are thriving now, despite all the grim talk, and they will continue to thrive alongside the internet even as this sorting out process continues. If anything, the internet serves to enhance what they do well.
None of these media is as vulnerable as the doomsters would have us believe.
People still listen to the radio while driving… People still like to read print newspapers. They will not go away.
The same for magazines. Some are closing, made redundant by the internet, but many are thriving. Magazines do for readers what no other medium can, and likewise for advertisers. As a newer competitor, the internet is forcing magazines to reinvent themselves, which is all for the good….
When evaluating all the new hype over the internet, it’s important to keep several things in mind, and one is that through history newer media have not killed off older media.
TV didn’t kill off radio, radio did not kill off newspapers and magazines and neither of those killed off out-of-home advertising. As it turned out, in fact, the newer media simply increased the size of the pie. They increased consumer engagement with all media, and they gave advertisers more ways to reach those consumers.
Steven Chabot posted “The myth of the digital sublime” on May 8, 2008 at Subject/Object (subjectobject.net). He cites some quotes from Vincent Mosco’s book The Digital Sublime: Myth, Power, and Cyberspace in which any number of earlier new technologies were hailed as changing everything—with “the exact same language that we use to describe the internet.” So, for example, the telegraph transformed our whole human existence, removed causes of misunderstanding and promoted peace and harmony throughout the world. The telephone was the harbinger of a new social order. Radio was “a means for general and perpetual peace on earth.” Television was “a torch of hope in a troubled world” that “will usher in a new era of friendly intercourse between the nations of the earth.” Remember that, in the 1930s, TV was expected to be a great democratic and educational tool. Will the internet decline from “something sacred” to “purely profane” as the younger generation understands that it’s just another medium? Could be.
Finally, an Ars Technica report dated July 14, 2008. Despite all the talk that downloading dooms DVDs and Blu-ray and that P2P undermines commercial sales, spending on DVDs and Blu-ray during the first half of 2008 increased over the first half of 2007—and spending on rentals rose even more. Neither increase was all that significant, but any increase may seem surprising. (The numbers? For the first half of 2008, U.S. only, I believe: $6.87 billion in DVD & Blu-ray sales, $3.9 billion in rentals.)
Just how thick is that supposed long tail, as opposed to the thick head of truly mass media, best sellers and A-list bloggers? A few notes:
That’s the title of Michael Jensen’s article in the Spring 2007 Journal of Electronic Publishing (www.journalofelectronicpublishing.org). Jensen is at National Academies Press and says that 17% of NAP’s income is pretty “long tail-y”: roughly one-third of the items (print books and PDFs) available for sale in 2006 were purchased fewer than 10 times in the year.
The press makes all of its recent publications available for page-by-page browsing. That’s opened up an “incredibly huge audience”—the NAP site gets more than 1.5 million visitors per month. How many of those visitors buy anything? Two in a thousand: 0.2%. Here’s the thing about the true long tail, and how you have to think about it:
This vanishingly small conversion rate (of visitor-to-buyer) seems pitiful. But with that tiny fraction of a percentage, we are still able to sell enough publications online to be essentially self-sustaining, because the raw audience is so huge.
Jensen talks about the “deep niche”: “people who, on any given day, because of a passing fancy, or a new career, or a new experience, are interested in (and potentially willing to pay for) affordable high-quality content.” And he projects what that “deep niche” could mean “when every adult person is online”—which is quite a ways from where we are today:
On any given Wednesday, if 0.001%—one in a hundred thousand—of the English-speaking Web includes people who are newly interested in Elizabethan costumery, that’s still 10,000 people poking around online that day. Perhaps 0.2% of them—or 20—might be willing to purchase a high-value scholarly publication (with illustrations) on that topic.
Even if only 0.01% of them actually make a purchase—one in ten thousand—that’s still one sale per Wednesday, and one sale a day, while not a bestseller, is still enough to be a business. If it were two or three a day, for most publications and publishers, life would be good.
There it is: You need to be able to make a business out of one sale a day—and that’s when everyone’s on the web. That’s the reality of the long tail. It means keeping items available forever, so they’re there when someone suddenly shows an interest. Remember, though: For NAP, what’s actually happening, in many cases, is ten sales in a year—or maybe only one (more than 1,100 of the 15,000 items sold only once in 2006). With PoD and downloads, it may be feasible to keep items available with one-a-year sales.
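Jensen’s back-of-the-envelope arithmetic is easy to verify. A minimal sketch in Python, using his one-in-N figures (the one-billion-person English-speaking web is his hypothetical; the function name is mine):

```python
def deep_niche_sales(population, interest_denom, purchase_denom):
    """One-in-N arithmetic: daily sales from a huge audience with tiny rates."""
    interested = population // interest_denom   # people poking around today
    return interested // purchase_denom         # of those, how many actually buy

# Jensen's example: one in 100,000 is newly interested in a topic on any
# given day; one in 10,000 of those makes a purchase.
sales_per_day = deep_niche_sales(1_000_000_000, 100_000, 10_000)
print(sales_per_day)  # 1 -- one sale a day, "enough to be a business"
```

The point of the sketch is how brutally the two tiny rates compound: ten thousand browsers collapse to a single daily sale, which is why the model only works at all when the raw audience is enormous.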
Jensen’s “current favorite example” of a book for which the “deep niche” works is a 1997 report, Toxicologic Assessment of the Army's Zinc Cadmium Sulfide Dispersion Tests. In 2006, 11,500 people visited it online—and six of them (0.05%) decided to lay out $37.50 for the PDF version or $45 for the print book. (Oh, look: NAP believes it’s reasonable to charge for downloads!)
An interesting report—and a challenge of sorts for people and publishers who can’t make six sales a year, or even one sale a day, work in terms of supporting the effort to produce a book.
That’s part of the title of a March 18, 2008 post at Novelr (www.novelr.com), but I’m also noting a February 8, 2008 post, “Applying the long tail to online fiction.” Maybe it’s a good time to state the “concept” of the long tail: “In a market with near infinite supply…a demand will exist for even the most obscure products.” And, thanks to some sloppy analysis early on, some people came to believe there were examples of total sales for that long tail exceeding sales for the “thick head”—the small number of big sellers. That isn’t the case, at least not so far, and there’s a tricky question that needs to be asked of “a demand”:
Is that demand likely to be enough that it can justify creation and continued provision of the product?
The Jensen article above says yes—if one sale a day (or six a year) is “enough.” I know I’d be reasonably happy if total sales for the books I’ve done through Lulu and CreateSpace averaged one a day—and that I can’t justify doing any more if they’re closer to six a year. But everybody’s different.
The blog in question is about online fiction and blooks. The February post sees two ways that the long tail concept counts. The first one’s fairly obvious: Traditional book publishing filters out most submissions, including some that aren’t complete rubbish—whereas PoD and online publishing eliminate most distribution costs. (The blogger says it also “costs you virtually nothing” to market your work; that’s open to question.) The second: With appropriate collaborative filtering, people who are willing to read online fiction can plow through all the crap out there to find the good stuff. (In a way, that’s a circular argument: People who aren’t willing to plow through all the crap may not be a target audience for online fiction unless it has a brand.) Ah, but the blogger makes the classic .com mistake, one Jensen doesn’t make:
Our target audience shouldn’t have to be just people who are willing to sort through the dross: if that’s the case online writing will forever be in the dark, pushed into the corners of the web by other bigger, better, more instantly gratifying web distractions. If, say 1% of web surfers are actively finding/reading online fiction, the ideal solution shouldn’t be just to find that 1%, but to expand upon it. In other words, we should not find a target audience—we have to create one, so the 1% becomes 5%, or more.
“If we can only get 5%...” That’s compounded by another problem—one that’s characteristic in this blog. Namely, the writer assumes traditional media are dying. “Newspapers are dying out, losing to online news sources…”—and in an unrelated post, “We know that the traditional publishing industry is upon dark times.” Ah, but never mind. We learn that “collaborative filters” are what we need to make online fiction more accessible for others—but, and it’s a big but, you have to get people to look at those filters before they’re of any use. The writer mentions a website, Pages Unbound, that can provide the collaborative filtering. I visited briefly. Wow. Ugly white sans text on a dark-gray background, making it hard to read. A front page that seems more manifesto than invitation—and the claim that readers may need mental adjustment to read web novels. Let’s just say that, as one who might be willing to read online fiction, I’m decidedly not bookmarking this site.
There doesn’t seem to be a ready solution for the collaborative filtering gotcha: Without the thick head, people don’t come to the filter. Anderson claims (incorrectly, I believe) that that’s what killed the original MP3.com (I believe it was mostly the costs of the copyright infringement settlement over My.MP3.com—paying out $200 million will kill off almost any small business). It may be bad history, but it’s still true that most people don’t go to a collaborative filtering system that only includes obscure material.
The second post, “1000 true fans: Making money off your blook,” works off Kevin Kelly’s latest concept/gimmick: the idea that a creative artist “needs to acquire only 1,000 True Fans to make a living.” Yes, it works for some people—and the post seems to assert that it can work for writers. All you need to do is write something brilliant…and find those 1,000 true fans. Easy, right? As one commenter notes, a true niche can work for a musician who knocks out a song a week—but how many authors can write that much? “At best they would offer a book a year, and 1000 people at $8 a pop—well, that isn’t enough to feed the cat, really.” (Our cats obviously don’t dine as well as this commenter’s!)
This is a very different perspective on “the long tail,” and in this case it’s the nearly infinite tail of lesser-known web resources. Barbara Fister posted this on July 18, 2008 at ACRLog (acrlog.org)—and begins with a Science-published report that researchers are actually citing a smaller range of sources despite access to a much broader range of sources. In other words, in science, the thick head may be getting thicker.
Fister is more interested in undergrads because that’s who she teaches. She considers some of the real problems undergrads have in doing research, most of which have very little to do with technology. It’s a long, interesting, carefully thought out post that you really should read in the original; I’m only touching on aspects of it. First, we have Anita Elberse (Harvard Business School) who says that the digital environment actually amplifies the dominance of blockbusters:
She also says that crowds, in their wisdom, gravitate toward blockbusters because they find them more satisfying than less-well-known items, and manufacturers and retailers should therefore put their money on known winners, not on promoting a longer tail. Naturally, there has been much debate about her methodology and conclusions, but it’s all very thought-provoking.
Then Fister considers undergrads struggling with resources in an information-rich environment:
Perhaps their experience with Wikipedia has been that it’s easy and it works better than more obscure alternatives. They have less trouble finding and deciphering the meaning of Wikipedia articles than they do making choices among thousands of scholarly articles and then having to figure out what an article means when it’s written for experts, which they are not. The blockbuster works. Except they don’t learn how to do the hard stuff of interpretation and building new meaning, which is why we torture them in the first place.
But what scaffolding helps them succeed at the hard stuff? And how, amidst the enormously long tail of information that students could use, do they find good sources - the kinds that can be used to build an original and compelling understanding of whatever it is they’re researching? We pay a lot of attention to exposing students to the abundance; not so much with the much harder job of making good choices. Wherever you fall on the Elberse / Anderson debate, we’re making a false assumption when we say more is always better.
I’m leaving out a lot here—trust me, you really need to read the post (it’s just over 1,100 words—not much more than a third of this essay so far)—but here’s the conclusion:
Relying on blockbusters—Wikipedia or Google or USA Today or the book / movie / person everyone is talking about—won’t cut it. But neither will simply assuming they’ll find it in the long tail. We need to think hard about not just increasing our resources and our training on how to use them, but helping faculty help students develop the ability to get to the good stuff. And not just to complete that paper, but to complete themselves as free and thoughtful human beings.
Technically, this topic isn’t really old media/new media: It’s new technology in support of old media. In this case, the new technology is the Espresso Book Machine, not the only self-contained book production system that’s been announced but certainly the one with the most hype surrounding it.
I would have sworn I’d written about this before—and when I look back, I have (at least indirectly): six years ago, in May 2002. At the time, Jason Epstein was making a future wager with Vint Cerf—one I’ll bet they both lose. Epstein, who was apparently already working on the idea behind Espresso, wagered that “By 2010, more than 50 percent of books sold worldwide will be printed on demand at the point of sale in the form of library-quality paperbacks.” Cerf’s take? “By 2010, 50 percent of books will be delivered electronically.” Of course, Cerf can gin up definitions of “books” and “delivered” that might make this true—but in any real sense (that is, 50% of the book market being ebooks), it’s as sure a loser as point-of-sale PoD being half the industry two years from now.
But the Espresso Book Machine does exist—in eleven sites (according to OnDemandBook’s website as of July 30, 2008). Those aren’t all “point of sale”—one’s at the Internet Archive, one’s at Bibliotheca Alexandrina—but it’s a start. How much does it cost? An August 17, 2007 Library Journal article said that the prototypes cost $200,000 but that 2008 models would run around $20,000. The site doesn’t mention prices, but one recent news story suggests $50,000 as an actual price, while another says the machine will be leased rather than sold. “A penny per page” is the typical print cost—and I do believe this is per page, not per sheet. So it’s not there to produce buck-a-copy paperbacks unless they’re very short or sold at a loss.
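At a penny per page, print cost scales linearly with length, which is why buck-a-copy paperbacks only work for very short books. A quick illustration (the per-page rate is the figure quoted above; the page counts are hypothetical examples of mine):

```python
CENTS_PER_PAGE = 1  # the "penny per page" print cost quoted for the Espresso

def print_cost_dollars(pages):
    """Print cost for a book of the given length, in dollars."""
    return pages * CENTS_PER_PAGE / 100

# A 100-page book just hits the buck-a-copy mark; a typical 300-page
# paperback costs $3 before anyone makes a dime.
print(print_cost_dollars(100))  # 1.0
print(print_cost_dollars(300))  # 3.0
```

In other words, the machine’s economics favor short-run availability of full-priced titles, not bargain-bin pricing.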
According to a June 20, 2008 story, Blackwell will be installing Espresso Book Machines in its 60 UK bookstores—and they should be able to print not only the 200,000 public-domain titles previously available, but also around 600,000 in-copyright titles through a partnership with Lightning Source.
The idea of moving short-run book production directly into the bookstore (or even a library) makes sense. Will it scale? We shall see. Will it represent half of the book market in 2010? I can’t imagine how.
The last time I wrote about ebooks and ebook readers (Thinking About Kindle and Ebooks, C&I 8:4, April 2008), the commentary was long and more disjointed than usual. One excuse for that was that I hadn’t dealt with ebooks and ebook readers since October 2006. But there’s also a reason: Commentaries on Kindle and other ebook readers also tended to be commentaries on reading itself, aided considerably by Steven Levy’s silly Newsweek article, “The future of reading.”
This time around, I’m going to focus on items primarily concerned with ebook readers (primarily Amazon’s Kindle but also the Sony Reader). I’m saving items that use the Kindle as a springboard to discuss reading itself and combining them with other stuff—e.g., reactions to 2007’s NEA alarmism about reading, the Slow Reading non-movement, and stuff happening at the New York Times. In the fullness of time, expect a Perspective with a title like “Writing about Reading.” There’s plenty of source material already, but it could use more time to ferment…
And for a change I’ll keep this very short by using each item as a bullet rather than a subsection and avoiding lengthy quotations. Here, then, a few notes along the way:
• Evan Schnittman wrote “Looks like a million to me” on June 9, 2008 at the OUP Blog (blog.oup.com). He believes the Kindle and Sony Reader will sell one million units in 2008—and, while he calls the prediction “pretty outlandish,” he also believes it’s substantiated—because Prime View International, the maker of the e-ink screens both devices use, says it expects module shipments to reach 120,000 units per month in the second half of 2008 (and that it’s currently shipping 60,000 to 80,000 units per month). The source article also says 60% of those units go to Amazon and 40% to Sony. Oh, and Schnittman believes that 10 million ebooks will be purchased for the two devices this year. (That estimate depends on a calculation just riddled with stated assumptions.) Of course, he also quotes a music industry executive as saying that more than half their revenues now come from digital music—which, if true, must be a very unusual company, since overall music is still at least 80% CDs.
• Roy Tennant predicted “The Kindle goes down in FLAMES” in a Digital Libraries blog post that same day, partly commenting on the OUP piece in a calm, reflective manner: “All I have to say about this is: ‘are you on drugs?’” It’s fair to say Tennant sees a future for ebooks—probably read on multipurpose devices like iPods and smart phones—but not for the Kindle (and, like me, can’t imagine why Amazon won’t release sales figures if it’s a hit). Some commenters agreed, some disagreed (sometimes vehemently).
• Jason Griffey tried to comment on Tennant’s post, but his comment was too long for Library Journal’s comment system (I was going to add “clunky,” but that’s the whole LJ blog system, not just the comments). So he posted it on June 12, 2008 at Pattern recognition (www.jasongriffey.net/wp/), and by then Griffey’s very own Kindle had arrived. He’s pro-Kindle and makes an analogy with the early iPod. He thinks the Kindle is “great for reading” and gains a lot by coming from Amazon. Steve Lawson makes an excellent point in the comments: Even if the first Kindle isn’t doing great business (nobody knows since Amazon has a Google-like secrecy on the subject), that doesn’t mean the Kindle will never work well.
• There will apparently be new Kindles in October 2008, even as Amazon cut the current model’s price. A CrunchGear post with rumors of new models also mentions a May 2008 analyst estimate that some 10,000 to 30,000 Kindles had sold by then—along with an estimate that Amazon would sell $400 to $750 million of them by 2010. A different analyst projected that global ebook sales at Amazon could reach $2.5 billion by 2012—but that’s based on a growth pattern matching digital music, which may be a bizarre assumption.
• Sony hasn’t given up on the Reader. It’s released a firmware upgrade that allows the current Sony Reader PRS-505 to “reflow” PDFs and use the ePub format without DRM. A Sony spokesperson called the Reader “an open device.”
What does it all mean? I have no idea. The least plausible projection I see is the idea that ebooks will succeed along the same path as digital music. (Incidentally, downloaded songs and mobile-phone ringtones—the real money in digital music—still represent less than 20% of global music sales in 2007.) Could the two big ebook readers sell a million units through the end of this year? I don’t see why not—and I’ll almost guarantee that if Sony ever sells half a million Sony Readers, we’ll hear about it!
Cites & Insights is sponsored by YBP Library Services, http://www.ybp.com.
Opinions herein may not represent those of PALINET or YBP Library Services.
Comments should be sent to firstname.lastname@example.org. Cites & Insights: Crawford at Large is copyright © 2008 by Walt Crawford: Some rights reserved.
All original material in this work is licensed under the Creative Commons Attribution-NonCommercial License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/1.0 or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.