Cites & Insights: Crawford at Large
ISSN 1534-0937
Libraries · Policy · Technology · Media

Selection from Cites & Insights 11, Number 6: June/July 2011

Trends & Quick Takes

Time for another Random Roundup, part of an ongoing effort to offer quick notes on interesting things. When I did a catch-up edition of T&QT in October 2009, I noted that—with my switch in March 2009 from printing leadsheets for interesting source material to tagging items in Delicious—I was up to 50 items in September 2009 tagged “tqt” (the tag for this section) out of 643 items altogether, far more items than I ever had “on hand” prior to Delicious.

If you’ve been keeping track, you’ll be aware that I gave up on Delicious after Yahoo! basically issued its death warrant and, after asking for advice and doing some exploring, switched to Diigo, taking my Delicious-tagged items with me (evaluating many of them along the way). I’m not thrilled with one specific aspect of Diigo (the alphabetic list of all tags is clumsy to use because it’s not a list), but otherwise it’s just fine—but boy, do I have a lot of stuff tagged, even after wiping out a hundred items in one recent essay.

The count as of April 21, 2011: 1,294 items in all. Take away GBS (Google Book Settlement, which I may scrap entirely) with 230, and you still have more than a thousand, including 106 tagged tqt. So, well, this roundup in an issue full of roundups is another attempt to do a little catching up, five thousand (or so) words at a time.

Idealism, Firefox and HTML5

There’s an odd January 25, 2010 item at ReadWriteWeb, written by Sarah Perez: “Will Idealism be Firefox’s Downfall?” The gist: YouTube was moving to support HTML5 so videos could be viewed without Flash—but the list of browsers supporting the new option excluded Firefox. Why? The new YouTube version uses H.264 as a codec (compression-decompression format—like MP3), which is patented and not royalty-free. To support H.264 in Firefox, Mozilla would need to pay $5 million a year to MPEG-LA (a licensing group)—as would anybody trying to introduce a Firefox variant or other open-source browser.

As an update, Microsoft has released a Firefox plugin that lets Windows 7 users play H.264 video through the operating system’s native support.

There’s an interesting John Herrman article from February 3, 2010 at Gizmodo: “Giz Explains: Why HTML5 Isn’t Going to Save the Internet.” It covers some of what makes HTML5 interesting, a few reasons why it’s not a miracle cure and more—although as I read it, I missed one little item: that HTML5 is years away from being a fully adopted standard, much less a fully implemented one. Still, interesting reading.

The Subscription War

It’s something I’ve commented on before, and it’s good to see it noted at Gizmodo in this January 18, 2010 item by Brian Barrett: “The Subscription War: You’re Bleeding to Death.” After applauding the wonders of his smartphone, the “2,454,399 channels on my HDTV” via broadband and his ability to “access the internet from a freaking airplane!” he gets to something that doesn’t seem to concern much of anyone:

A well-equipped geek will, in our research, have a subscription and service bill total of between 200 and 750 dollars a month.

For many of us $200/month seems high and $750/month is simply out of the question. (For the median U.S. household, that would be close to a fifth of the household income.)

How does he arrive at the total? There’s a graphic spelling it out:

·         $80 to $120 for unlimited voice, text and data on one smartphone.

·         $20 to $60 for a netbook/smartbook plan with 10MB to 5GB data.

·         $0-$60 for “slate” (iPad) connectivity.

·         $25-$145 for broadband, noting that $25’s only going to get you 1.5Mbps.

·         $32-$130+: Cable. (Here, you can do a little better, if you don’t mind “limited basic” coverage, that is, just the local broadcast channels and a shopping network or two.)

·         $0-$50: Landline phone with unlimited domestic calls.

·         $20-$60: 3G dongle to add mobile internet to your notebook.

·         $0-$43: WiFi hotspots.

·         $9-$21: Netflix with streaming and one to three discs.

·         Plus another $22-50, prorated, for annual subscriptions to TiVo, Xbox Live Gold, Hulu rentals, Flickr Pro and turn-by-turn GPS navigation.

Before you say “but you don’t need a landline and the iPad can connect for free with WiFi,” note that $0 is stated as the base price in both cases.

That's right: if you want to stay even close to fully connected, you're expected to cough up nearly $1,000 a month. Not for hardware. For fees. And that doesn't even include niche services like Vimeo and Zune Pass, or one-off purchases like eBooks or iTunes downloads. Or, god forbid, food and shelter.

Barrett cites fragmentation as the problem—but I’ll suggest that megasubscriptions might wind up being even more expensive. Of course, the people who matter presumably make so much money that $750/month is irrelevant. I’ve tracked our costs—for limited basic cable, three-disc Netflix (with Blu-ray option), emergency cell phone (Virgin Mobile prepaid), AT&T combo of 1.5Mbps DSL and unlimited-U.S. landline—and we’re at about $125, but anyone who considers themselves Connected would call us Luddites or worse. And, you know, $125/month is still a sizable sum if you have limited income. (Add in our newspaper and magazine subscriptions, and I suspect we’re close to the $200 mark.)

There’s a similar take in Nicholas Carr’s “Information wants to be free my ass” and “…continued,” the latter on February 9, 2010, the former taking off directly from the Gizmodo piece. In the followup post, Carr quotes Jenna Wortham in the New York Times reporting Census Bureau figures: Americans averaged $903/year in 2008 on “services like cable television, Internet connectivity and video games,” a figure expected to reach $997 by the end of 2010—and that figure excludes cell phones and data plans. Indeed, the average combined landline/cellular phone bill was itself up to $1,127 in 2008.

Why the internet will fail (from 1995)

Here’s a truly odd one, posted at Three Word Chant! on February 24, 2010. The writer links to Clifford Stoll’s “The Internet? Bah!” from 1995 at Newsweek—an essay that was clearly Stoll’s commentary, not Newsweek’s opinion, and never said the internet would fail, only that it “isn’t, and will never be, nirvana.” Here are the writer’s “two favorite parts” that apparently show how idiotic Newsweek was:

The truth is no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher and no computer network will change the way government works.

Yet Nicholas Negroponte, director of the MIT Media Lab, predicts that we’ll soon buy books and newspapers straight over the Internet. Uh, sure.

The writer notes, “If Newsweek is as good at maintaining the journalism industry as they are at fortune telling, they should be around for a long time.”

Well…consider what Stoll actually says in those two segments and the reality in 1995 and beyond. I would argue that Stoll, for all his deliberate contrariness, is right in all cases:

·         Online databases do not really take the place of a well-edited daily newspaper, even though many people use them as a substitute.

·         It is absolutely the case that “no CD-ROM can take the place of a competent teacher,” and it’s fair to say that massive intrusion of technology into education has not, so far, yielded educational nirvana.

·         While computer networks may change the way government works in many details, I’m not sure there have been fundamental changes—and if there have, they certainly haven’t been all to the good.

·         Negroponte’s predictions were ludicrous for 1995. He predicted short-term changes that just didn’t happen. Indeed, where newspapers are concerned, they’re still fairly ludicrous.

Here’s an interesting paragraph from Stoll’s contrarian essay (he was pushing Silicon Snake Oil at the time, and that was a badly flawed book—although probably not as badly flawed as Negroponte’s Being Digital):

Then there are those pushing computers into schools. We’re told that multimedia will make schoolwork easy and fun. Students will happily learn from animated characters while taught by expertly tailored software. Who needs teachers when you’ve got computer-aided education? Bah. These expensive toys are difficult to use in classrooms and require extensive teacher training. Sure, kids love videogames—but think of your own experience: can you recall even one educational filmstrip of decades past? I’ll bet you remember the two or three great teachers who made a difference in your life.

I’m guessing Stoll feels no need to apologize for that paragraph 16 years later. I suggest he’s still partly right about social networks not fully substituting for face-to-face conversations.

Linked Data: my challenge

That’s the title of a March 22, 2010 commentary at electronic museum by Mike Ellis, an Eduserv employee who was “head of Web for the National Museum of Science and Industry, UK” from 2000 through 2007. Ellis wants to open up data for broader use—he’s blogged about it, written papers about it, spoken about it. “I’ve gone so far as to believe that if it doesn’t have an API, it doesn’t—or shouldn’t—exist.” And he finds himself “sitting on the sidelines sniping gently at Linked Data since it apparently replaced the Semantic Web as The Next Big Thing. I remained cynical about the SW all the way through, and as of right now I remain cynical about Linked Data as well.”

Why? For some of the same reasons I’m skeptical of Web 3.0 or the Semantic Web or Linked Data as being revolutionary in any real sense, even though it (they?) can and will be useful within some projects.

Linked Data runs headlong into one of the things I also blog about all the time here, and the thing I believe in probably more than anything else: simplicity.

If there is one thing I think we should all have learned from RSS, simple API’s, YQL, Yahoo Pipes, Google Docs, etc it is this: for a technology to gain traction it has to be not only accessible, but simple and usable, too.

Since Ellis runs his blog with a CC BY-NC-SA license (and I continue to believe BY-NC is close enough to SA to count), I’ll quote “how I see Linked Data as of right now” in full:

1. It is completely entrenched in a community who are deeply technically focused. They’re nice people, but I’ve had a good bunch of conversations and never once has anyone been able to articulate for me the why or the how of Linked Data, and why it is better than focusing on simple MRD approaches, and in that lack of understanding we have a problem. I’m not the sharpest tool, but I’m not stupid either, and I’ve been trying to understand for a fair amount of time…

2. There are very few (read: almost zero) compelling use-cases for Linked Data. And I don’t mean the TBL “hey, imagine if you could do X” scenario, I mean real use-cases. Things that people have actually built. And no, Twine doesn’t cut it.

3. The entry cost is high – deeply arcane and overly technical, whilst the value remains low. Find me something you can do with Linked Data that you can’t do with an API. If the value was way higher, the cost wouldn’t matter so much. But right now, what do you get if you publish Linked Data? And what do you get if you consume it?

Ellis is one of those who should be deeply involved with Linked Data, but finds that he isn’t. My own take is that expecting ordinary people (including ordinary scientists) to understand triples and turn Word or Excel documents into proper linked data is expecting a lot—too much for most of us. Here’s what Ellis wants to see:

1. Why I should publish Linked Data. The “why” means I want to understand the value returned by the investment of time required, and by this I mean compelling, possibly visual and certainly useful examples

2. How I should do this, and easily. If you need to use the word “ontology” or “triple” or make me understand the deepest horrors of RDF, consider your approach a failed approach

3. Some compelling use-cases which demonstrate that this is better than a simple API/feed based approach

The very first comment (of 28 in all, including Ellis’s responses) may show how far we are from Linked Data making sense to, well, people like me:

I’m still a fan of the original guidelines for Linked data – paraphrased:

Give each thing a ‘permanent’ page on the web.

Put information about that thing on that page.

Put connections from that page to other pages to make it more easily understood.

Wikipedia does this excellently, without having to think about RDF/SW.

For me though, a SPARQL endpoint containing data is not the same as having the data on the web.

The metaphor that works for me is that SPARQL endpoints are to Linked data, as access to an undocumented SQL server are to CSV files.

To which I say: Huh? Wikipedia is in some sense linked data? Whuh? The last two one-sentence paragraphs certainly don’t help. Ah, but the final paragraph becomes ever so much clearer as explained by Richard Watson:

This could be true, but this is the whole point of ontology. As long as someone uses an ontology correctly, or devises their own in a meaningful way, then the SPARQL endpoint is documented, in as much as it can be asked to describe all its own concepts.

I believe Richard Watson thinks he has explicated something in a way that people with strong computing, information and technology backgrounds who aren’t already part of the linked data community—oh, say Walt Crawford—will find useful. Another modest little 900-word comment attempts to respond to Ellis’ challenges and says “it’s really rather simple.” It’s so simple that after reading the 900 words, I began to doubt that I understood the English language, but Ellis thought it was useful.

One commenter assures us that there will be—or, actually, is—a service that will allow you to take CSV files, click a button, and have them exported as proper “RDF based Linked Data.” Ellis asks, well, given 10 CSV sources from 10 different places all referencing “John Smith,” how will I know whether they’re all talking about the same “John Smith”? The response is that “Virtuoso’s reasoning will handle the data reconciliation for you via conditional application of rules context for ‘co-reference.’” I’m not entirely satisfied with that answer.

We Live in the Future

I’m a little abashed about this one—only because I noted in a comment on the post that I’d eventually be mentioning it in Cites & Insights, even though it might take a few months. The post with the title above, by David “Medical Librarian” Rothman, appeared March 23, 2010—and I suppose June 2011 is “a few months” later, if you interpret “a few” loosely.

It’s a neat set of illustrations and comments on how far we’ve come in the past few decades, with a link to the complete set of slides. Rothman begins with shots of the IBM System/3 equipment his father used to announce his birth—via 96-column punch cards (I don’t remember ever using 96-column cards) spelling out “BOY” in punched-out holes. He then notes “the cutting-edge of MEDLINE” for most users in 1972: the classic TI Silent-700 terminal with a dial-up modem (the great cups on the back of the terminal) operating at 10 characters per second—although a few people had blindingly fast 30cps access. At the time, about 150 libraries had MEDLINE access for $6/hour.

Please understand how amazingly fast people thought 30 characters/second was. Please also understand how that compares to today’s speeds:

That’s followed by a chart showing some download speeds in characters per second. Rothman’s “typical cable modem” is a whole lot faster than what I have at home (DSL, effectively about one-quarter as fast), and FiOS and “TWC Wideband” are a whole lot faster yet. I’m pretty happy with 1.5Mbps (roughly 200,000 characters per second) downloading—a mere 20,000 times as fast as most 1972 speeds. Oh, and PubMed’s available for free to everybody.
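The speed comparison above is simple arithmetic; here’s a quick sketch using the figures from the text (10 characters per second in 1972, a 1.5Mbps DSL line, 8 bits per character), ignoring protocol overhead:

```python
# Compare a 1972 dial-up terminal with 2011-era DSL, both in characters/second.
TERMINAL_CPS = 10        # TI Silent 700 over a 10 cps dial-up modem
DSL_BPS = 1_500_000      # 1.5 Mbps DSL line
BITS_PER_CHAR = 8        # one byte per character, ignoring protocol overhead

dsl_cps = DSL_BPS / BITS_PER_CHAR    # 187,500 characters/second
speedup = dsl_cps / TERMINAL_CPS     # 18,750x: the "mere 20,000 times" above

print(f"DSL moves {dsl_cps:,.0f} cps, about {speedup:,.0f}x a 1972 terminal")
```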

Then there’s the discussion that got me involved, as it has elsewhere: mass storage. He shows a 1979 ad with really cheap hard disks for the time: 80MB for $12,000 or 300MB for $20,000—or about $667 per megabyte, equating to about $1,900/MB in today’s dollars.

What will $1,900 buy you today in old-fashioned rotating hard disk technology in 2011? At this writing, you can buy name-brand 3TB external hard drives (including cases and power supplies) for $170, so $1,900 will buy about 33 terabytes of storage. That’s 33 million times as much storage per dollar, over the course of three decades. Rothman makes a comparison to flash drives, where at the time he wrote the post a name-brand 4GB flash drive went for $18, which is spectacularly cheaper per megabyte than in 1979. Still, in 2011, that 4GB flash drive probably costs at least $8, or $2/GB, which means the gap between flash drives and old-fashioned hard disks continues to be enormous, given that the Western Digital external drive noted above comes out to six cents per gigabyte.
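The cost arithmetic above checks out; this sketch simply restates the text’s figures ($1,900 per megabyte in 2011 dollars for 1979 disks, $170 for a name-brand 3TB drive in 2011):

```python
# Hard-disk cost then and now, using the figures in the text.
PRICE_PER_MB_1979 = 1_900     # ~$667/MB in 1979 dollars, ~$1,900/MB in 2011 dollars
DRIVE_PRICE_2011 = 170        # name-brand 3TB external drive, 2011
DRIVE_MB_2011 = 3_000_000     # 3 TB expressed in megabytes

mb_per_dollar = DRIVE_MB_2011 / DRIVE_PRICE_2011      # ~17,600 MB per dollar in 2011
budget_tb = PRICE_PER_MB_1979 / DRIVE_PRICE_2011 * 3  # ~33.5 TB for $1,900
ratio = PRICE_PER_MB_1979 * mb_per_dollar             # storage per dollar, 2011 vs. 1979

print(f"$1,900 buys ~{budget_tb:.1f} TB: ~{ratio/1e6:.1f} million times as much per dollar")
```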

Rothman also tried to show how much space you’d need to store a laptop’s worth of data using 1973’s IBM 3340 direct access storage units, one of the most important hard disk developments in computing history. It begins to be ludicrous. I remember that several of us gave up on calculating the space, energy and cost requirements for one terabyte of hard disk storage in 1972 terms; let’s just say that companies didn’t casually consider data stores that large, especially not ones fully online. (A March 22, 2010 post at Holy Kaw consists of a photo of a non-cartridge 200MB hard disk pack from 1970—what looks like a dozen platters, pre-Winchester, probably 12” diameter or larger and probably incredibly vulnerable.)

We do indeed live in the future. It’s worth remembering that sometimes.

Postscript: The Bandwidth of a 747

A little postscript: Peter Murray had a 2006 post that referred to the old internet adage, “Never underestimate the bandwidth of a station wagon full of tapes.” I wondered about the effective bandwidth of a 747 full of Blu-ray discs (yes, they were around in 2006). Murray did a well-sourced set of measures, concluding that the effective bandwidth of such a 747, flying from JFK to LAX at maximum rated cruising speed, was 37,034.826 Gb/s—that’s 37 terabits per second. It got to be an interesting conversation in the comments, and “Steveo” updated the calculations on June 2, 2010, this time using an Airbus A380-800F in cargo configuration—with an even more impressive rate: 8.88 petabits per second, or 9,098 Tb/s. Oh, and if you used dual-disc slim cases, double that: 17.77 Pb/s. Incidentally, all the cases would be in cartons, so this isn’t just 374 million Blu-ray discs rammed loose into a cargo plane… (Unclear: whether 374 million Blu-ray discs would exceed the weight capacity of an A380. Probably so: the maximum payload is 330,000 pounds, and a Blu-ray disc weighs some appreciable fraction of an ounce. If a Blu-ray disc weighs an ounce, you’d only be able to ram 5.28 million of them into an A380 and still take off; if half an ounce, 10.56 million. So, well, that brings the bandwidth down to somewhere between 131 and 263 Tb/s—still impressive, but a little less so. In jewel boxes? Probably down to no more than 50 Tb/s…)

A second DLTJ post, “Bandwidth of Large Airplanes,” on June 8, 2010, noted an error in Steveo’s calculations and did new calculations for the Boeing 747-400F, Airbus A380-800F, and Boeing 747-8F, the freighter 747. Using slim jewel cases, Murray arrives at 176 Tb/s, 302 Tb/s and 218 Tb/s respectively—still three orders of magnitude greater than the fastest data transfer over a network that had been public at that point, a data flow of more than 110 Gb/s. At that point, I got involved again, with a Walt at Random post, “Bandwidth of Large Airplanes, Take 2,” thinking about 2TB internal hard disks, using 100-disc spindles (with locking covers) rather than slim jewel boxes for Blu-ray discs, and wondering whether weight or bulk limited the capacity. I did real-world measurements of the weight of a 100-disc spindle (this assumes that Blu-ray discs weigh as much as CD-Rs, which may not be true) and used Western Digital’s own specs for the Caviar Black 2TB internal hard disk—and, to simplify calculations, assumed 10-packs of the hard disks wrapped in plastic with no real additional weight.

My conclusions? Weight is indeed the limiting factor (by about a 2.3:1 ratio for Blu-ray discs, about 11:1 for hard disks)—and the bandwidth of Blu-ray discs on a 747 is about 232 Tb/s, with 2TB hard disks supporting a mere 163 Tb/s.
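The weight-limited calculation can be sketched roughly as follows. The payload, drive weight, and flight time here are my own round-number assumptions for illustration, not the exact figures Murray and I used, but they land close to the 2TB-drive result above:

```python
# Weight-limited "sneakernet" bandwidth of a freighter full of hard disks.
# All inputs are rough assumptions, not the exact figures from the posts.
PAYLOAD_KG = 134_000      # assumed max payload of a 747-8F freighter
DRIVE_KG = 0.74           # assumed weight of one 3.5-inch internal drive
DRIVE_TB = 2              # capacity per drive
FLIGHT_S = 5 * 3600       # roughly five hours, JFK to LAX

drives = int(PAYLOAD_KG / DRIVE_KG)         # how many drives fit, by weight alone
total_bits = drives * DRIVE_TB * 1e12 * 8   # terabytes -> bits
tbps = total_bits / FLIGHT_S / 1e12         # effective terabits per second

print(f"{drives:,} drives in the hold: about {tbps:.0f} Tb/s")
```

Swap in a heavier per-unit weight for spindled Blu-ray discs, or a bigger per-drive capacity for 3TB disks, and the same three lines of arithmetic reproduce the other figures in this section.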

But weight—wait! You can now buy 3TB internal hard disks, and I’d guess they weigh the same as last year’s 2TB hard disks (but have greater areal density). That would make the hard disks the bandwidth champion, at an effective 245 Tb/s.

I believe we’ve communally established that a 747 configured for freight can provide a bandwidth of at least 160 Tb/s, considerably more than 1,000 times as fast as the highest known network throughput. As a couple of commenters have noted, however, the latency really sucks. Still, if you need to move really big quantities of data from one place to another—say, 500TB at a time—Blu-ray discs and big hard disks still look pretty good. As Eric Lease Morgan noted in a comment, when a person from Google came to visit Notre Dame in 2008 asking for some big data sets, he gave Notre Dame some hard disks and asked them to fill up the disks and mail them back to Google—it was cheaper that way.

This all seemed theoretical and silly when we were posting about it, given the latency issue. But, as I was doing a followup on Walt at Random on the 3TB hard disks (turns out they’re actually a little lighter than last year’s 2TB drives, so the bandwidth is around 250 Tb/s), I thought of a real-world use. Let’s say you’re the MPAA and you want to send “screeners” of sixty nominated movies to Oscar voters—and of course you want those movies to be viewed in true HD. You can send them a 3TB hard disk for $5.20 Priority Mail Small Box Flat Rate, for a cost of about $105 total ($100 for the disk, $5.20 for the small box—but add a few bucks to make it an external hard disk) or, probably, a spindle of 60 Blu-ray discs for not much more (less for the discs, a little more for postage). Or you can stream the movies…if the voters can take 55 days of constant 24-hour-a-day 5Mb/s broadband to get them. Which would you choose?
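The 55-day figure falls straight out of the numbers in the text (3TB of screeners, a constant 5Mb/s line):

```python
# Streaming 3 TB of HD screeners over 5 Mb/s broadband vs. mailing a drive.
DATA_TB = 3        # sixty HD movies on one 3TB drive, per the text
LINK_MBPS = 5      # constant 5 Mb/s broadband, never interrupted

bits = DATA_TB * 1e12 * 8                 # 2.4e13 bits to move
seconds = bits / (LINK_MBPS * 1e6)        # 4.8 million seconds
days = seconds / 86_400                   # ~55.6 days of round-the-clock streaming

print(f"about {days:.1f} days to stream what Priority Mail delivers in two or three")
```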

OA publishers: Just use HTML!

That’s Dorothea Salo on March 23, 2010 at the Book of Trogool, and I’d tagged it for an essay on typography that may or may not ever get written. (Given that I’m writing a book that, among other things, deals with simple but effective layout and typography, chances are increasingly “not ever” for such a C&I article.) It’s a post that I would growl about—but only if I read the title and not the essay itself.

Salo’s not saying all OA publishers should use HTML instead of PDF. What she is saying:

If you're not going to put effort into typesetting, chuck PDF. HTML is where it's at for you. Embrace the Web and its pitifully low standards for typography.

Substitute “intelligent layout, thoughtful typeface choices and general care with typography” for “typesetting,” and I agree. Not that I always follow my own advice, but if you’re producing PDFs that are in Times New Roman or Arial with overlong lines and not enough leading—well, you’d be better off dumping the PDF and producing simple HTML. Which isn’t that hard to do; Word’s “filtered HTML” isn’t great, but it can at least be reprocessed using newer styles, where a PDF is pretty much done for.

Salo offers a more cogent discussion:

It does still take more technical savvy to produce decent HTML than to produce a bad PDF from the most typical manuscript formats. Making a print CSS stylesheet for your journal—which is also a good idea, to avoid grumbling from the print-dependent—is also eggheady. If your subject area is math-heavy, you have an entire new suite of problems.

On the whole, though, it's much easier to produce good HTML than good PDF. Moreover, bad PDFs are essentially irredeemable; there's nearly no way (and definitely no easy way) to reflow, re-typeset, or otherwise reformat them. If you go the HTML route, as your skills improve you will (trust me!) learn to fix your bad HTML, and if your content-management system is any good, you'll be able to go back and fix your old articles in a decently automated fashion.

As you rebrand your journal and its look and feel, which you eventually will unless and until the journal dies, you get a bonus: automatic rebranding of your old articles! They never have to look out-of-date, as old-school PDFs often do.

I’m a great believer in PDF—when it serves a legitimate and positive purpose, as in preserving a deliberate set of layout and typeface choices. When that’s not happening—when the PDF is clumsy and seems to represent default options—then the advantages of HTML come into play. (Will the HTML for C&I get better? Probably only if C&I moves to a “Web-first” processing scheme. Stay tuned.)

11 Ideas About Which I May Be Wrong

The title, from an April 7, 2010 post by John Dupuis at Confessions of a Science Librarian, is a little misleading; he’s really pointing to a post with the same name by Joshua Kim at Technology and Learning. He notes that the piece is really about things “you’re going to have to convince me that I’m wrong” about. Kim challenges readers, “What are you wrong about?”—that is, what do you think you’re right about and would like someone to prove you wrong? Dupuis offers three:

The biggest transformation in libraries over the next 10 years will be our relationship to stuff. Crumbling media business models and a movement to open access and more broadly to open content will challenge us to find things worth paying for.

As a corollary to the first point, sometime in the next 10 years I will buy my last print book.

Perhaps the biggest challenge in our relationship to our host institutions will be justifying the expense of transforming what we now have as collection space into various spaces for students. A lot of other constituencies will want that space and that money.

As you might expect, I think Dupuis is wrong on the first, at least for libraries in general. Ten years is way too soon, especially for public libraries but also, I believe, for academic libraries—and the move to OA isn’t happening anywhere near fast enough. Can I convince Dupuis that I’m wrong? Perhaps not, any more than I’m likely to convince Dorothea Salo of my rightness in the areas where we disagree sharply. Both Dupuis and Salo are among that class of colleagues I value particularly highly: We disagree about many things, sometimes in extreme form—but never (or rarely) disagreeably, never (or rarely) stating our own stances as gospel or inevitable, and generally in ways that allow us to learn from one another.

The third? Well, yes, if academic libraries flee from physical collections, the ULs are going to have damn difficult times convincing the host institutions not to swallow up most of the library space. And as for the second, if Dupuis makes that choice, it’s just that: His choice, having little to do with whether print books are still being published.

[Even] Quicker Takes

·         Doug Johnson wrote “Augmented reality” on February 6, 2010 at The Blue Skunk Blog—a short post asserting that travel guidebooks and the like have been augmenting reality for years. An interesting perspective, but it ignores the chief objection that some of us troglodytes have to real-time augmented reality: It gets in the way of appreciating what’s in front of you. Any time you’re staring at your iWhatever, you’re filtering the live, 3D, sound-enhanced picture going on all around you through that little window that pushes other sorts of stuff at you.

·         Around February 2010, there was a kerfuffle involving a fair number of nerd sites as to whether Windows 7 used memory in a way that would yield thrashing on most computers, as pages were swapped in and out of disk-based virtual storage because there wasn’t enough real memory. I flagged a few items for use, then never got to them; the site claiming that Windows 7 was a memory hog seemed to label anyone who questioned its methodology (including such notoriously useless sites as Industry Standard, ZDNet and ars technica) as “Windows fanboys,” and basically said “we know that what we’re measuring is right, and you’re all just idiots.” And yet, and yet, very few users find that Windows 7 has difficulty handling lots of simultaneous applications with high memory requirements—although it does try to make use of all available memory for caching and precaching. I know I’ve never run into disk thrashing, but I rarely have more than six applications running at once (in addition to all the background stuff, of course). As far as I can tell, it was One Dedicated Site (quoting a 14-year-old Windows NT handbook in one case) vs. Everybody Else. It’s quite possible that ODS is right, but…well, I have yet to hear numerous (read “any”) reports of people running out of usable memory in Windows 7.

·         John Scalzi, a science fiction writer and preeminent blogger who also makes a point of publicizing other writers and their work (he’s also currently president of SFWA), wrote “eARCs: Big Fat Publicity Fail” on April 9, 2010 at Whatever. What’s an eARC? An electronic Advance Reader Copy—where you get a card and have to scratch off a lottery-like area to get a code, sign in to the publisher’s website, then type in the code to download the ARC. “This pretty much assures I won’t be reading this particular book.” After all, he has all these other ARCs that arrived in the mail, where all he has to do is open the cover, not go through this rigmarole—and he’s not ready to read full novels on his computer or (nonexistent) ereader or iPad. There’s also the issue of DRM and trust: If the eARC comes with DRM (as previous attempts did), the publisher’s saying “we want you to publicize this upcoming novel, but we don’t trust you not to make the novel available to everybody else for free.” ARCs are, in a way, requests for attention; they need to be as easy as possible. A cynic could contrast Scalzi’s attitude here with his well-known attitude on submissions for his fiction: He won’t submit to any market that requires a printed manuscript (which, until recently, included all three of the “big three” science fiction/fantasy magazines)…even though, you know, printed manuscripts are probably easier for the editors to plow through.

Cites & Insights: Crawford at Large, Volume 11, Number 6, Whole # 141, ISSN 1534-0937, a journal of libraries, policy, technology and media, is written and produced by Walt Crawford.

Comments should be sent to the author. Cites & Insights: Crawford at Large is copyright © 2011 by Walt Crawford. Some rights reserved.

All original material in this work is licensed under the Creative Commons Attribution-NonCommercial License. To view a copy of this license, visit licenses/by-nc/1.0 or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.