Cites & Insights: Crawford at Large
ISSN 1534-0937
Libraries · Policy · Technology · Media


Selection from Cites & Insights 10, Number 4: April 2010


Trends & Quick Takes

Time it Was…

T. Scott Plutchak’s June 10, 2008 post “The Instability of Information” (at T. Scott) points to Robert Darnton’s New York Review of Books essay “The Library in the New Age.” Plutchak focuses on Darnton’s belief that Google Book Search will not make academic research libraries obsolete—rather, it will make them more important than ever. This isn’t a piece about Google Book Search (I have more than 165 items on that topic tagged, and wonder whether I’ll ever use them!); nor, for that matter, is it about Darnton’s belief in the future of big research libraries (a belief I share).

Rather, it’s about the first half of Darnton’s essay, as noted by Plutchak:

Darnton argues that, contrary to the “common view that we have just entered a new era, the information age,” which he sees as rooted in the long-term view of technological transformations, “every age was an age of information, each in its own way, and that information has always been unstable.”

As a cultural historian with an outstanding reputation, he is well suited to making this claim. Years ago I was fascinated by his book, The Great Cat Massacre and Other Episodes in French Cultural History, in which he shows how our understanding of history is shaped and molded by the ways in which unstable information is passed on and examined. In the NYRB essay, he has a couple of excellent examples to make the case that “news has always been an artifact and that it never corresponded exactly to what actually happened.... News is not what happened, but a story about what happened.”

The common wisdom here in the internet age is that things are radically different from the way they’ve been before. This is the point of view that I criticized in my comments on Everything is Miscellaneous in response to Rothman’s question about what I didn’t like about the book. This predilection to see the present as radically discontinuous from the past isn’t new, of course, and it isn’t restricted to views about information. My peers and I in the late 60s believed that our generation represented a radical break, not just with our parents’, but with every generation that had gone before. We were foolish in this belief because we were ignorant of history.

The point is not that things aren’t changing, or that the world isn’t different today from what it was a couple of decades ago. The point is that this has always been the case, and our tendency to think that the world of our predecessors had a kind of stability that is lacking in the present world is an illusion. Change is continuous and incremental and multivariate and beautifully complex. When we look at the past, or try to understand the present, we break things up into epochs and ages for convenience sake. We label the decades and try to pin them like butterflies to a display board. We categorize and classify time just as we do everything else. But that’s just a way for us to abstract things so that we can find ways to understand and talk about them. Realities are far more complex.

Read those last two paragraphs again, particularly in light of generational generalizations and punditry about the digital future. That golden age of long-term stability is as nonsensical as the notion that change has suddenly become massive, overwhelming, predictable and inevitable. We’ve been dealing with change for as long as we’ve been human beings. It’s never been orderly, it’s rarely been inevitable, and there’s a natural tendency to imagine some past time when we weren’t coping with so much change.

I was reading a column in one of the “big three” science fiction magazines about the increasing difficulties with one particular theme of science fiction and fantasy: The one in which an advanced artifact from the future (or a more advanced civilization) falls into the hands of people from an earlier time, who reverse-engineer it and make incredible progress as a result.

The problem is that, even within our own history over the past century, that theme so frequently glosses over gulfs so large that reverse engineering wouldn’t help—and some of the biggest gulfs are pre-internet. For example, what would the best scientific minds of (say) 1929 make of a contemporary GPS receiver, or of a notebook computer using an Intel Core i7 CPU? (GPS doesn’t rely on the internet, and you don’t need the internet to use a Core i7-based notebook.)

In both cases, the most fundamental disconnect would probably be the little rectangular boxes on the circuit boards, each box containing from a few hundred thousand to many millions of solid-state circuits. (I’m guessing the 1929 scientists would figure out what the circuit board was and the driving voltages and principles—that’s measurement and extrapolation.) What would a scientist from 1929 do with those boxes, though? Reverse-engineer them? Using what for discovery? The circuits are far too small for any optical microscope, even if you could figure out how to disassemble a chip package without destroying the circuits. OK, so wait until the late 1930s, when scanning electron microscopes might be able to trace the circuits. But what are those circuits, in a world where (except for crystal radios) all electronics are based on vacuum tubes? That tiny little intersection is the equivalent of a tube? Sez who? (While the first transistor patents date back to the 1920s, the first manufactured transistors are from the end of the 1940s.)
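Just to put rough numbers on “far too small for any optical microscope,” here’s a minimal back-of-the-envelope check in Python. The figures are not from the original text: I’m assuming green light around 550nm, an oil-immersion numerical aperture of about 1.4 (roughly the best optical microscopy manages), and 45nm features for a Nehalem-era Core i7.

# Why optical microscopes, in 1929 or ever, can't resolve modern chip features.
# Assumed figures (not from the text): green light ~550 nm, oil-immersion
# numerical aperture ~1.4, ~45 nm features for a 2009-era Core i7 die.
wavelength_nm = 550.0
numerical_aperture = 1.4
feature_nm = 45.0

# Abbe diffraction limit: smallest resolvable separation, d = wavelength / (2 * NA)
abbe_limit_nm = wavelength_nm / (2 * numerical_aperture)  # ~196 nm

print(f"Abbe limit: ~{abbe_limit_nm:.0f} nm; feature size: {feature_nm:.0f} nm")
print(f"Features are ~{abbe_limit_nm / feature_nm:.1f}x smaller than the optical limit")

Even a perfect visible-light microscope bottoms out around 200nm; the interesting structures on such a chip are several times smaller than that, which is why our hypothetical 1929 team has to wait for electron microscopy.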

For the GPS unit, of course, there’s another fundamental disconnect: It only works thanks to a network of satellites. Before the 1970s, a GPS receiver would fall into Arthur C. Clarke’s field of sufficiently advanced technology: It would be magic. (Well, it would be a useless hunk of plastic and metal without those satellites, but still…)

Why did I mention the Intel Core i7? Because I still find it pretty damn close to magic: A chip with eight processing threads (12 in an “extreme” model), running at 2.5GHz to 3.2GHz and using as little as 45 watts of power.

Tell me you could reverse-engineer a notebook running one of those, in a world with the technology and science of 1929. Let’s make it easier: Try to reverse-engineer a 386-based notebook. After all, that only takes you back a quarter-century, to when times were simpler. Right?

Blame the User

That’s Doug Johnson’s title for a July 24, 2008 post at Blue Skunk Blog—and while his example is specific to his state, it’s not a unique local problem. Here’s the whole thing—it’s short and makes Johnson’s case better than excerpts would:

Our state’s Library Services Department wanted to collect data on school library programs using an online survey tool. Great!

We need a good set of data. We don’t know for certain how many libraries, librarians, resources or computers we have in our fair state’s schools - and whether those numbers are increasing or decreasing. It was embarrassing during legislative testimony to be asked for school library data and to not have such numbers available. The lack did not help our case.

So the intent itself was outstanding.

But the execution was terrible. Irrelevant questions, confusing questions, unreadable formatting, unreasonable tech requirements, malfunctioning website, and just an incredibly daunting length were all “features” of this survey. But school librarians in 42% of schools bravely made the attempt—including our district. Many of us tried working with the department to make the survey more useful and meaningful—work which seemed to have been simply ignored.

But this is what put me over: a scolding letter from the department saying...

Please note that of the 383 respondents, only 80 reports were correctly answered. Every library has a dictionary because of the importance of understanding the meaning of a word. It’s equally important to understand the intent of the question to obtain comparable data.

So let me understand this... Of the 42% of surveys completed, only 21% of those were completed “correctly?” That is a rate of less than 9% of possible survey returns that the state deems as “correct.”

Uh, might the problem be with the survey and not with the 91% of us who either didn’t complete the survey or got it wrong?

Creating a good survey is a task best left to professionals, not well-meaning amateurs. The validity of the data requires it.

There is a larger issue here as well: When any of us don’t get the response we were anticipating (amount of use of a new resource, attendance at an in-service, number of readers or responses to our blog, etc.), it’s very easy to “blame the user.” Maybe we should be looking at what we are offering instead.

Good intentions do not make up for incompetence.
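Johnson’s arithmetic checks out, by the way. Here’s a quick sketch (the 383 and 80 figures come from the department’s letter he quotes; the 42% completion rate is his):

# Verifying the percentages in Johnson's post.
respondents = 383        # surveys actually submitted
correct = 80             # surveys the state deemed "correctly answered"
completion_rate = 0.42   # share of schools that attempted the survey

correct_rate = correct / respondents             # about 21% of submissions
overall_rate = completion_rate * correct_rate    # share of all possible returns

print(f"Correct among submitted: {correct_rate:.1%}")             # 20.9%
print(f"Correct among all possible returns: {overall_rate:.1%}")  # 8.8%, under 9%

So yes: by the state’s own standard, fewer than 9% of the possible survey returns counted as “correct.”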

In the context of surveys, this comes up once in a while within ALA—which used to (and I believe still does) require that any official surveys within the organization go through a unit that checks them for sound survey design. It slows things down a little, but it also avoids manifestly incompetent surveys. And boy, are there a lot of manifestly incompetent surveys—not only ones that fail along the lines Johnson notes, but surveys with leading questions and other design flaws that make them fundamentally useless.

Johnson’s broader point is also worth noting: If you’re not getting the results you expect, it may not be the patron’s fault. (No, I do not believe that the patron is always right—but it’s a good starting point.)

In another conversation I chose not to get into, it became clear that enthusiasm does not make up for incompetence. I’m not sure the lesson here is much different. I am sure that, like it or not, competence must be judged (at least partially) by results.

Speaking of which…

What Web 2.0 Teaches Us…

Andy Powell, August 15, 2008, eFoundations. It’s a post that clearly gave Powell trouble, as he prefaces it with a suggestion that “it’s intended to be somewhat tongue-in-cheek and humorous but like most such things, from my perspective at least, I think it contains at least a grain or two of truth.” The post, slightly excerpted…

The advent of desktop publishing software, way back when, showed us that although pretty much anyone could use clip-art and fonts, most people weren’t (and indeed still aren’t) graphic designers. Over the years we’ve mostly got used to calling in the professionals whenever necessary, though there is always a place for do-it-yourselfness.

So, what does Web 2.0 tell us?

·         That anyone can blog but not everyone can write (or even spell-check!)?

·         That anyone can podcast but not everyone is a radio chat-show host?

·         That anyone can make a video but not everyone is a TV presenter?

In short... Web 2.0 technology democratises production but creative talent and presentation skills remain rare commodities…

Seesmic is a good case in point. Seesmic is a kind of video Twitter. It’s a brilliant idea and has been well executed technically. The trouble is, like the video phone, one is left asking, “Do I actually need this?” (by which I mean, “Does video really add anything to what I’m trying to do here?”)…

Take 140 characters from Twitter, turn it into anywhere between 30 seconds and 5 minutes of variable quality audio and video, where the video carries no additional information over the audio and where the audio carries little additional information over the original 140 characters. That’s Seesmic in a nutshell.

Now... maybe I fall into the category of “people who haven’t tried it and therefore don’t get it”? Maybe I’m just plain wrong and within a month I’ll be Seesmic’ing with the best (and worst) of them! Anything is possible - stay tuned to find out...

With apologies to everyone and no-one.

I appreciate that I’m sounding a bit like Andrew Keen. But that’s not my intention. My point is not that amateurs don’t have anything interesting to say—I think they do—and indeed, for the most part I include myself as one. My point is that our desktop use of audio and video in particular tends to highlight an amateurish approach to production…

Reflecting on this for a while, I think the problem is two-fold. Firstly the linear nature of audio and video tends to defy attempts at scanning the content. Fast-forwarding and reversing are difficult at best, as is getting a feel for whether the next 3-5 minutes of audio/video is worth sticking around for (though Slideshare slidecasts offer an interesting counter-example, since the slide transitions do give a nice way of quickly navigating the content). These tasks are much easier with text and most of us have well-honed skills at scanning and appraising textual material pretty quickly (even where that material is just a 140-character tweet). Secondly, the problem is not so much with the video quality (shaky camera work and the like—I’m quite happy with that within reason)—it’s with the audio. Some people’s voices simply become wooden when faced with a microphone and the ‘record’ light, to the point that listening to them is painful…

Powell isn’t writing off the technologies any more than he’s writing off blogging or desktop publishing. And he’s not saying anything I haven’t said before in different ways. Quickest summation: The medium is not the message, and making the medium easier doesn’t improve the content. Or, to put it another way, channels are easy, content is hard—and multimedia content is harder. (See my April 2006 EContent column, “Rich Media is Hard.”)

Doesn’t mean you shouldn’t do it. Does mean you shouldn’t assume you’ll be great at it, and maybe shouldn’t denigrate boring old easy-to-skim text. Which, of course, isn’t all that easy to write.

Checklists for Writing and Publishing

Why is this in Trends & Quick Takes? I haven’t been writing about writing as such lately, and certainly not about creative writing. And I have a general dislike of lists. But there are always exceptions.

David Booker quoted a list of eight “basics” of what Kurt Vonnegut calls Creative Writing 101, in a March 11, 2009 post at The Centered Librarian:

1. Use the time of a total stranger in such a way that he or she will not feel the time was wasted.

2. Give the reader at least one character he or she can root for.

3. Every character should want something, even if it is only a glass of water.

4. Every sentence must do one of two things—reveal character or advance the action.

5. Start as close to the end as possible.

6. Be a sadist. No matter how sweet and innocent your leading characters, make awful things happen to them—in order that the reader may see what they are made of.

7. Write to please just one person. If you open a window and make love to the world, so to speak, your story will get pneumonia.

8. Give your readers as much information as possible as soon as possible. To heck with suspense. Readers should have such complete understanding of what is going on, where and why, that they could finish the story themselves, should cockroaches eat the last few pages.

The greatest American short story writer of my generation was Flannery O’Connor (1925-1964). She broke practically every one of my rules but the first. Great writers tend to do that.

I’m very much not a creative writer (that is, the writing I do doesn’t fall into the “creative writing” category), but I do read a fair amount of fiction and watch TV and movies, which should follow many of the same rules (setting aside #4).

My take? #2 is critical, and is one of the things that causes me to stop watching some movies early (and give up on some TV shows altogether). I suppose that makes me intellectually lazy, but—at least at my age and knowing how many books (etc.) are out there that I want to read—life really is too damn short to spend on wholly depressing fiction.

Similarly #3: Characters with no motivation make cardboard look lively by comparison. As to others…I’m not sure #6 is always necessary, but I’m sure some conflict is part of any good story. And, of course, you shouldn’t read the eight “rules” without reading that last paragraph.

The other checklist is by Allan Mott, offering “50 reasons no one wants to publish your first book.” David Booker quoted the first five in a March 31, 2009 post at The Centered Librarian, linking to the remainder at bookgasm. I’ll quote a few of the cleaner ones.

2. There’s this thing called punctuation. You might want to look into it.

9. Submitting a manuscript handwritten in your own blood does indicate your passion for the material, but not quite in the way you might have hoped.

11. Iambic pentameter? Really?

14. William Burroughs was a broken-down beatnik junkie genius; you’re a wannabe-hipster asshole imitating a broken-down beatnik junkie genius.

29. Everyone who attempts to load a copy of the manuscript onto their Kindle is found dead three hours later.

33. Writing a book about vegetarian zombies kinda indicates you don’t exactly know why people like zombies in the first place.

38. For the first 20 pages, everyone who reads it is certain it’s the funniest book they’ve ever read. Unfortunately by the 21st, they finally realize you’re actually being serious.

45. A general rule to follow when writing for kids: If you could go to jail for saying it to them in person, you’re better off not putting it into print.

Tempted as I am to quote #39, you’ll have to go read it yourself, particularly since it would seem that there are exceptions… (Coming next issue: Making it Work and vampires!)

Quicker Takes

·     Reality check 1: From the October 26, 2009 Fortune comes a little chart titled “Still a Juggernaut.” Time Inc. publications tend to fact-check pretty well, so I’d assume this is right. To wit: Market share (installed base) for operating systems as of May 2009: Microsoft Windows, 93%; Mac OS, 3%; Linux, 2%. Market share (sales) for Office-type suites in 2008: Microsoft, 94%; Adobe, 4%; Apple, 1%; Other, 1%.

·     When the mourning’s over: The November 2009 Sound & Vision notes that Toshiba has introduced the BDX2000—a $250 Blu-ray Disc player. After Toshiba gave up on HD DVD (being, notoriously, the only player in the DVD Consortium that actually put any money into the format), it brought out an upscaling DVD player with an ad campaign suggesting that it yielded results just about as good as Blu-ray Disc. That was nonsense, of course…and now Toshiba’s joined the crowd.

·     Speaking of Blu-ray Disc: I put up a blog post noting the availability of two BD players for under $80 on Black Friday. Those were one-day or one-weekend specials, and the players were minor brands. But since then, you can buy a BD player for $150 from almost any major brand.

·     Slowly catching up, I was going to say something about Leslie Johnston’s take on a ReadWriteWeb August 2008 column on everything moving into the web—a column rife with RWW’s assured, deterministic, “everything will be” attitude (not “most people will do most of their computing in the cloud,” but “the browser is going to swallow up the desktop” very quickly and, presumably, for everybody). Johnston was commenting on a comment about us (explicitly “we all”) shifting from “being librarians” to “being daytraders”—both because the RWW writer has no idea what librarians actually do and because the assumption that we’d stop managing information is absurd. But as I read the RWW column, I just have this dreamy feeling that I’m in some Victorian novel—or maybe in a lazy summer afternoon in Dayton, Ohio, in 1903. It’s so predictable, so inevitable, so universal, so…drearily simplistic and, effectively, old-fashioned in its monolithic future. (Not Johnston’s post, August 19, 2008 at Digital Eccentric: that’s actually quite good.)

·     Ars Technica had a snarky item in September 2008 noting SanDisk’s “slotMusic” format and how it was doomed to failure—because it was (and is) a physical medium for music and, you know, nobody buys music in physical form. (In 2008, that “nobody” was still around 80% of all music sales; in 2010, it’s still a majority, although probably not for long.) The slotMusic thingies were going to be 1GB microSD flashcards with an album on them in 320K MP3 format; you could load more stuff onto the rest of the chip if you chose. It’s snarky—and assumes nobody has phones or MP3 players that have microSD cards. Wrong even in 2008, wronger in 2010. The slotMusic format was never going to be a universal medium—but it didn’t have to be. Despite pretty much universal derision from techies, the initial release didn’t do badly—and all SanDisk Sansa MP3 players have included microSD slots for years now. There still are some slotMusic releases, but those microSD slots (apart from making a great cheap way to turn a 2GB MP3 player into a 4GB player, as I did last year) are also useful for something else: slotRadio. Same form factor—but with 1000 songs on a 4GB card for $40 (frequently discounted to $30). They’re “radio” format (you can skip songs as often as you want or switch playlists, but you can’t select individual songs), but that’s still three or four cents a song, enough of a bargain for the ten (as of this writing) slotRadio cards to make a certain amount of sense. (Each card features a genre—country, 80s & 90s, rock, oldies, hip-hop, or 60 hours of classical.) A niche product, but niche products can be profitable.

·     Remember Jones Day v. BlockShopper? Probably not. It was an odd lawsuit in 2008 in which a law firm sued a little website, BlockShopper, that shows who purchases properties in specific city neighborhoods—all of it public information. Two lawyers from the firm purchased properties in one of the covered neighborhoods—and the law firm claimed that inclusion of its trademarked name and linking to the lawyers’ bios on the firm’s website was trademark infringement. Sadly, the case wasn’t thrown out of court; it was settled, in a manner that makes the BlockShopper information less readable. The significant result: It’s now easier for other companies to try to interfere with fully legitimate web stories, at least those stories that include links.

·     I’m no fan of Jeff Jarvis, but give the man credit for honesty, as quoted in a February 16, 2009 post on Reflections of a Newsosaur. The blogger (Alan D. Mutter) is asking “What would Jeff Jarvis do”—or, rather, given Jarvis’ “deeply held belief” that content should be free, why it cost $15 to $27 for his then-new book What Would Google Do?—and almost $10 for a “video infomercial” on the book. Jarvis’ answer: “I’m a hypocrite. I didn’t put this book up as a purely digital, searchable, linkable entity—I didn’t eat my own dog food—because I got an advance from the publisher, and other services. Dog’s gotta eat. I couldn’t pass it up.” But, you know, the rest of us are supposed to suck it up and put on shows, since content has to be free. Right? (At least one commenter wasn’t buying the “Dog’s gotta eat” argument for someone like Jarvis: “No, he just needs the money because he likes money.” A bunch of commenters unloaded on Mutter for what I took as a humorous post—and one or two, familiar with Jarvis’ record, unloaded on Jarvis instead.)

·     Recognizing your real audience: As I’ve noted elsewhere, I only read Fortune because, for reasons I don’t comprehend, Time Inc. threw in a three-year subscription (for this roughly fortnightly magazine) along with a cheap ($30) three-year subscription to Money. (Not quite as bad as the time I got a letter inviting me to subscribe to Time for a year at a special professional rate of…whatever I wanted to pay. I was tempted to send a check for $0.01, and have no doubt they would have accepted it, but I don’t take magazines I don’t have time to read…) Anyway, there’s a full-page ad in the Marketplace section of the December 7, 2009 issue; I can’t imagine the advertiser would buy a full page of a large-circ magazine unless they thought there were lots of potential buyers among subscribers. The product advertised? Verari Systems servers and “containers” (big boxes’o’servers). The smaller product insert, the Bladerack 2 XL, starts at $124,999 (hey! it’s under $125K!), with up to 72 Intel Xeon-based servers and up to 1.3PB (that’s petabytes) of capacity. The larger insert is for the Forest Container—up to 2,880 processors and up to 26PB storage capacity. Starting at a mere $749,999—under $750 thousand! (The big type: “So energy efficient you may wonder if it’s plugged in”—and yes, the Forest Container is painted to look like a forest.) Maybe I’ll add it to my Amazon wish list…I bet the Forest Container would run Word 2007 really fast. And I could save all my drafts in 26PB…several million times over.

·     Tim Spalding of LibraryThing posted “Review integrity, reviewer freedom and pay-for-review marketing” on March 25, 2009 at LibraryThing Blog. He cites “bottom feeders” in the area of book-based social networking: “Top of my list are companies that charge hopeful authors for positive reviews, which are then owned by the company, edited by them and posted mechanically on multiple social networks and commercial sites over the web, on Twitter and so forth.” He’s encountered an outfit that charges $425 for reviews—which are then posted to LibraryThing, Google Books, Fetchbook and Worldcat.org. “This organization has posted 94 reviews—$39,950 in theory—and wouldn’t you know, all of them were five-star reviews!*” The asterisk points to a footnote saying that he’s avoiding publicity for the reviews and that he’s removed them all. On the other hand, Spalding finds it only reasonable for publishers to give away books in hopes of getting reviews (that’s nearly universal: Very few book reviewers pay for the books they review!), and he isn’t even against saying “If you take the book, you need to review it”—as long as the requirement isn’t for a positive review. He’s added a careful clause to LibraryThing’s terms of use—forbidding reviews that come from “positive-review-only” giveaway programs and all paid book reviews. One commenter objects to the second half, but—I believe—based on a misconception: When you’re paid by a journal to review a book, that’s very different ethically from being paid by a publisher or author to review a book. Where does that leave Kirkus Discoveries, which very pointedly carries the “Kirkus” name and charges $400 for a guaranteed review? Well, Kirkus doesn’t guarantee a positive review and they’re targeting self-published and independent authors, but it’s at best a gray area…the more so because the $400 gets the author the review, and the author then decides whether the review should appear on the Kirkus Discoveries website. (The Kirkus Discoveries newsletter “highlights the best submissions,” so it’s fair to assume there are very few damning reviews.) There are other open paid-review services—for example, “An Honest Read” (yes, authors can kill negative reviews); and ForeWord sells reviews for $99 or, under the Clarion name, for $305. As you might guess, much as I’d love to have any reviews of some of my self-published books, I’m with Spalding on this one, and I’ll certainly never pay for such a review.

·     Peter Bromberg (I think) at Library Garden and David Booker at The Centered Librarian both had a little fun (on April 3, 2009) with a study from the University of Melbourne showing that, as Bromberg put it, “not working makes you a better worker.” Or, more specifically, that “WILB”—researcher Brent Coker’s acronym for “workplace Internet leisure browsing”—helps to sharpen workers’ concentration. The study, of 300 people, purports to show that “people who use the Internet for personal reasons at work are about 9 percent more productive than those who do not.” (That might be true: The others are too busy playing Solitaire.) Bromberg’s take: “If your boss still has a problem with your wilbful behavior, you can claim, ‘I just have a bursty style, not a busy style, which means that although it might appear to the untrained eye that I’m never actually working, you’ll notice that all my work actually gets done.’ If this line is delivered correctly, it will create a moment of confusion as your boss ponders the busy/bursty conundrum, giving you a small window of opportunity to slip away for a donut break.” Booker treated it more seriously, noting that the study deals with workers spending less than 20% of their time on “short and unobtrusive” web breaks—and that, at a former place of work, the policy manual required spending at least an hour a day “browsing the web” to keep pace with innovation.

·     I don’t think this will fit neatly anywhere else, and it’s too lovely not to point out: Patrick H. Alexander’s “What Just Ain’t So”—published April 6, 2009 in the “Views” department of Inside Higher Ed (www.insidehighered.com/views/2009/04/06/Alexander). Alexander, editor-in-chief of the Pennsylvania State University Press, deals with manuscript reviewing for scholarly monographs and cites some problematic forms of peer review. His examples are “the New York Times book review”—where the reviewer does a “book review” instead of a peer manuscript review; the “why-didn’t-you-write-a-different-book?” review (which, post-publication, may be the most offensive form of book review); the slashing review; and, “perhaps the most frustrating review and most impervious to fixing”: the “intellectual comb over,” the peer review that clearly fails to actually review the manuscript. It’s a 2,100-word piece (as compared to the 126 words in this bullet prior to “It’s a”) and a delightful read. The comments are also interesting and challenging.

Cites & Insights: Crawford at Large, Volume 10, Number 4, Whole Issue 127, ISSN 1534-0937, a journal of libraries, policy, technology and media, is written and produced by Walt Crawford, Editorial Director of the Library Leadership Network.

Opinions herein may not represent those of LYRASIS.

Comments should be sent to waltcrawford@gmail.com. Cites & Insights: Crawford at Large is copyright © 2010 by Walt Crawford: Some rights reserved.

All original material in this work is licensed under the Creative Commons Attribution-NonCommercial License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/1.0 or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.

URL: citesandinsights.info/civ10i4.pdf