Cites & Insights: Crawford at Large
ISSN 1534-0937
Libraries · Policy · Technology · Media


Selection from Cites & Insights 12, Number 3: April 2012



The Middle

As Long As It Works…

Keep using it. That’s a fitting intro for this episode of The Middle, another segment of catching up with old T&QT items. It’s also the title of a July 11, 2010 post by K.Mandla at Motho ke motho ka botho. Which may require a little explanation:

The name of this blog, which is probably why you are here, is the Setswana translation of the ubuntu precept, that “a person is a person because of ubuntu.”

Beyond that, it’s probably worth mentioning that I have been a Linux user since late 2005, and started out with Ubuntu, like many other people. In fact, I still volunteer as a moderator on the Ubuntu forums.

It’s a charming post, especially for a tech conservative like me. Most of it:

Maybe it’s a joke and maybe it’s not, but I occasionally get notes from people reminding me that 1996 is over, and it’s time to toss most of the computers I own into the rubbish bin.

And of course, I ignore them, mostly because the people who write them are obviously juveniles (their inability to type in words longer than two or three letters is usually a clue), or just hoping for an equally acid response. But I’ve worked with enough trolls to know not to feed them, so those notes usually go straight to the electronic graveyard.

The last one, just within this past day, included a link to this rather snotty article on msn.com, reminding the world that things like fax machines and CB radios—along with any sort of disk drive, which is probably why it was sent to me—are not only obsolete, but very uncool.

I don’t believe I commented on that Dan Tynan article, “Ten technologies that should be extinct (but aren’t).” It’s a piece of work: an “if there’s a digital alternative, the old technology must go away now!” triumphalist cry, denouncing not only telegrams and typewriters but also landlines, turntables, cash registers (after all, cash itself is a dinosaur: Tynan says so) and disc drives (with, Gaia help me, a quote from Rob Enderle).

At the outset, I have to warn you that I am impervious to the slur “uncool.” I wear boring, uninteresting clothes to work each day that I got from boring, uninteresting sales in boring, uninteresting shops, and I did that on purpose because I have my own philosophies on cool … and they go beyond the computers I work with…

But all obvious shortcomings aside—and also acknowledging that I don’t know Dan Tynan from Adam—I don’t see that it matters how “old” any particular technology is, so long as you are satisfied using it to do the job. Mr. Tynan’s snide comments about typewriters or turntables are completely meaningless to the people who prefer those devices, and no amount of heckling will convince them otherwise…

The bottom line is this: Mr. Tynan—or any modern tech pop writer, for that matter—can giggle all he wants about Western Union telegrams or instant cameras, but chances are the people who use those things don’t really care what Mr. Tynan or his friends think. They use them because they do the job, and because they’re happy with them.

And that’s the way the world should work, really. I say so long as the technology works, and you’re comfortable using it, then go forth and pursue happiness and freedom in any way possible. Ride a bicycle to work. Write a letter with a pen. Talk face to face with your neighbor—all those things are quite obsolete too, I should think.

The fact is, if you stop worrying about the technology you use for the job, you can spend more time focusing on the job. And if the job is anything at all that you remotely enjoy, then it won’t matter to you what technology you use. And the same goes for floppy disks, which I still have lots and lots of … and use with surprising frequency.

Emphasis added, because that’s the heart of the whole thing. Almost all commenters agreed—but that’s not unusual.

Paperless offices—a rant

That’s the title of Richard Watson’s July 15, 2010 post at What’s Next: Top Trends, and it’s probably included in different form in his latest futurist book. He’s asserting that while paper consumption in offices increased from 1990 to 2001, it’s decreased since then—and he’s not sure that’s a good thing. Excerpts:

Generation Y, the generation born roughly at the same time as the Personal Computer, has started working in offices and these workers are comfortable reading things on screens and storing or retrieving information digitally. Moreover, digital information can be tagged, searched and stored in more than one place so Gen Y are fully aware of the advantages of digital paper and digital filing. All well and good you might think but I’m not so sure.

One of the great advantages of paper over pixels is that paper provides greater sensory stimulus. Some studies have suggested that a lack of sensory stimulation not only leads to increased stress but that memory and thinking are also adversely affected.

For example, one study found that after two days of complete isolation, the memory capacity of volunteers had declined by 36%. More worryingly, all of the subjects became more suggestible. This was a fairly extreme study but surely a similar principle could apply to physical offices versus virtual offices or information held on paper versus information held on computer (i.e. digital files or interactive screens actually reduce the amount of interaction with ideas).

Now I’m not suggesting that digital information can’t sometimes be stimulating but I am saying that physical information (especially paper files, books, newspapers and so on) is easier on the eye. Physical paper is faster to scan and easier to annotate… Paperless offices are clearly a good idea on many levels but I wonder what the effects will be over the longer term?

Am I ready to cheer this futurist’s conservative take in this case? I’m not sure. For one thing, looking at overall use of paper doesn’t show whether people are using print where it’s most appropriate. My own use of paper as an output medium has decreased substantially in the past few years, because I no longer print out articles in full if I plan to comment on them. At most, I print out the first page. I might print stuff out if I really wanted to remember it—but there’s a lot of stuff, and even more in typical office life, where remembering the whole thing is pointless and possibly even harmful.

I don’t think we’re headed toward paperless offices. What I think we’re seeing is a trend toward paper-less offices: Offices where less paper is used than in the past, where you only print out multipage memos and reports that you need to think about deeply. I don’t think that’s a bad thing.

Comments?

The item I tagged is a tiny little boing boing post on July 16, 2010—posted by Xeni Jardin, but other than three words it’s all a quote from Gene Weingarten’s Washington Post column of July 18, 2010. (How is it possible that Jardin’s quoting Weingarten two days before his column appears? I can only guess that it’s a Sunday column posted in advance on the website—that, or Jardin’s psychic.) To my considerable astonishment, given the way these things usually work, the link to the column still works 20 months later.

Weingarten is lamenting changes in the newspaper workplace, starting with the confusion of new job titles:

Every few days at The Washington Post, staffers get a notice like this: “Please welcome Dylan Feldman-Suarez, who will be joining the fact-integration team as a multiplatform idea triage specialist, reporting to the deputy director of word-flow management and video branding strategy. Dylan comes to us from the social media utilization division of Sikorsky Helicopters.”

He liked the old way better:

On deadline, drunks with cigars wrote stories that were edited by constipated but knowledgeable people, then printed on paper by enormous machines operated by people with stupid hats and dirty faces.

Based on the good old days at the San Francisco Chronicle, some of us bemused readers assumed that it was the proofreaders who were drunk, not necessarily the reporters—and the quality of proofreading has actually improved over the years (at the paper, at least; some magazines and online sites seem to have abandoned copyediting and proofreading entirely). Anyway, this column notes the changes when a newspaper’s website enters the picture (go read it yourself: It’s fun and not all that long, and it’s really about headlines and how going online screws them up). What Jardin picked up on was Weingarten’s quick take on online comments:

I basically like “comments,” though they can seem a little jarring: spit-flecked rants that are appended to a product that at least tries for a measure of objectivity and dignity. It’s as though when you order a sirloin steak, it comes with a side of maggots.

Looking at comment streams at many newspaper websites, online news sites and far too many other places (I shudder to even name some of them—no, YouTube’s not the worst), it’s hard to disagree, although there are lots of exceptions.

So there’s a one-paragraph boing boing item, all but three words of it a quote. Accompanied, of course, by the heart of it: Nearly 100 comments about comments. Jardin kicks off by saying that boing boing commenters aren’t like that—“But anyone who’s spent any time on the internet knows exactly what this guy’s talking about.” boing boing does exercise strong moderation, as does Whatever, a site where the comments are generally interesting and literate. In this case? Given that it’s a metastream (website comments about website comments), it’s fine reading, with a semiserious point included once in a great while. One of the best of those semiserious comments is by JakeGould:

The comments on most mainstream sites are dreadful. It’s like someone brought a laptop to a newsstand/corner store and let every chucklehead who is waiting in line for Lotto tickets to air their opinion.

And, at the point that people were creating maggot memes, this gem by Antinous: “The plural of maggots is not data.”

As to comments attached to the column itself? It’s a miracle that the column’s still available. Clicking on the comments link results in an animated thing saying it’s going to get them…and it never does. Supposedly, they were worth reading.

The Internet Makes Us Cocky, Not Stupid

A great title for a relatively short item, by Heather Horn on July 26, 2010 at The Atlantic’s website. She’s citing an LA Times article by Christopher Chabris and Daniel Simons—and, oh look, a second miracle: that July 25, 2010 article is still available.

Chabris and Simons are commenting on Nicholas Carr’s The Shallows: How Internet Alarmism Is Selling Books For Me (I may have this subtitle wrong) and other digital alarmists, such as those claiming that Google’s making us stupid, and their title is a tipoff: “Digital alarmists are wrong.” (Reading the Times article online makes me say Google’s making me annoyed, rather than stupid, as there are not only five banner ads by Google but also fourteen text ads interrupting the article.)

Chabris and Simons, both psychology professors, suggest that the alarmists are less able to concentrate now than they were 10-15 years ago “simply because they are 10 to 15 years older.” I think that may be facile (but then, I’m 66, so I would think that, wouldn’t I?), but I’m inclined to buy this paragraph:

The appeals to neural plasticity, backed by studies showing that traumatic injuries can reorganize the brain, are largely irrelevant. The basic plan of the brain’s “wiring” is determined by genetic programs and biochemical interactions that do most of their work long before a child discovers Facebook and Twitter. There is simply no experimental evidence to show that living with new technologies fundamentally changes brain organization in a way that affects one’s ability to focus. Of course, the brain changes any time we form a memory or learn a new skill, but new skills build on our existing capacities without fundamentally changing them. We will no more lose our ability to pay attention than we will lose our ability to listen, see or speak.

Then things get a little trickier, as the writers seem to take on sustained concentration itself:

[T]he notion that prolonged focus and deep reading mark the best path to wisdom and insight is just an assumption, one that may be an accidental consequence of the printing press predating the computer. To book authors like us it seems a heretical notion, but it is possible that spending 10 or more hours engrossed in a single text might not be the optimal regimen for building brainpower.

I find their example—chess grandmasters who now flicker through hundreds of games rapidly rather than studying chess books for hours—somewhat irrelevant, and I’m not convinced that you can generally substitute quick overviews for deep reading, at least not if you really want to know a subject. But I’ve never been convinced that using the internet is somehow changing my brain or making it impossible to read long texts; it’s just another choice.

What Horn seizes on, more than the original article, comes near the end:

The more different ways technology gives us to multitask, the more chances we have to succumb to an illusion of attention—the idea that we are paying attention to and processing more information than we really are. Each time we text while we are driving and do not get into an accident, we become more convinced that we can do two (or three or four …) things at once, when in reality almost no one can multitask successfully and we are all at greater risk when we do so. Our capacity to learn, understand and multitask hasn’t changed with the onslaught of technology, but our confidence in our own knowledge and abilities have.

So Google is not making us stupid, PowerPoint is not destroying literature, and the Internet is not really changing our brains. But they may well be making us think we’re smarter than we really are, and that is a dangerous thing.

In this case, while Horn doesn’t add a lot to the original article through her commentary (she uses some of the same selections I do), she adds value by shifting the focus slightly through the headline itself. (Not that there’s anything wrong with excerpting interesting articles…)

In Other News, Wired Is Still Wired

I had two items from Wired.com tagged here. One was a fairly long item claiming that a “scientific” survey had found that most iPad owners were “selfish elite,” arrogant, wealthy and disinclined to care about others, while most iPad critics were geeks and salt of the earth—or something like that. I found it amusing because Wired gives so much press to iAnything—and because the characterization of even the earliest iPad owners was so over-the-top. It was, I suspected, just a ploy to get lots of comments…and, of course, it succeeded. Looking back at it now, it’s not worth linking to or commenting on. (The two early iPad owners I know best are about as humanitarian and altruistic as anybody I know. They’re also reasonably well off, to be sure.)

The second is a short item that is wholly recursive: As far as I can see, it makes no sense except as an example of its apparent topic. It’s by Charlie Sorrel, posted July 29, 2010, and it’s entitled “The Cult of Apple: When Even a Battery Charger is Big News.” Sorrel claims that the Apple Battery Charger has “been all over the internet.”

It’s a nice charger, to be sure: it minimizes “vampire draw” by shutting off the power when the batteries are charged. It ships with six batteries which should last up to ten years and it has the usual Apple polish in the form of coded flashing or steady amber and green LEDs. But does this really warrant the amount of coverage that is being given to a battery charger? After all, there are countless chargers out there that are better featured, or simpler, and certainly cheaper.

What this insane news coverage really tells us is that, despite the endless whining comments to the contrary, Apple news is big news. People read it, people want it, and people click on it. Sure, Apple benefits from the almost continual din of free publicity, but so do the people publishing the news. And so do you, the reader: From the amount of interest in any Apple news, it’s obvious that it is in demand.

Really? There was that much coverage for a $30 charger that only holds two batteries—one that Sorrel admitted he’d probably buy? Sorrel certainly added to the media coverage—with a big ol’ picture of a tiny little charger. I find his justification for doing so transparent in its use of “benefit”—which means “provides more chances to shove lots of ads in front of your face.” I suppose that benefits the reader. I’m not quite sure how.

Do the Wave!

A cluster of items from August 2010 with a common theme: Google Wave and why it never amounted to much. Perhaps worth mentioning a couple of years later as a reminder that Google has never been infallible, even when the company was clearly excited about a new service (and even back when it was still possible to take “don’t be evil” seriously, although that may be a stretch).

Update on Google Wave

This one’s from the source: the Official Google Blog, posted August 4, 2010 by Urs Hölzle. Extensive excerpts (the central three paragraphs of a five-paragraph post):

Last year at Google I/O, when we launched our developer preview of Google Wave, a web app for real time communication and collaboration, it set a high bar for what was possible in a web browser. We showed character-by-character live typing, and the ability to drag-and-drop files from the desktop, even “playback” the history of changes—all within a browser. Developers in the audience stood and cheered. Some even waved their laptops.

We were equally jazzed about Google Wave internally, even though we weren’t quite sure how users would respond to this radically different kind of communication. The use cases we’ve seen show the power of this technology: sharing images and other media in real time; improving spell-checking by understanding not just an individual word, but also the context of each word; and enabling third-party developers to build new tools like consumer gadgets for travel, or robots to check code.

But despite these wins, and numerous loyal fans, Wave has not seen the user adoption we would have liked. We don’t plan to continue developing Wave as a standalone product, but we will maintain the site at least through the end of the year and extend the technology for use in other Google projects. The central parts of the code, as well as the protocols that have driven many of Wave’s innovations, like drag-and-drop and character-by-character live typing, are already available as open source, so customers and partners can continue the innovation we began. In addition, we will work on tools so that users can easily “liberate” their content from Wave…

That’s straightforward. Google thought it was a breakthrough, there was lots of enthusiasm…and it didn’t go anywhere. Some of us were so uninterested in things like “character-by-character live typing” that we went out of our way to avoid Wave; others just didn’t see the point in most real-world applications.

Why didn’t Google Wave boot up?

That’s Dave Winer’s question at Scripting News on August 5, 2010—in a post illustrated by a big picture of Julia Child for no apparent reason. Winer identifies himself as a specialist in “the kind of software that Google Wave is”—and cites blogging, RSS and podcasting as examples. As he notes, there have been more failures than successes in the field.

So there’s no shame, as far as I’m concerned, in trying to launch a network of computer users, and having it not boot up.

Why didn’t Wave build?

Here’s the problem—when I signed on to Wave, I didn’t see anything interesting. It was up to me, the user, to figure out how to sell it. But I didn’t understand what it was, or what its capabilities were, and I was busy, always. Even so I would have put the time in if it looked interesting, but it didn’t.

But he cites the invitational nature of Wave as a bigger problem.

I assume they were worried about how the system would perform if they got too many users. It’s as if, starting a baseball season, you worry about where you’re going to put the World Series trophy. It’s not something you need to worry about. You might even say you jinx your prospects for success if you put that in the front of your mind.

He offers five key characteristics of what he saw in Wave: Hard to understand; nothing happening; my friends aren’t there; if they wanted to come, I’d have to get them invites; why should I bother? He contrasts that with his early use of Twitter: Easy to understand; stuff already happening; some friends were there; anyone could join; no real reason to bother—but it seemed worth writing about. [Emphasis added, some items reworded.]

He’s not offering sure-fire formulas: He doesn’t have one and I don’t believe he thinks there is one. “Even if everything is right, the net might not boot up.” As he notes, it took a few tries to get podcasting going (assuming it still is) and there were a lot of community blogging sites before Blogger. “Sometimes it’s just the timing.”

One of the modest number of comments strikes me as particularly relevant, from “pickme2”:

Invited my friends, we all played, waited for extensions that never arrived and eventually (actually very quickly), attrition rode our little community’s wave.

Google fails again

That’s Phil Bradley’s title for an August 5, 2010 post at Phil Bradley’s weblog, and Bradley says this is a late admission of “what everyone has known for a very long time—Google Wave has tanked.” He says “Google didn’t actually know what it was for” and that it’s “just another reminder to everyone—Google is actually an astonishingly inept and incompetent company.” That’s Bradley’s opinion; I might say “Google’s willing to throw lots of things out and hope that a few of them stick really well.”

He notes the history: Wave started out with 6,000 developers, then opened up to 100,000 people for testing, but didn’t become openly available until March 2010, by which time it was almost certainly too late. He notes that Gmail also rolled out slowly—but it’s a different kind of product, for person-to-person communication.

Social networking tools are by their very nature, social. Which means lots of people have to play around with them. They morph and change over time as users start to do different things and they help assist the development of the product. Google doesn’t like that, because Google thinks that it knows best. The idea that they might be wrong doesn’t really occur to them, and I do actually find it quite shocking that they’re pulling Wave quite so early—it’s not even been 6 months yet! Rather than say ‘look, this isn’t working as we thought, what shall we do to change it and improve it?’ Google has done what Google always does—closes the door and walks away.

Bradley cites some of Google’s other apparent failures: Orkut (still big in Brazil), Lively (who?), Google Answers and a bunch of others (e.g. Knol, which Google shut down much more recently). Oh yes, and Google’s irritating attempt to court librarians…

There’s more to the post and it’s worth reading in the original, even if you don’t agree that Google itself (that is, the search engine) is “rubbish” or “a poor product.” [Admission: Bing is my primary web search engine with DuckDuckGo as an alternate, switching to Google when I need 100 results per page.]

Bradley’s tougher on Google than I would be, but he spends a lot more time looking at search engines than I do and has much more expertise in that field. (I may be handicapped by having lived in Mountain View and worked half a mile from the Googleplex for years: I’ve been acquainted with some Google people, and “Google has too much money and too few brains” strikes me as harsh, just as the suggestion that Google hasn’t been innovative since 1999 is, I believe, wrong.) He concludes:

I look at any Google innovation with considerable skepticism now, and I’m not going to put any work into anything that they produce because they may well can it in a few months, and all that work has gone down the drain. That’s the other downside of their breathtaking incompetence—I simply don’t trust them an inch, and never will, and I’m far from the only one! That’s not only bad news for Google, it’s bad news for the entire industry.

Whew. I agree that it makes sense to look at Google innovations with “considerable skepticism,” but I’d say exactly the same about innovations from Apple, Yahoo! (have there been any?), Microsoft, AOL, Facebook, Twitter….

The comments are interesting and worth reading, some high-fiving Bradley, some disagreeing. His response to one comment that takes him to task for calling Google (search) “rubbish” is interesting and fairly persuasive. In part:

Google gives different results according to capitalisation or not of Boolean operators. fish AND chips gives different results to fish and chips. Ditto for or/OR

Search functionality works differently depending on capitalisation of the syntax, so Site: gives different results to site:

There’s no consistency with syntax either, so in one case we do site:.ac.uk, but filetype:.pdf doesn’t work.

Can Google do proper proximity searching? No.

Can Google do phonetic searching? No.

Can Google do cluster searching? No.

Can Google do regional searching? No.

Can Google even get a basic search which uses a minus sign to give you a smaller set of results each and every time? No.

Why Google Wave Crashed and Burned

Also on August 5, 2010: John Hudson’s critique, posted at The Atlantic Wire. This one’s a metapost, citing four reasons for Wave’s failure from four other writers:

· It was a solution looking for a problem—quoting Rob Diana at Regular Geek, but that’s probably the most common thing I heard at the time.

· No one could explain it—quoting “tech guru” John Gruber.

· They never nurtured a core fan base—quoting the Dave Winer discussion excerpted earlier here.

· Companies couldn’t use it—quoting “Scott at Information Overload” (actually a post at Informationoverlord, not at all the same thing).

Not much to add here. Hudson calls Google Wave a “much-ballyhooed e-mail and instant messaging application” and I don’t think that’s what it was at all. Which may be indicative of just how problematic Wave was, pretty much from the start.

Google Wave: why we didn’t use it

Given ars technica’s bizarre dating practices, I can say this “Ars Staff” piece appeared “about a year ago”—but diigo says I tagged it on August 6, 2010, so let’s give it that date.

The ideas in Wave were undeniably cool, the vision was ambitious, and Google backed it. So why did no one use it?

We looked to our own experiences of using Wave for clues as to what went wrong, and we found plenty.

What follows are commentaries by eight different writers and editors based on their actual attempts to use Wave, which may make this the most useful piece of commentary in this roundup. The first and longest comes from Jon Stokes, who “dove right in” as soon as it was available because he thought it would be great for role-playing games, although he was “immediately hit by how slow and wonky the interface was.” The primary interface “sin”? “It crammed a multiple-window-based desktop metaphor into a single browser window.” He kept trying, with little success. He did run into the “feature” that turned me off to Wave without even trying it:

The other problem—and this was a huge issue and a common complaint—was that everyone could watch you type. The live typing was a core part of the Wave protocol, and the developers considered it a critical Wave feature that everyone should just either get over or learn to love. So there was never going to be any way to turn it off and enable a kind of “draft preview” that would let you send complete, IM-style messages. This was a major buzzkill; few people are comfortable in an informal chat where others can watch them type.

Stokes still thinks email “needs to be reinvented, but not quite so radically.”

Others contribute variously useful or odd perspectives. Chris Foresman thinks the problem was the lack of alternative interfaces (really?)—clients comparable to TweetDeck for Twitter. “With only one confusing interface to choose from, Wave just couldn’t garner the mass appeal it needed to supplant more firmly entrenched forms of communication.” I’m having trouble buying that as the primary problem: Do most Facebook users use anything except the awful Facebook interface? (That may be an ignorant question.) Ryan Paul faults the initial lack of support for existing services: “Wave users can really only use Wave to communicate with other Wave users—it can’t serve as a bridge to conventional e-mail and instant messaging.”

There’s more. Since this group of people does a lot of collaborative work, it’s a good case study. Which may make it worth quoting the entirety of the final comment, from Clint Ecker, a project manager/programmer:

Why Wave failed? The very genesis of this article holds a clue: conceived over IRC, sent out via mass e-mail, and collaboratively composed, edited, and compiled in a locally hosted Etherpad. This speaks volumes about how traditional tools are working a lot better for people than Google ever imagined, despite their problems.

Really? Use existing tools because they work and you’re familiar with them? What a notion!

Let’s Celebrate Google’s Biggest Failures!

Gotta love that exclamation point on this August 5, 2010 essay at Search Engine Land by Danny Sullivan. He quotes Eric Schmidt’s comment regarding the Wave closure: “We celebrate our failures.” Sullivan’s take: “When it comes to failures, Google’s celebrating more than you might realize.” He summarizes “important Google products that haven’t made the cut, over time,” starting with Google Wave and working backward.

For each product, I’ve also pulled a “celebratory failure quote.” I don’t mean for that to be as snarky as it seems. It’s meant to illustrate the difference between how Schmidt’s statement sounds and what his company actually tells the world.

I agree. Google’s a company that’s not afraid to take risks and does seem to embrace the idea that along the way, there will be failures. Maybe that’s “celebrating” those failures. But in its statements to the world, Google rarely sounds like it’s celebrating these missteps. It doesn’t really document anything that was learned. It just seems to say as little as possible and move on.

Sullivan’s take on Google Wave: “perhaps one of the most heavily hyped products that Google’s put out, only to have it fall on its face.” Otherwise, he’s mostly quoting large portions of the same post I quoted earlier.

The essay offers similar Google kissoff quotes on several other failures, some of which I’d entirely forgotten: Google SearchWiki, Google Audio Ads, Google Video (a long discussion), Dodgeball, Jaiku, Google Notebook, Google Catalogs (one of the more bizarre ones, as I remember—a “way to search through consumer catalogs”), Google Print Ads, Google Page Creator and Google Answers. He lists a few “next?” cases such as Orkut, Knol, Sidewiki and Buzz, and discusses some successes. Google has since shut down Google Labs, home for several of its interesting failures.

Rethinking failure: Google Wave

This post, by Nicole Dettmar on August 19, 2010 at eagledawg, isn’t really a commentary on Wave and why it failed. She notes Slate’s The Wrong Stuff series, including an interview with Google’s Peter Norvig on the virtues of failed experiments. And then she brings it back home:

From my limited time and perspective in the [library] field thus far I see a lot of the library field as fearing and avoiding failure at almost all costs. Perfectionism can sometimes run so rampant that it squelches any hint of innovation in its path, yet it is innovation that leads to experiments in the first place.

Are libraries so NASA-caliber that failure can never be an option? No. Mark Funk reminded us in 2008 that “We Have Always Done It That Way” isn’t an answer, it’s an excuse. At the same time library science journals seem to follow suit with not publishing about failure often as other journals do in not publishing when drug experiments failed.

I can understand why: it takes a lot of extra time and effort that many librarians do not have to write for publication, and who wants that to highlight a failure? Is there an opportunity for a Wrong Stuff resource of library-related errors and experiments gone wrong so we’re not all reinventing the wheel in isolation from one another? The publish button in WordPress makes the process pretty painless!

This may be as good a place as any to end the discussion of Google Wave and The Middle, with a digression based on Dettmar’s closing paragraphs.

I do not believe “library science journals” are unwilling to publish articles about failures. I would guess most professional peer-reviewed journals in the library field would be delighted to publish well-written, professional articles on failures that have further purpose. I do believe that such articles are rarely submitted to journals.

As for her suggestion in the last paragraph: Been there. Tried that. Tried it more than once. With no success. None. This is hardly surprising. It is human nature and institutional nature. No librarian interested in keeping their job is going to publish an article about how the library did it wrong without getting clearance from the director—and most directors aren’t likely to welcome the chance to air their errors. There are exceptions; there have been a few (precious few) cases where missteps and failed experiments have been documented. But it’s likely to stay rare, in this and almost any other field. (Some librarians are trying this again. I wish them well in the effort; maybe this time it will be different.)

Sneak preview: I currently have two dozen items tagged toward an essay on libraries and failure, so I will be discussing that topic (but not Wave) in the future. Probably not in the next issue, for reasons discussed in another article in this issue, but possibly in the one after that.

Cites & Insights: Crawford at Large, Volume 12, Number 3, Whole # 147, ISSN 1534-0937, a journal of libraries, policy, technology and media, is written and produced irregularly by Walt Crawford.

Comments should be sent to waltcrawford@gmail.com. Cites & Insights: Crawford at Large is copyright © 2012 by Walt Crawford: Some rights reserved.

All original material in this work is licensed under the Creative Commons Attribution-NonCommercial License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/1.0 or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.

URL: citesandinsights.info/civ12i3.pdf