Cites & Insights: Crawford at Large
ISSN 1534-0937
Libraries · Policy · Technology · Media

Selection from Cites & Insights 7, Number 3: March 2007

Trends & Quick Takes

The Snark Factor

Some readers may wonder why something winds up in Trends & Quick Takes rather than My Back Pages or vice-versa. For both of you, the heading above should be a tipoff. My Back Pages is mostly snark, as is the final feature in so many magazines. It’s stuff I find impossible to take seriously or that’s a little too revealing for its own good. Snark certainly appears elsewhere, but it’s rarely the primary reason for discussing something.

At one point, I believe Trends & Quick Takes was mostly about trendspotting. That point is somewhere in the past. Now it’s a catchall for brief essays that don’t fit into one of the current categories (or that fit into a semi-dormant category like The Good Stuff). Stuff that appears here should be at least marginally worth thinking about. If I argue strenuously against something, it’s because I think the argument matters. Otherwise, it would go in My Back Pages.

Now that C&I has passed the Crawford Test (Tom Wilson’s name for my informal measure that an ejournal lasting at least six years can be considered a “lasting title”), I don’t consider Cites & Insights experimental (as I said a while back). That doesn’t mean it’s not continually changing or that I’m not making it up as I go along. Any good magazine or journal changes over time. Why should this peculiar publication be any different?

Speaking of peculiar publications, I should once again salute Ex Libris and Marylaine Block. When C&I began, there were four more-or-less comparable publications: Ex Libris (a weekly consisting of one relatively brief essay), Library Juice (also weekly but typically including multiple segments), NewBreed Librarian (bimonthly, with several essays in each issue) and FOS Newsletter (monthly, with several essays). All were labors of love. All published worthwhile material. All were free online. C&I, uniquely, was designed to be printed.

NewBreed Librarian’s gone. Library Juice is a blog and a book publisher but the ejournal’s gone. FOS Newsletter became SPARC Open Access Newsletter and is now formally part of SPARC. Two independents remain: Ex Libris (not always weekly but still going strong) and Cites & Insights (which has modest sponsorship that comes without editorial control). I trust Ex Libris works to increase Marylaine Block’s profile. I know Cites & Insights makes a difference, sometimes a big difference. I can’t suggest that anyone else should follow in either of our footsteps. Blogs are a lot easier and formal publications look better on your vita.

Finding the Good User-Generated Stuff

Jon Pareles wrote “2006, brought to you by you” in the Music section of the New York Times on December 10, 2006; Nicholas Carr commented on it in “Lost in the shitstream” at Rough Type on the same day. Both discuss user-generated content—all those blogs, most of what’s on YouTube, the “homemade art” on MySpace (and many other sites).

Pareles prefers “self-expression” to “user-generated content” and notes:

It’s homemade art independently distributed and inventively promoted. It’s borrowed art that has been warped, wrecked, mocked and sometimes improved… It’s word of mouth that can reach the entire world.

It’s often inept, but every so often it’s inspired, or at least worth a mouse click…

He cites two “stars” from web self-expression: “video diarist” Lonelygirl15 (a fictional creation, really an actress being paid lousy wages) and a power-pop band “whose treadmill choreography earned far more plays than its albums.” It’s free labor—people supply all that raw material for nothing. But what do you do with it all?

In utopian terms the great abundance of self-expression puts an end to the old, supposedly wrongheaded gatekeeping mechanisms: hit-driven recording companies, hidebound movie studios, timid broadcast radio stations, trend-seeking media coverage. But toss out those old obstacles to creativity and, lo and behold, people begin to crave a new set of filters.

Pareles mentions that user-generated content isn’t exactly new. Remember “America’s Funniest Home Videos”? Now, thanks to the web, it can reach wider audiences more easily and doesn’t have to be ludicrous to “succeed.” The article discusses ways some “traditional” creators are leveraging new tools—encouraging users to remix their songs, for example. He also notes that culture has never been monolithic and that contemporary fragmentation only goes so far. And there are various kinds of new filters.

The open question is whether those new, quirky, homemade filters will find better art than the old, crassly commercial ones. The most-played songs from unsigned bands on MySpace…tend to be as sappy as anything on the radio; the most-viewed videos on YouTube are novelty bits, and proudly dorky…

The promise of all the self-expression online is that genius will reach the public with fewer obstacles, bypassing the entrenched media. The reality is that genius has a bigger junk pile to climb out of than ever, one that requires just as much hustle and ingenuity as the old distribution system.

Carr’s bias can be guessed from the s-word in his post title. It’s a brief post raising questions about the “demand side” for culture:

As the flood of free, immediately and universally accessible user-generated and -filtered content grows, will the audience for well-crafted work shrink? Will we all readjust our tastes and expectations to the easy pleasures of the shitstream? I don't mean to sound an overly baleful note here. It would be a mistake (thank goodness) to think the motivations of the artist and the craftsman can be reduced to a set of signals from the marketplace. But it would also be a mistake to think those motivations exist outside the influence of those signals. Even in the sphere of culture, demand drives supply.

By my lights, Carr is way too baleful, both because traditional media isn’t disappearing and because there are all sorts of filters. My immediate response was to jot down “Pandora”—and that’s a good response. Pandora is not a user filter. It does not rely on the supposed wisdom of the crowd. It does work, extremely well in my experience. So, for that matter, do some “wisdom of the crowd” filters. I don’t believe Netflix’ collaborative recommendations dumb down movie selections; if anything, quite the opposite.

Of course demand drives supply to some extent—but with lower-friction alternative distribution systems, niche suppliers can find niche demand more readily than in the past. This, I believe, is a good thing. That belief tempers my impatience and occasional headshaking over the vast quantities of garbage on the web.

Comments on Carr’s post are at least as interesting as the post itself. John Baschab was first up, noting that as he gets older, “my scarcest resource (by far) is time,” so that brands and filters begin to matter more. “When I was a teenager I had what seemed to be endless amounts of time to discuss stereos, music, cars and other such pursuits. Not so now, and so brand-as-crutch is the order of the day.”

Seth Finkelstein points out that, for text resources at least and in the larger web arena, there are new gatekeepers (filters)—and they work pretty much the same as the old ones did and do. “Captnswing” makes the excellent point that Sturgeon’s Law was with us even before the Gutenberg Bible—and that, historically, wider access to information and lowered barriers to entry for publication have been a total benefit for mankind. That, I believe, is a good and true point.

Maybe the bottom line is that old media aren’t going away. Old media continue to filter in the ways that old media filter, absolutely blocking some forms of genius while promoting both good content and pablum that fits a formula. A variety of filtering techniques for new media are emerging. Those techniques seem no more likely to block true innovation and, because specialized audiences can yield specialized filters, somewhat more likely to encourage diverse creation. But then, I’ve always been a Candide at heart—and I know I could never do something like Cites & Insights, reaching 1,200 to 24,000 people, via traditional media.

Remix Culture?

Barbara Fister’s “Money doesn’t talk—it silences” (December 20, 2006 on ACRLog) relates to “Finding the good stuff…” above, but only indirectly. She’s concerned about a new venture reported by the Wall Street Journal. The new venture, Attributor, takes your text and scans the web to see whether your stuff has been copied—so you can charge for the use or demand that it be taken down.

Quoting from the WSJ piece:

Attributor appears to go further than existing techniques for weeding out unauthorized uses of content online . . . [Company execs] claim to have cracked the thorny computer-science problem of scouring the entire Web by using undisclosed technology to efficiently process and comb through chunks of content. The company says it will have over 10 billion Web pages in its index before the end of this month

Fister’s comments (excerpted):

So if I understand this, they copy web pages to see if they’ve been copied. And this kind of indexing, unlike the Google library project, doesn’t violate anything because media companies might make money from it—and the heck with innocent bystanders whose work is copied into this massive database without permission. They aren’t in business, so their rights don’t matter. Right?

What corporations don’t seem to understand is that a lot of the people involved in remix culture aren’t interested in monetizing “intellectual property”…

The secret of Web 2.0 is that it revels in creation without worrying about artificially limiting access by charging a toll. This outpouring of creativity challenges the standard wisdom that the only incentive for creators is cash…

When will big media catch up to the idea that someone posting thirty seconds of Jon Stewart on YouTube isn’t in it for the money—but drives audience to the show? If they could figure out that we’re not all into monetizing, and stop spending so much money trying to make us stop, they could relax and reap the benefits of fans growing their market….

Some of us don’t want to maximize revenue—we just wanna have fun.

This is a little tricky, as is “remix culture” in general. There’s nothing wrong with me saying C&I has a BY-NC license, which means you can take it, “remix” it, whatever, as long as I get credit and you don’t charge for it. Attributor would be of no use to me. As for a 30-second clip of Jon Stewart, that should count as fair use. But saying “we’re not all into monetizing” doesn’t automatically mean you get to “remix” anything that suits your fancy. There’s plenty of suitably licensed material for those who “wanna have fun,” and nobody’s stopping actual creators from not monetizing their work. You do not get to say how other creators should behave. You do not get to say “I want to reuse your creation without payment even though you don’t want me to, and it’s OK because I’m just having fun.” That’s not remixing, it’s expropriation. (Yes, I’m a copyright centrist who believes creators have rights but not unlimited rights; hasn’t that been obvious to all but control freaks?)


Placeblogs

I first heard of this term in a January 3, 2007 post with that title by George Needham at It’s all good. There’s a new site, Placeblogger (www.placeblogger.com), edited by Lisa Williams and “brought to you by the Center for Citizen Media, Pressthink, and H2otown.” H2otown is one of the first placeblogs, coming from Watertown, MA. Here’s part of how Placeblogger defines placeblogs:

Placeblogs [are] about the lived experience of a place. That experience may be news, or it may simply be about that part of our lives that isn't news but creates the texture of our daily lives: our commute, where we eat, conversations with our neighbors, the irritations and delights of living in a particular place among particular people…

Placeblogs spring from a fiercely non-generic America that's not about big-box retailers or the type of polarizing discussion about politics, culture, and the economy that's the product of journalism that happens at the 30,000 foot level. Often, they are a delightful and vivid look at cities, towns, and neighborhoods from an insider's point of view…

The site’s home page is an odd mix of posts about local weblogs, headlines from such blogs, featured blogs and other stuff; I found it bemusing. There’s a Google Group for placebloggers. The most useful aspect of the site, though, is probably its directory of place blogs (the site uses both “place blogs” and “placeblogs”). There’s a feed that provides the first 200 characters of text from place blogs, which would probably be a very busy feed; the FAQ notes the reason for the 200-character (roughly 35-word) limitation: “we are serious about sending visitors to local sites, not keeping them here at Placeblogger.”

As of February 19, 2007, the directory included blogs from 197 countries, including 4 from Niue, 7 from Brazil, 82 from Canada—and 1,646 from the U.S. (Originally, it was limited to the U.S.; no doubt some listings from some countries, particularly English-speaking countries, will grow rapidly as word of the site spreads. Australia has 17; the UK, 25. I’d be surprised if those weren’t both triple-digit numbers by year’s end.) Digging deeper, U.S. blogs organized by state include, for example, nine in Alaska, 27 in Ohio, 64 in Washington, 88 in Texas, 95 in Florida, 111 in New York—and 161 in California.

The state listing isn’t quite alphabetical (it’s probably alphabetic by two-letter code but displays as names), but every state’s represented; the fewest blogs are in South Dakota (two) and Nebraska and Wyoming (six each). It’s an inclusive list; Las Vegas and Reno each show five blogs, and in Reno’s case I’d suggest that two or maybe three really qualify.

It’s an interesting concept. Should a true placeblog be multiauthor (two of Reno’s are personal blogs)? Should the term encompass local wikis as well (the subtitle at Placeblogger suggests that it should: “towards an annotated world with blogs, wikis, forums, maps…”)?

Needham’s take:

Wouldn't it make sense for libraries to be prominent in such placeblogs, or maybe even to start one if it doesn't already exist? Imagine how much the library staff could learn about its community by participating actively in this!

I’m inclined to agree—and wonder whether library blogs could be considered placeblogs in some cases. Of course, each placeblog is a distinct entity, and there may be placeblogs that have the wrong “feel” for the local library (e.g., a placeblog run by realtors might not only feel wrong, it might be out of the question), but as the directory and concept spread, it’s worth considering for almost any public library.

Mark Lindner comments:

For once, a potential online community that seems like we should be in it from the start. Certainly public libraries, but even academics, especially state institutions, should be involved.

I'm not always the quickest guy, but I don't immediately see any downside to libraries being in this space.

Eric Hellman comments on a top-ranked placeblog (Baristanet), which covers his own town, and offers notes based on reading that blog since its inception:

A placeblog fills the same role as have “local rags” and “corner stores.” Think of a placeblog as a police blotter with innuendo, gossip and an underpinning of real reporting… The weekly “open thread” is quite popular, and there are regular commenters who are really quite rude. Baristanet is always up-to-the minute when there are power outages, trees falling, houses burning, bears roaming, police cars chasing, houses selling, developments proposing, Sopranos filming or locals (such as Yogi Berra, Stephen Colbert) or ex-locals (Tom Cruise) making news.

Baristanet is financially viable because it provides a very focused audience for local advertisers. I doubt anyone is making a huge amount of money on Baristanet, but I bet it is attractive compared to freelancing…

It seems to me that a vital role for libraries is to start thinking about the best way to archive these placeblogs, just as they play a vital role in preserving local newspapers. Baristanet is a vibrant portrait of the life of my community in a way that my local newspaper has ceased to be.

I think this is a trend worth watching and maybe participating in. Hellman’s final paragraph is worth thinking about in terms of your library’s role in local history. I’m not ready to start a Mountain View placeblog—but then, we still have a local weekly, at least.

Marylaine Block wrote an excellent essay on “The Library’s Place in Place Blogging.” It’s the February 16, 2007 edition of Ex Libris, #297. Go read it.

Maybe More than the Long Tail Deserves

There was a flurry of activity last summer connected to Chris Anderson’s “Long Tail” stuff—from his nonsensical claim that “the era of the blockbuster is so over” and typical Wired overgeneralization to a batch of reviews and comments once the inevitable book actually appeared. Since libraries, magazines and book publishers have been in the “long tail” business for many decades (without needing to give it a hip name!), I won’t spend too much space here.

A New Yorker review by John Cassidy lauds Anderson but notes that Alvin Toffler said much the same thing in 1980 (and was wrong, saying “no more mass production…no more mass entertainment”) and points out that in this “post-blockbuster” era, seven of the ten all-time top-grossing movies came out since 2000, as have four of the top-selling novels. The reviewer concludes (correctly, I believe) that blockbusters and niche products will continue to coexist (as they always have, at least since blockbusters were possible)—and notes Anderson’s blind spot, his failure to recognize that online commerce (access to “the long tail”) is dominated by oligopolies.

Tim Wu is tougher in his Slate review (July 21, 2006): It’s titled “The Wrong Tail” and subtitled “How to turn a powerful idea into a dubious theory of everything.” As you’d expect from a Wired editor, Anderson feels the need to carry the concept further, even if you grant that it’s original or applies to as many media as he claims. Wu says that when you finish the book, “there’s one question you won’t be able to answer: When, exactly, doesn’t the Long Tail matter?” As with so many business books, The Long Tail “commits the sin of overreaching”—including claims that offshoring is “the Long Tail of labor,” online universities are “the Long Tail of education” and there’s even a “Long Tail of national security.” Most of which is just silly. Wu gets it right: The power-law curve (to use the non-Wired name) matters to business “1) where the price of carrying additional inventory approaches zero and 2) where consumers have strong and heterogeneous preferences.” I’ll suggest a third: Where it’s plausible to make a large number of discernibly different products. 175,000 book titles in a year? No problem—especially with PoD. 175,000 different brands of dish detergent? Big problem: That’s at least 174,000 more than could be differentiated. Wu mentions the oil industry: the power-law curve just doesn’t apply. Even in information technology, the need for standardization frequently counts for more than heterogeneity. You don’t want a choice of 10,000 different routers or ten different incompatible USB connection options; you want one that will work.

Nicholas Carr cites Lee Gomes at the Wall Street Journal in an indirect argument with Chris Anderson. Gomes claims the Long Tail’s effects have been overstated. There was a critical change between the original Wired article and the book. The article says, “More than half of Amazon’s book sales come from outside its top 130,000 titles.” The book says, “About a quarter of Amazon’s book sales come from outside its top 130,000 titles.” When asked directly, Anderson admits that there are no current examples of sales of “misses” exceeding sales of “hits”—according to Gomes, Anderson says that won’t happen for at least a decade at either Netflix or Amazon. So what? Well, Carr notes that Anderson’s comparison (130,000 is roughly the stock of a typical Barnes & Noble) is flawed: “There have always been specialized bookstores, selling everything from religious and spiritual books to textbooks to foreign-language books to used and out-of-print books to poetry books… And there have always been small presses…selling books directly, through the mail.” Unless we know how the power law functioned before Amazon, claims that the internet changed everything (in this case) are suspect. It’s a new channel but not necessarily a change in buying patterns.

Does the power-law curve function in most media? Sure it does. That’s neither new nor particularly surprising. What’s somewhat new is that the curve can keep trailing off to the right—the “long tail”—in TV (with more cable channels and short video on the web) and, more effectively, in movies thanks to Netflix. For magazines, there’s nothing new here, although even smaller niches can be served entirely online. (According to some experts, most future growth in magazines will be in niche titles, but those have always represented more than 99% of magazine titles—there just aren’t 2,500 mass-market magazines.) For books, it’s not clear whether the internet makes the “long tail” more important. It is clear that most books have been niche books ever since thousands and tens of thousands of books came out each year. As for sound recordings, we have Big Media to thank for distorting the music scene in the 1960s through 1990s, dropping most artists to make way for bigger and bigger promotion budgets for a few Big Acts. Local clubs always acted against this “fat head” tendency (you could leave out the space there) and the internet acts further to restore a normal state, where thousands and tens of thousands of musicians work at smaller scale. Meanwhile, to be sure, Chris Anderson has the kind of Bestseller that supposedly doesn’t exist in a “long tail economy.” I’m sure he’s taking that irony to the bank.
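The power-law framing can be made concrete with a toy calculation. Here’s a minimal sketch of how the “tail share” depends on the steepness of the curve, assuming sales follow a Zipf-like distribution; the catalog size and exponents are made-up parameters for illustration, not actual Amazon or Netflix figures:

```python
# Toy model: if sales of the rank-k title are proportional to 1 / k**s,
# what fraction of total sales falls outside the top N titles?
# Catalog size and exponents below are hypothetical, not real data.

def tail_share(catalog_size: int, top_n: int, s: float) -> float:
    """Fraction of sales from titles ranked below the top_n best sellers."""
    weights = [1.0 / k**s for k in range(1, catalog_size + 1)]
    return sum(weights[top_n:]) / sum(weights)

# A steeper curve (larger s) concentrates sales in the "fat head";
# a flatter one pushes more sales out into the long tail.
for s in (0.8, 1.0, 1.2):
    print(s, round(tail_share(1_000_000, 130_000, s), 3))
```

The point the sketch illustrates: whether “misses” ever outsell “hits” depends entirely on the exponent, which is an empirical question—exactly the kind of before-and-after data Carr notes we don’t have.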

Quicker Takes

The headnote for this edition talks about the handful of freely available unsponsored ejournals in the library field that existed in 2001, and the even smaller handful that continues in 2007. I didn’t discuss sponsored free ejournals but those also come and go. I’m sad to say that one of the best, D-Lib Magazine, seems to be in jeopardy. D-Lib has been around for more than a decade. The January/February 2007 issue where “Current and future status of D-Lib Magazine” appears is Volume 13, Number 1/2. In the past, D-Lib has operated through government grants from DARPA and NSF, and more recently contributions by CNRI. That appears not to be a sustainable model. The magazine’s moved to a bimonthly schedule for the next few issues, “at which point we will critically evaluate the options available to us.” D-Lib is trying to raise $100,000 and may consider advertising or author charges. I’d hate to see D-Lib disappear. It’s true that no publication can sustain itself without funding, whether “e” or print, particularly when editing is involved—and it’s pretty clear that there aren’t too many crazy people like Walt Crawford around (and I wouldn’t be willing to take on something like D-Lib as a pro bono effort: there’s too much involved).

Ø    It’s always a breath of fresh air to encounter a cautionary note in Technology Review. One such appeared December 1, 2006: “What comes after Web 2.0?” by Wade Roush. The first paragraph’s a tipoff: “Many researchers and entrepreneurs are working on Internet-based knowledge-organizing technologies that stretch traditional definitions of the Web. Lately, some have been calling the technologies ‘Web 3.0.’ But really, they’re closer to ‘Web 2.1.’” Roush discusses the Semantic Web and notes the “gargantuan effort that would be required to tag all the Web’s data with metadata” as well as narrower projects such as FOAF, Piggy Bank and the Amazon Mechanical Turk, “a kind of high-tech temp agency” that pays low rates to do simple tasks computers can’t handle. He concludes: “Most of these projects are so far from producing practical tools—let alone services that could be commercialized—that it’s premature to say they represent a ‘third generation’ of Web technology.”

Ø    A long time ago (December 7, 2005—the printout was mislaid), Wired News had a piece by Dan Goodin that probably still applies: “Old rips: May they rest in peace.” It starts with an undergrad who recently converted his CD collection (5,000 songs worth) to music files—for the third time. Why? “After spending years painstakingly compiling the perfect music library, he came to realize that the sound quality of the computer files left plenty to be desired.” He started with 128K MP3, then moved to 128K AAC, the format Apple uses for iTunes. But when he listened to CDs created from those files, he realized that something was wrong—even the more-efficient AAC loses too much sound quality at 128K. The article touts LAME, a particular MP3 encoding algorithm. I don’t know much about LAME; I do wonder whether the primary benefit isn’t using a higher bitrate. I’ve now reripped almost everything at 320K using MusicMatch’s Fraunhofer MP3 compression…and that was after initially using 192K. I know I can tell the difference between 128K and 192K (and between 128K and the original CD). I think I can sometimes tell 192K from 320K. I’m certain 320K always sounds at least as good (to me) as the original CD. (“At least as good”? Yes. There is a theoretical basis for believing that a copy could sound better than the original—in addition to euphonic distortion—but that’s another topic covered previously, and I’m still not sure I believe it.)
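The bitrate tradeoff is simple arithmetic: a constant-bitrate MP3’s size is just bitrate times duration. A minimal sketch (the four-minute track length is an assumption; the bitrates are standard nominal MP3 rates):

```python
# Back-of-the-envelope MP3 file size for a constant-bitrate rip.
# The four-minute track length is a made-up example.

def mp3_size_mb(minutes: float, kbps: int) -> float:
    """Approximate size in megabytes: bits per second * seconds / 8 bits per byte."""
    return kbps * 1000 * minutes * 60 / 8 / 1_000_000

for kbps in (128, 192, 320):
    print(kbps, round(mp3_size_mb(4.0, kbps), 1))
# A four-minute track runs roughly 3.8 MB at 128K versus 9.6 MB at 320K,
# so the quality insurance of a high bitrate costs about 2.5x the storage.
```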

Ø    Paul Boutin wrote “Where’s my Google PC?” at Slate on July 3, 2006. He believes it’s coming—either a supercheap PC that’s essentially a smart terminal to a Google-based set of services, or just a complete set of services with storage to match. But consider the tease: “It’s coming. It’ll be great. You’ll hate it.” That’s oxymoronic, of course: It can’t be “great” for you if you hate it. Boutin thinks it’s “the future of computing” at one level—well, heck, he spent more than a decade “working on and around network-based computers and thin clients,” so he’s biased toward their advantages. He even claims network computing is faster, one of those “it depends” claims. He’s pretty sure Google plans to build the “world’s best network computer” and he’s definitely hot for the idea. But he also knows why they won’t work, even if he doesn’t understand those reasons. “First, there’s the inexplicable human urge to own stuff and have it in your possession.” What makes that inexplicable? People do like to control their data. Boutin may think that’s silly, but that’s his problem. Second, a network computer only makes sense if you have a “fast, flawless network connection.” Then there’s the killer: “Are you going to let someone else handle all your data?” He will, “but that’s my blind faith in statistics.” Well, that and a long career-based bias toward network computing. His closing sentence says more than enough for me: “How about it: Would you trust Google to protect your e-mail, your tax documents, and your family photos?” Scratch that future, at least in my opinion.

Ø    An interesting set of findings from Statistics Canada about internet users, cited on August 2, 2006. Heavy internet users are avid consumers of other media, spending about the same time watching TV as nonusers—and spending more time reading books. But heavy internet users “spent substantially less time in social contact with others.” Maybe the term should be “browsing alone” rather than “bowling alone.” Oddly, they seem to enjoy social events, clubs and organizations more than non-users—but they don’t socialize much. Heavy users spend “considerably less time on paid work and domestic activities” and “less time to sleeping, relaxing, resting and thinking” than nonusers or light users. I leave you to add your own comments—and I’m guessing these findings would apply reasonably well in Canada’s smaller (but far more populous) neighbor to the south as well.

Ø    I’m not going to spend a lot of time on the podcasting study from Pew Internet, but it is worth noting a key finding and the significance of error margins. To wit: In a February-April 2006 study, 7% of internet users said they’d downloaded at least one podcast; a smaller August 2006 study showed 12%. One percent of users report downloading podcasts on a typical day. Now consider the margins of error: Plus or minus 2% for February-April, 3.5% for August. Did podcast usage increase between February-April and August? Not in a statistically significant manner, as far as I can see: the probable actual percentages overlap at 8.5% to 9%. If you said “around a tenth of internet users apparently use podcasts, but only about a tenth of that tenth do so frequently,” I’d buy that. Any claims of rapid growth require more study.
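The overlap argument above is easy to check with a couple of lines of arithmetic, using the Pew percentages quoted in the text (the helper functions are mine, not Pew’s):

```python
# Do two survey estimates, each with a reported margin of error,
# produce overlapping ranges? Percentages are the Pew podcasting
# figures discussed above; the helpers are illustrative.

def interval(point: float, margin: float) -> tuple[float, float]:
    """Range implied by a point estimate plus/minus its margin of error."""
    return (point - margin, point + margin)

def overlaps(a: tuple[float, float], b: tuple[float, float]) -> bool:
    """True if the two ranges share any values."""
    return a[0] <= b[1] and b[0] <= a[1]

feb_apr = interval(7.0, 2.0)   # (5.0, 9.0)
august = interval(12.0, 3.5)   # (8.5, 15.5)
print(overlaps(feb_apr, august))  # prints True: the ranges share 8.5% to 9.0%
```

Since the two ranges overlap between 8.5% and 9.0%, the apparent jump from 7% to 12% could be sampling noise rather than real growth, which is the point being made.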

Ø    I ran into a lovely, lovely piece by Paul Di Filippo on his experience writing a commissioned article for Wired Magazine: “The joy of corporate journalism, by J. Ives Turnkey.” You’ll find it at preface.pdf; the source page, www.pauldifilippo.com/articles.php, also shows the article Di Filippo turned in. You’ll have to go to Wired to read the article as published. I’m a biased source here: I know Di Filippo from his sharply-written items in the Magazine of Fantasy and Science Fiction and elsewhere, so I’m predisposed to believe he turned in a good piece (reading it does nothing to dispel that belief). He tells a complex story about how writing gets turned into “supreme Wiredness,” the homogeneous gee-whiz style that makes me happy my freebie subscription’s about to expire. He cites six general trends in the many forced edits: All references to “the little people” were eliminated; ambiguity was minimized; facts were cloaked in “hipness”; the past was dismissed as unimportant; quotidian matters were de-emphasized (“boredom does not exist in the Wired cosmos”); drama was injected into basically undramatic situations. Sure sounds like the Wired I know and despair of, the magazine in which the future’s always clear and techno-thrilling, the past doesn’t matter and the only people are People Who Matter.

Cites & Insights: Crawford at Large, Volume 7, Number 3, Whole Issue 87, ISSN 1534-0937, a journal of libraries, policy, technology and media, is written and produced by Walt Crawford, a senior analyst at OCLC.

Cites & Insights is sponsored by YBP Library Services,

Opinions herein may not represent those of OCLC or YBP Library Services.

Comments should be sent to Comments specifically intended for publication should go to Cites & Insights: Crawford at Large is copyright © 2007 by Walt Crawford: Some rights reserved.

All original material in this work is licensed under the Creative Commons Attribution-NonCommercial License. To view a copy of this license, visit or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.