Cites & Insights: Crawford at Large
ISSN 1534-0937
Libraries · Policy · Technology · Media


Selection from Cites & Insights 6, Number 5: April 2006


Bibs & Blather

Diamond Anniversary

Seventy-five issues. Not a bad run for this odd publication. I’m delighted to note that Cites & Insights is included in a new portal for free ejournals, Open J-Gate (www.openj-gate.com). Thanks to lbr for the tip.

This special issue contains “seventy-five facets”—little essays averaging just under 300 words each. Most of them (40) are new. A dozen are excerpted from Walt at Random posts. Nine are brief excerpts from old speeches (at least 13 years old); those have the year of the speech in the heading. Fourteen are excerpted from “disContent” columns that appeared in EContent; I’m so far behind on updated republication of those columns that I thought it was reasonable to use smaller chunks of a few that may be relevant to libraries and librarians.

Perspectives

Seventy-five Facets

1. Attack of the zombie copy

A lovely, lovely piece by Erin Kissane at A list apart, posted October 24, 2005 (www.alistapart.com/articles/zombiecopy). It’s about, well, zombie sentences, such as those that begin:

Leveraging world class infrastructure strengths, mature quality processes and industry benchmarked people management processes…

With a start like this, how could a sentence fail to be undead? That and three other examples were taken from live websites, and that’s not too surprising. As Kissane notes, “the corruption has spread even beyond the vasty deep of the internet: the back of the milk carton in my refrigerator reads ‘Few beverages can beat milk in terms of a total nutrition package.’”

Kissane passes along a course of action when you’re attacked by zombie content: Kill the modifiers; “Determine what manner of monster you’re dealing with” (get some sense of the actual sentence); “Hit ‘em in the head, right between the eyes” (see whether there is any content in all that mush, and revive it if you can). The advice comes from Dr. Herbert West of Miskatonic University—and if you’re not familiar with H. P. Lovecraft and the Cthulhu mythos, a little exploration will show why West should be well qualified to deal with the undead.

Can you get from “Every executive knows that constantly delivering superior customer value is an imperative to veritably creating shareholder value” to “If you want to make lots of money, you have to please your customers more than the other guy does”? Maybe, with the advice here, you’ll find your way. But what can you do with this (also a real-world sample):

Incorporating our corporate culture into our business processes and customer needs, we continue to leverage our exceptional and effective work practices, improve operational effectiveness to meet business objectives and create win-win situations for our employees and shareholders.

After some analysis, Kissane concludes it’s hopeless: “Time to destroy it and start over.” Or is that your library’s mission statement?

2. Information and artifact [1989]

Some modernists assert that the only proper role of the future library is to provide facts on demand, and that libraries that fail to transform themselves will be left behind in a rush to online databases.

Surely any librarian who deals with sound recordings must distinguish between facts and information on one hand, and information and artifacts on the other. The facts of a performance are the score and the performers, possibly also the venue. Those facts are readily available and important to some musicologists, but they don’t really constitute the performance. The recording itself is not simply information; it is also entertainment and enlightenment.

…We’re a long way from the day when a patron can access not only the online catalog from home, but also the text of a book in a form that most patrons would find pleasant or even acceptable for reading. I don’t expect to see that as commonplace before I retire. But we’ll surely see that a long time before we’ll see dial-up access to CD-quality renditions of musical performances. [“User interface situations for online music catalogs,” preconference on online catalogs, Music Library Association annual conference, March 15, 1989, Cleveland. Note “dial-up”—I was wrong on potential online access to CD-quality renditions, but that requires broadband, not even plausible for home use in 1989.]

3. Best (and worst) gadgets of 2005

This one, by Robert Strohmeyer at Wired News, downloaded December 30, 2005, is fun—and odd, because it’s not quite clear which are “best” and “worst.” I guess the 5-point rating and descriptive terms mean anything 3.5 or above is a “best,” anything 2 or below is a “worst,” and those in the middle are…in the middle.

“Bests” include the Microsoft Xbox 360, Apple’s video iPod, Sony’s PSP, the Sonos Digital Music System (highest-rated of all, which is…peculiar) and, barely making it with 3.5, the Palm Treo 650 and RIM BlackBerry 8700c Electron.

Worsts? Motorola’s Rokr E1 (there seems to be general agreement about this “iTunes phone”), Gizmondo (who?), and Nintendo Game Boy Micro.

In the middle: Apple iPod nano (particularly its vulnerability to scratches and tendency to crack), Sony Ericsson W800i music phone, Samsung YH-999 portable media player—and the oft-praised Sling Media Slingbox, which may let you transmit your favorite TV shows to any (one) PC on the internet (but only in real time), but at a dismal 320x240-pixel resolution.

OK. They’re all gadgets and described as such. Who am I to judge?

4. Attitudes toward Public Libraries 2006

ALA commissioned a survey of 1000+ adults regarding public libraries, completed in 2006; they’ve done this before (most recently in 2002). The survey’s available (PDF, 13pp.) at ALA’s website, with the questions as asked and the results.

It’s not a discouraging set of results, although in some areas public libraries don’t do quite as well as in OCLC’s “Perceptions” online survey. Some highlights:

•    37% used public libraries six or more times last year, including 25% 11 or more times; another 29% used public libraries one to five times last year. That’s close enough to Perceptions’ 73% “at least once a year” and 31% “at least monthly.” Any way you cut it, at least two-thirds of adults use public libraries at least annually (also true in 2002)–and around a quarter of them at least monthly. Those are great numbers for a public institution.

•    81% of respondents who visited libraries took out books. People go to libraries for books: That was pretty obvious in the Perceptions study as well. Next highest: Consult a librarian (54%), check availability via computer (50%), use reference resources (45%).

•    People mostly use libraries for education and entertainment. When forced to choose one, figure 32% education, 25% entertainment.

•    70% are extremely (26%) or very (44%) satisfied with their public libraries; only 5% are only a little or not at all satisfied. 70% high satisfaction for a tax-funded public good: That’s worth treasuring! (OCLC’s study showed 80% favorable.)

•    More than a third of respondents put public library benefits “at the top of the list” of tax-supported services, including schools, parks and roads! (53% put them in the middle.)

•    While these are somewhat leading questions, people find lots of things about public libraries very important or somewhat important. Most impressively: services are free (95%), a place where I can learn for a lifetime (94%), provided information for school and work (87%), enhances my education (88%), a source of cultural programs (82%), and a community center (81%). Library as place, library as collection of free books–people appreciate what public libraries have been doing well for a long time.

•    As stingy as people can be (18% wouldn’t answer this question), 52% think public libraries should have at least $41 per capita funding, with a surprising 19% putting that at $100 or more. 68% support increased public library funding in their own communities.

•    There’s no question that people appreciate space-related benefits of libraries (84% important for two space-related questions) and the free resources and lifelong learning (96% and 95%).

•    “Some people think libraries will no longer exist in the future, because of all of the information available on the internet. Other people think libraries will still be needed despite all of the information available on the internet. Do you think libraries will no longer exist in the future, or do you think they will still be needed?” 92% said “libraries will still be needed.”

According to survey analysis, the more frequent the user, the more satisfied they are with libraries–and use of library services has grown in almost every category, specifically including “taking out books” (the largest increase since the 2002 survey).

My take? Reaching out to new audiences in new ways is wonderful–but if there’s a resources crunch, Sunday hours, evening hours at least two or three days a week, and a strong book budget just might better serve that two-thirds of Americans who use public libraries, who appreciate them as community spaces, who mostly check out books, who do so more now than they did four years ago, and who are willing to pay more for their public libraries. [Walt at Random 2/24/06]

5. Best and worst punditry of 2005

This one—Wired News, Joanna Glasner, downloaded December 29, 2005—is about forecasting, and it’s surprising how many short-term forecasts are wrong. Sometimes they’re uncanny: “Nobody will make money on Wi-Fi, but it will become ubiquitous anyway.” That’s from Robert X. Cringely (consistently spelled “Cringley” here).

The inescapable Rob Enderle predicted that Google might acquire AOL and Novell; Glasner calls this “pretty close” based on Google’s 5% stake in AOL. You could also call that 95% wrong, or 98% since Google hasn’t touched Novell. Enderle also said low heat and noise would replace performance as key drivers for desktop PCs (wrong) and email users would start to question whether email’s benefits outweigh spam hassles (also wrong), along with the true-but-obvious prediction that LCD display prices would drop dramatically.

How about Michael Robertson (Linux)? He predicted that Wal-Mart’s $500 Linux notebook would be just the beginning: by the end of the year, “every NFL city will have a store you can walk into and buy a Linux desktop or laptop.” That might be true, if you find sufficiently obscure stores—but Wal-Mart discontinued the Linux notebook. Robertson also said Windows Media Center would suffer the blue screen of death (wrong) and that Longhorn (Vista) would be delayed to 2007 (too early to tell). IDC analysts forecast a 2% decline in semiconductor revenues; apparently, revenues grew 7%—but they got a 10% PC market growth about right.

6. Ads around content: Pushing the limit

[Following a discussion and analysis of the portion of a screen actually made up of content at various websites, as opposed to advertising and overhead—at the time, as low as 24% on a 1280x1024 screen for ZDNet and Salon.]

I’m not anti-advertising by any means. One strength of local print newspapers is their local advertising, which serves important purposes in maintaining communities. I enjoy creative TV ads. Relevant ads enrich many of the magazines I receive. I’ve written articles based entirely on the ads in past issues of computer magazines.

It’s a matter of balance and approach. The “good old days” of 20% ads on network television didn’t seem bothersome; today’s 25%-33% seems high. Give me 65% of the right kind of ads in a specialized magazine and it won’t bother me a bit; push 35 minutes of ads per hour on a radio station, and I’ll tune to NPR.

When I’m trying to read content on a Web site, the site becomes annoyingly content-free if editorial content is less than 40% of the page and hopeless at anything under one-third. I suspect that 20% to 25% represents a high water mark for reasonable ad placement—but only if those ads don’t directly interfere with the stories. Push too many ads, push the ads too directly into my face, or spread out your thin editorial content over too many screens—and I’m gone. So, I suspect, are other busy users. Once gone, it’s hard to get us back. [Concluding paragraphs of “disContent” column from EContent 24:4, June 2001]

7. Best tech moments of 2005

That’s the title for Kevin Poulsen’s Wired News roundup of the “top 10” (downloaded December 27, 2005). Skipping a couple of arcane items (Michael Robertson hires DVD Jon, for example), it’s hardly surprising that a Wired outlet would pick “the $100 laptop”—whether it ever emerges as a useful machine and whether third-world children need laptops more than food and medicine are irrelevant. The blogging of Katrina: OK—but it’s unclear to what extent bloggers spread truth or rumors from New Orleans.

The animated raunch hidden in Grand Theft Auto: San Andreas—that’s one of the best tech moments? Judge John Jones’ rousing decision not only expelling “intelligent design” from Dover biology classes but raking its proponents over the coals—sure, but that’s law and science, not “tech” as I understand it.

“Lost opens the hatch, finds an Apple II.” That’s a top tech moment? Doesn’t say much for technology in 2005, does it? The broadcast flag being “defeated”—yes, for now. The fact that a 93-year-old retired telegraph operator could transmit a message faster than a 13-year-old SMS text messager is a sideshow, at best, but the closer is good: “NASA rovers survive a full Martian year.” Geez. Out of ten “best tech moments,” I’d say at most 2.5 would come into play in any good year for technology. At least one of these appears to fit into the “worst” category, though.

8. CDs and DVDs: Apples and kumquats

Alan Wexelblat at Copyfight posted “Death of the CD?” on April 9, 2005. He raises a question I’ve thought about, albeit not in those terms, as follows:

I’m traveling this week back and forth to Portland. In the airports are a series of shops advertising “$20/2.” Reading the fine print shows that you can buy two DVDs or CDs for USD 20. This is, in my mind, a sign of the impending death of the CD.

Look at the difference: with the CD you get some music tracks, maybe some liner notes if you’re lucky, and… um, well, that’s about it.

Or, for the same $10 you can get a couple hours of video, plus commentary, alternate tracks, possibly multiple languages, maybe a behind-the-scenes or other feature….Explain to me again why you’d buy a CD?

Two of the essential differences:

•    CDs are malleable–any CD with the “Compact Disc Digital Audio” imprint must not have copy protection (according to Philips), so can be ripped to MP3 or a lossless codec, have tracks combined with other tracks to make custom CD-Rs, have tracks downloaded to portable players, etc., etc. You can’t do anything with the music on a music DVD except listen to it on a DVD player (unless you’re a hacker and don’t mind violating DMCA).

•    We (many of us) listen to certain songs or pieces of music hundreds, maybe thousands of times. Very few people watch a movie more than a few times (possibly excepting some kid’s movies).

The medium-to-medium comparison just doesn’t work: DVDs and CDs serve fundamentally different purposes. [Walt at Random, April 11, 2005]

9. Can I get a phone that’s designed for making telephone calls?

According to Media Life for March 6, 2006, an RBC Capital Markets poll of some 1,000 people found that about 75% said they had no interest in watching TV on their cell phones. Anyone surprised by that? But there’s more: 70% don’t anticipate using their cell phones for musical entertainment. These findings are mildly distressing to mobile carriers, who expect to make billions of dollars by selling overpriced video and music downloads.

I wonder about the other side of the poll. I’m guessing there’s a significant population (30%? 40%?) with a different need: A cell phone that just makes phone calls but does that very well.

I have a related gripe. We had to replace our cordless phone at home; the old one gave out. We got one that Consumer Reports rates highly. As with all of the new units we’ve seen, it has a handset that’s styled like a candy-bar style cell phone, where the old one had a handset similar to a traditional handset—you know, curved and all.

The new one’s fine if you get the tiny slots lined up just right with your ear and mouth. Otherwise…hello? I can’t hear you very well… (Our moderately old Motorola V60 just-a-cell-phone is better: It’s a clamshell design, which simulates the curve of a traditional handset.)

We now make most of our outgoing calls in another room, on a (gasp) corded phone that probably cost us $15 eight or ten years ago. The handset is lighter, easier to hold, and curved so that you always know where the mouth and ear should be. Can’t walk around the house talking—but we can carry on conversations intelligibly. What a concept.

10. Establishing a context: overall screen design [1989]

The crucial importance of a coherent, predictable user interface is that it allows the patron to become expert smoothly and rapidly, and to extend past experience to cover future needs. Once that happens, the user interface essentially fades into insignificance, which is as it should be. A good user interface rapidly becomes transparent to the patron, so that the patron can spend his or her time and energy working with needed information, not relearning the online catalog.

The goal of a good online catalog is not to entertain the patron or to impress the patron with the quality of the user interface. The goal of a good online catalog is to stay out of the way. [“User interface situations for online music catalogs,” preconference on online catalogs, Music Library Association annual conference, March 15, 1989, Cleveland.]

11. Citizenship, the purple pill, and libraries

That’s the title of Pat Max’s “On my mind” in the February 2006 American Libraries. Max says that librarians should “think about what it is that we do best and how we might best make a contribution to our various constituencies/communities,” as a preferable path to two others: “Turn out the lights; lock the doors” or “Focus on increasingly effective electronic systems and search techniques.”

So far, so good. Yes, libraries need to look at their missions again. Yes, they need to determine how they contribute to their communities and constituencies. (Don’t be surprised if “providing free books and a public space” turn out to be the most important contributions for most public libraries, with some “lifelong learning” stuff thrown in for good measure.)

I agree with most of the column, certainly including his third suggestion, “Management of libraries should reflect the idea of democratic citizenship, not the current practices of CEOs.” Libraries are not businesses: That point seemingly needs to be made over and over again.

My problem? Max’s assertion that “Turn out the lights; lock the doors” is “where we are headed if we do not decide to take some form of action.” He later refers to “the fate of the abandoned mall”—but aren’t malls being repopulated as citizens return to the city?

I buy the desirability, even necessity, of paying attention to what it is your library does best in your community as a civic organization. I think most public libraries and librarians do honor learning and citizenship, and do pay attention to their communities. I don’t buy that “we are headed” for the death of public libraries based on what I see and hear about today’s public libraries. They can be better—but they’re generally good and appreciated now.

12. Sampling the circle of gifts

The Internet and the Web support new media—not generally to replace old media but to provide new ways to communicate, new ways to tell stories. Despite the commercialism of the Web, it also provides new tools for the circle of gifts. That works two ways:

•    Some new forms would be impossible or ridiculous without the Web and the Internet. In some cases, these new forms—these new media—are naturals for the circle of gifts.

•    Replicating some old media in the Web environment makes the circle of gifts more plausible. A free print newsletter requires significant underwriting for printing and postage; a free electronic newsletter requires almost no direct financial support….

Traditionally and currently, free goods have less impact than priced goods. Part of the relationship between a magazine and its readers comes from the express choice that readers make through subscriptions, even though subscriptions may be a trivial portion of the magazine’s costs. It’s harder to build that relationship when there’s no price.

Counterbalancing that problem is the freedom that comes from zero pricing. Creators of Weblogs, online newsletters, lists, and the like, don’t need to spend much time fretting over the size of their audience or whether it has the right demographics. They do need enough publicity to reach an audience, but if the “right size” is 50 people in 20 countries, that may be good enough. Creators can experiment more freely, changing courses to suit their needs and preferences; if those experiments make sense, the appropriate readership will follow.

Participants in the circle of gifts do it because people matter; we’re all in this together. The circle of gifts leavens a largely capitalist society—and contemporary technology can make that circle more effective.

And, as so many participants in the circle of gifts have said, it’s fun. [Portions of “disContent” column from EContent 24:9, November 2001]

13. Does music have a genome?

“That song sounds familiar” heads the Los Angeles Times story (February 3, 2006) by Steven Barrie-Anthony. We grow up with music that helps identify us—“But time passes, classrooms fade to cubicles, and a vast landscape of new music turns foreign and unexplored.” Pandora may change all that.

What’s Pandora? A streaming internet radio service (Pandora.com), either $36/year or free with ads, that lets you set up your own stations—with a difference. You enter one or more of your favorite songs or musicians and Pandora starts streaming songs that are “similar.” One user interviewed in the story says that in an hour “he heard more new music he liked than he had in the last decade.” You don’t just listen; you can fine-tune the station by signaling thumbs up or thumbs down on specific songs. (Two thumbs down for an artist excludes the artist—unless you’d previously explicitly included the artist.) You can define up to a hundred stations. You can’t legally save the streams for later use on portable devices. Quality is 128Kbps, better than FM but not CD-quality. You can share stations with others and mark any song as a “favorite,” and there are easy ways to buy songs or albums. You can’t call up songs or skip too many songs in an hour: that would violate Pandora’s licenses.

The theory behind Pandora is fascinating. The Music Genome Project, behind Pandora, is “a 6-year-old effort by a group of musicians to identify the hundreds of traits and qualities that form the building blocks of music,” then map each song, creating its “genome.” So, for example, if you like the Raspberries and Todd Rundgren, you might like Dwight Twilley. So far, the musicians that do this stuff—and get paid $15 to $17.50 an hour—have classified about 300,000 songs by 10,000 artists. The project invites CDs from unknowns; it’s another way for them to reach people who like “that kind” of music.
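For the curious, here is a minimal sketch of how matching songs by trait vectors might work in principle. Everything in it (the trait names, the scores, the songs) is invented for illustration; the real Music Genome Project uses hundreds of traits, scored by those paid musicians, and its own matching method.

import math

# Hypothetical "genome": each song scored 0.0-1.0 on a few named traits.
# The real project reportedly uses hundreds of traits per song.
songs = {
    "Song A": {"vocal_harmony": 0.9, "jangly_guitar": 0.8, "tempo": 0.6},
    "Song B": {"vocal_harmony": 0.8, "jangly_guitar": 0.7, "tempo": 0.5},
    "Song C": {"vocal_harmony": 0.1, "jangly_guitar": 0.2, "tempo": 0.9},
}

def similarity(a, b):
    """Cosine similarity between two trait vectors."""
    traits = set(a) | set(b)
    dot = sum(a.get(t, 0.0) * b.get(t, 0.0) for t in traits)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def station(seed, catalog):
    """Rank every other song in the catalog by similarity to the seed."""
    return sorted(
        (title for title in catalog if title != seed),
        key=lambda title: similarity(catalog[seed], catalog[title]),
        reverse=True,
    )

print(station("Song A", songs))  # ['Song B', 'Song C']

Note that the ranking depends only on the songs’ own trait vectors, not on anyone’s listening history, which is what distinguishes this approach from collaborative filtering.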

As the article points out, this can have odd effects, as when an “electroclash” band used as a starting point results in songs by Lindsay Lohan. (One email was from someone who “just found out that I apparently like Enrique Iglesias. It was a really good song. Shameful.”)

One professor quoted in the story comes up with what I regard as a narrow-minded perspective. He thinks Pandora is reactionary, running “counter to the democratizing trend of the Internet” because it uses experts (the musicians) instead of collaborative filtering. “Pandora will succeed only if its centralized system proves superior to the wisdom of the crowd.” [Emphasis added.] That’s nonsense. There’s no reason collaborative filtering and something like the Music Genome Project must be mutually exclusive.

I suspect I’m more of a Pandora person. It’s not always right—and I don’t really listen to music long enough to give it a fair test—but it’s pretty good. And, of course, it works on the song level.

Here’s a little of what I said about Pandora at Walt at Random:

My [first] “station” started with Randy Newman (surprise, surprise), to which I quickly added Tom Paxton and James Taylor. I only listened for about 20 minutes–but damned if it wasn’t hard to move away from the station. Sure, some of the songs were from the artists I chose. But the others were, with one exception, right on the money–and they were all songs and artists I would not have known about.

That’s still my favorite station; right now it’s playing “Canned goods” by Greg Brown. Who? I defined two other stations, with mixed success. Consider:

My second Pandora station, Mitchell Scaggs Cooder. Right now it’s playing “Ooh Baby” by Gilbert O’Sullivan–and I think I can see why. This station tests my own likes, since I only like most of Joni Mitchell, maybe 1/3 of Boz Scaggs, and some unknown but large fraction of Ry Cooder. Hmm. “Back on the Road,” Earth Wind & Fire. Makes sense, and I’d never make that connection. I see how people find Pandora a trifle addicting…

Try something outrageous. See what happens. The price is right.

14. Books are widgets?, or,
all publishers are not identical

Junger at Pop goes the library reported on a conference session at which “Pamela Redmond Satran, author and contributing editor at Parenting magazine, gave us the real deal on publishing fiction and non-fiction.”

The problem with “the real deal” is when it gets cast as universal. Take this sentence: “To publish non-fiction, you need to approach an agent with a proposal (and it is nearly impossible to get published without an agent).”

I’ve never had an agent. ALA Editions not only doesn’t require an agent, I think they prefer not working through one. I suspect the same is true for other library publishers–and I wouldn’t be surprised if it’s true for most niche publishers.

After all, an agent normally gets paid by taking a slice of those huge advances you’re going to get for your book. You’re not going to get a huge advance from a library publisher, or at least I never have.

Would I be rolling in dough if I’d hired an “independent agent,” presumably one who gets paid up front instead of taking a percentage? I’m guessing not. [Adapted from Walt at Random, April 14, 2005]

15. Failed tech trends for 2005

Loyd Case’s ExtremeTech story (December 28, 2005) admits that ExtremeTech is as guilty of hype as anyone. He lists ten failed trends. Some are fairly arcane, but here goes:

The BTX motherboard moves stuff around to reduce noise and improve cooling. So far, it’s only showing up from a few companies. HDTV on a PC seems like a natural, but Big Media is doing its best to keep that from happening, at least for anything but broadcast TV. Digital audio on portable players gives up too much sound quality to save space.

64-bit home computing: Now there’s a disappointment. Yes, XP Pro 64 exists, but drivers are few. Movies on high-def. optical discs were supposed to arrive last year—and didn’t. One lesson: “Consumers don’t want multiple standards.” (That’s also one reason that DVD burners haven’t caught on as rapidly as expected: the DVD-R/DVD+R confusion.) One continuing problem: There’s no agreement on the “advanced” DRM for high-def. discs.

Attempts to copy protect music CDs continue to work badly, and Case says, “It’s not going to work.” I agree (after reading Ed Felten’s explanations). Going back to portable digital audio, iPod’s market share was supposed to decline last year—and it’s surprising that it hasn’t, given Apple’s demonstrated ability to reduce your rights in the music you’ve already “purchased.”

Then there’s the digital home. “All the technology ingredients exist today” but “no one has come up with a compelling reason.”

Getting back to geekier issues, apparently lots of people believed Gmail was the “coming of a new email paradigm”—but it isn’t, and most people don’t use Gmail accounts as their primary email accounts.

Finally, there’s SLI—and I don’t understand that one well enough to comment.

16. Revenge of the indies:
Looking for the next Netflix

[This extract included mostly as an indication that the only new thing about “the Long Tail” is the term and its exploitation by a Wired editor. “Last count” is now 50,000.]

Netflix encourages independent films, at least those with enough backing to produce DVDs (which doesn’t cost much these days), in three key ways:

Ø    Netflix offers everything. The company buys almost every DVD on the market, with multiple copies of most DVDs. At last count, some 10,000 different discs were available.

Ø    The Netflix collaborative filtering and recommendation models encourage independent films by treating them seriously.

Ø    Independent films appear to be treated with respect. Good ones have glowing reviews not outweighed by ads and phony adulation—there are no ads on the site. In essence, Netflix provides a more level playing field.

That doesn’t mean smaller films get as many viewers as big ones, but it does mean they’ll reach an audience. Netflix shows how many people have rated each film; those numbers can be revealing. As this was written, Shrek had 38,000 ratings and A Knight’s Tale had 30,000—but Kingdom Come had 3,235; What’s Cooking 1,806; and The Closet 2,031...

Netflix isn’t a boutique operation. They’re just as happy to send you Independence Day as Urbania. We would still be renting from our local independent video/DVD store, as we like the personal touch—but that store fell prey to increased rents and Blockbuster’s special deals, as have most independent rental outlets. If you can’t save the indie store, at least you can help the indie producer: Netflix seems to be succeeding at that, whether by plan or by chance.

What’s next? How can we encourage a greater range of voices in other media? What combinations of old and new technology can make this work? Endlessly repeating the same news stories through hundreds of syndicated outlets just increases the media concentration. There must be better ways. [Portions of “disContent” from EContent 25:8, August 2002]

17. Five blogs I believe deserve more attention

My criteria for this somewhat random, definitely incomplete sample (no insult intended to those I missed): On the feed I use, Bloglines shows fewer than 100 subscribers; Technorati shows fewer than 30 sites linking to the blog; and I believe these people have interesting things to say.

•    A wandering eyre (wanderingeyre.com), “a bibliophile’s musings on books, libraries, the world, life, and anything else that comes to mind”—although “Jane” (Michelle Boule) might remember to click on Categories more often when posting. (448 Uncategorized? Oh well, I write a random blog, so…)

•    Biblioblather (biblioblatherblog.blogspot.com) by “lislemck” in San Diego.

•    Blisspix.net (blisspix.net) by Fiona Bradley, one of several Australian library blogs that probably deserve more attention in the U.S.

•    The gypsy librarian (gypsylibrarian.blogspot.com) by Angel. Hmm. That’s two in Houston. “I am hoping to use this as a tool to reflect and learn more about being a librarian and educator. I will likely feature items about librarianship as well as things I read in my other areas of academic interest or of interest as a reader.” Angel has another blog, The itinerant librarian, for other matters.

•    Libraryola (www.zammarelli.com/chris/libraryola) by Chris Zammarelli, “the sounds of library science.”

18. Chaos in the marketplace [1989]

OS/2 was introduced at the same time as IBM’s PS/2, leading many people to confuse the two concepts. Don’t make that mistake. PS/2 computers are doing OK; OS/2 is doing miserably. Bill Gates of Microsoft proclaimed a couple of years ago that 80% of us would be using OS/2 in 1990. Short of a governmental coup and squads of OS/2 soldiers armed with RAM chips and tommyguns, that outcome seems unlikely.

[After discussing several other terms such as DR-DOS, EE, “OS/3”] OOPS is what Bill Gates will say to Microsoft stockholders when 1990 sales of OS/2 are announced. It also stands for Object Oriented Programming System, a very different way of looking at programming that is generally at the heart of new user interfaces and will probably influence most PC programming in the future…

[Diskettes] I’ll skip over the four sizes and umpteen capacities of floppies; that’s too depressing to discuss, except to say that your best bet when ordering a new MS-DOS computer is to have two diskette drives installed: one 5¼" high-density drive (1.2MB capacity) and one 3.5" double-sided drive (1.44MB capacity). That will handle most of what comes your way; I refuse to even think about the Zenith 2" diskettes.

[Video “standards”—HGC, CGA, MCGA, EGA, VGA] There is little question that the future belongs to VGA [and SVGA]… The square pixel makes software development more coherent; the compatibility preserves all existing software. [“Microcomputer choices,” Online ’89, November 8, 1989, Chicago.]

19. Five more blogs I believe deserve more attention and that I frequently disagree with

I frequently disagree with these folks, sometimes vehemently—which may be more reason they deserve attention. I also read these folks and think they’re saying things worth listening to, whether I agree or not. The first one listed violates one of the criteria I used for the other five blogs: It has a lot more than 30 sites linking to it! Never mind…

•    blyberg.net (www.blyberg.net) by John Blyberg: “herein are thoughts and the occasional {foo} from an Ann Arbor District Library geek.”

•    Library voice (libraryvoice.com) by Chad F. Boeninger, “online musings of a librarian, father, aspiring musician, & amateur techie.”

•    Quædam cuiusdam (www.wallandbinkley.com/quaedam/) by Peter Binkley, “mild opinions, tentatively offered, on library tech.”

•    Tom keays (www.tomkeays.com/blog/) by Tom Keays.

•    Digitize everything (www.digiwik.org/digitize-everything) by Michael Yunkin.

20. The joys of not posting

Twice this morning, making the usual morning sweep of Bloglines, email, and LISNews, I wrote responses to something I read. Once, I considered copying a link and writing a discursive response here.

The first two times, I finished writing what I had to say, looked at it, and clicked away from the comment page–not posting or submitting the comment. In the third case, I didn’t even bother to prepare a draft, then delete it.

Details of the situations are unimportant. Suffice it to say that, in one case, I caught a whiff of “poor, poor, pitiful me” in the response, laughed at myself, and moved on. In the other, I realized that I was responding to an anonymous coward who was doing a good job of trolling–and just moved on.

There’s a lot to be said for responses not posted, and blog essays never blogged. Writing it down is great as a safety valve. Submitting it for anyone else to see is frequently pointless (and sometimes dangerous). Back before ubiquitous “communications” paths, the safety valve was just writing down something and crumpling it up, and the danger of overcommunication was limited by the difficulty of reaching beyond your friends. [Excerpted from Walt at Random, April 19, 2005]

21. Formal definitions for bloggers

Jon Garfunkel “cleaves out” three levels of a definition in an October 14, 2005 Civilities post (civilities.net/Bloggers-Definitions), excerpted here:

i) The loose definition: Any person who engages in public writings/conversations primarily via online media…

ii) The strict definition: Any person who keeps and updates a weblog or “blog.” In its leanest definition a blog is a regularly-updated website [that] organizes content in reverse-chronological order…

iii) The tight definition: Any person who meets the strict definition and also self-publishes it (by themselves or with a group)… [Excludes people who contribute a blog for a larger website.]

Garfunkel notes that the vast majority of bloggers meet the tight definition—but the loose definition is “often used in conversations and published accounts to talk about the actions, rights, or other aspects of the group.” Thus, for example, one assertion about the power of blogging confuses bloggers with Freepers (contributors to the Free Republic site). He also notes that some “tight” bloggers are suspicious of “strict” blogs—those who blog for newspapers, companies, etc. He offers additional commentary, worth reading on its own.

I find the loose definition unfortunate, although I suppose it’s better than “netizen” (almost anything would be). Most of my public writing is done online (although by no means all), but most of that is not done on a blog, but in this ejournal. I got into a mild kerfuffle at LISNews over that issue, with another poster advocating the loose definition as a fait accompli. I think that’s wrong, both in philosophy (it erases useful distinctions) and in fact (I haven’t run into many people who think all online writing is blogging). Let’s take it a little further: Are LISNews and /. blogs? I don’t believe they are, even though they organize content in reverse-chronological order. But that’s a tougher discussion.

22. Contented readers and non-print magazines

A year ago, I discussed the strength that magazines gain from their relationship with readers and the quandary of whether Web-based “magazines” could succeed. My key question: “How can an online artifact establish the same relationships as a good magazine?” I asked another question in passing: “Is it possible for a nonprint magazine to succeed?”

Pure digiphiles would say those are silly questions: Anything you can do in the real world, you can do better digitally. Some print magazines are pushing the question by offering paid online versions that claim to be precise replicas of the print versions. But by now most thoughtful people should be aware that content and physical carrier are related in complex ways….

When you subscribe to a magazine, you begin a relationship. You pay a modest sum in advance. The publisher sends you an interesting package at regular intervals. If you like the package, you may pay more attention to the ads that really pay for the magazine—and you keep renewing your subscription. The publisher can show demographic data to advertisers and guarantee a certain minimum exposure; advertisers can work in a medium that minimizes “viewer” dissatisfaction and maximizes the possibility that messages—sometimes detailed messages—will get through. Everyone wins….

Experimenters have tried to produce magazines in almost every new medium. There were magazines on videocassette, which sank without a trace. I suspect there were magazines on vinyl records, and there were certainly audio CD magazines. Media extras work in some cases, particularly with as inexpensive, light and durable a medium as CD. Several music magazines include an audio CD with each issue and several computing and game magazines include CD-ROMs with each issue. But those are extras; the core content is the print magazine…. [and, after discussing a DVD-based magazine that failed after eight “quarterly” or “bimonthly” issues over three years]:

As before, I invite examples of successful commercial non-print magazines, either online or in other media…Nobody responded to the January 2002 challenge; maybe 2003 will be different. [Portions of “disContent” from EContent 26:1, January 2003. Three years later, no examples have been received.]

23. Free and legal media

Tom Merritt posted “Free and easy publishing on the web” on November 4, 2005 at Cnet.com. He writes about how easy it was to do a podcast—not only to make it but to get it hosted. He focuses on Ourmedia (ourmedia.org), which promises to “host your files, of any size and any amount, forever.” Forever is a long time, to be sure.

In case there’s some question, loads of nontext material is legally available on the web—not only original creations at Ourmedia and Creative Commons (which has its own directories), but also Internet Archive’s Prelinger and other public domain archives, and many more. When I tried it, Ourmedia was so slow as to be difficult to use (even for something as simple as browsing the “top 100 images”—if that’s not canned for easy delivery, why offer it?).

I won’t say free music, images, and video at these and other sites will replace traditional media. (Neither does Merritt; as he notes, “you still have to produce good content for anyone to want it.”) The neat thing about all this is that people who are creative but uninterested in playing the industry game—which is tougher for other media than for print—have outlets. They probably won’t get rich, they may not reach millions of people, but they can get started. Free.

24. Freedom to tinker’s predictions for 2006

I say Freedom to tinker rather than Ed Felten because Alex Halderman participated. There are 22 predictions in all (at www.freedom-to-tinker.com/?p=953); I’ll mention a dozen (rewording some), noting that all 22 are interesting.

Some are, while worth stating, so nearly certain that they’re hard to discuss: DRM technology will still fail to prevent widespread infringement. “In a related development, pigs will still fail to fly.” Watermark-based DRM will “make an abortive comeback” but is still fundamentally infeasible. Copyright issues will still be stalemated in Washington. Push technology will return—and most people still won’t like it. “Digital home” products will founder.

A second category is less certain, although I tend to agree. The RIAA will quietly reduce the number of end-user lawsuits. Planned incompatibility will be as criticized as planned obsolescence. HD-DVD and Blu-ray “will look increasingly like the second coming of the Laserdisc,” not the DVD. Social networking services “will morph into something actually useful.” It will become trendy to say the internet is broken—particularly among those pushing bad public policy.

Then there are ones where I haven’t the slightest notion, but respect Felten’s track record. The Google Book Search case will settle. A name-brand database vendor will go bust, unable to compete against open source. And broadcasters will start simulcasting free TV over the internet, while other efforts to distribute approved video over the internet “will disappoint.”

25. The future? It ain’t here yet!

Mary Ellen Bates wonders “whether and how I’ll adapt to the New Infosphere” in this “info pro” column in the January/February 2006 EContent. She harks back to the Firesign Theatre’s “The future…you may already be there”—when the future involved a computerized president and “errant PDP-10 microcomputers.” (How many people remember when the PDP-10 was a “microcomputer”? In computing power, certainly—but remember the size?)

Bates isn’t quite ready for ubiquitous computing. She offers some useful examples, but also some examples where a lot of us may not desire ubiquitous computing. As for what some people seem to feel is “necessary” today—“Do I really need to monitor the news every ten minutes? Will my life change if I’m not responding to email every quarter hour?” She recognizes the downside of living in the present: failing to take the long view. “Some issues require more than just information; they require contemplation and time to simply let the matter percolate for a while.” Her close, reason by itself to go read the column in full:

I wonder whether this is the future of always-on Web access: instant access to quick information but less time to ruminate, ponder, and reflect. The future is coming faster than it used to, and I wonder if we’re ready for it.

26. Technology and library resources [1989]

The first thing to remember about new technologies and media is that they don’t all succeed. It’s a lot easier to promote an innovation than it is to build the innovation—and it’s a lot easier to build an innovation than it is to make it successful. Recent history is littered with the skeletons of thrilling new technologies that never made much difference—and with media that never succeeded or that succeeded only briefly.

Second, and perhaps more important than the first: Successful new technologies and media usually complement older technologies and media, at least at first. It’s fairly rare for a new technology or medium to replace an older one rapidly, unless the older one was seriously flawed. There are exceptions, but that’s the way to bet.

For any medium, we must ask three major questions. First, will resources that use a given medium survive well into the future?... Second, will the medium in general continue to be active, or will resources in the medium become orphans?... Third, if a given medium goes into decline, will libraries still be able to provide access to the orphaned materials? [Speech at California Library Association, November 12, 1989, Oakland]

27. The Google Search subpoena in perspective

That’s the title of a Seth Finkelstein piece you really should read if you’ve read about the government subpoena for a chunk of Google searches, the one Google’s fighting. This piece, written January 26, 2006, is not at Infothought; it’s at Google Blogoscoped, blog.outer-court.com/archive/2006-01-26-n76.html.

He concludes that the subpoena relates to the Supreme Court’s remand of COPA to a lower court: “For us to assume, without proof, that filters are less effective than COPA would usurp the District Court’s factfinding role.” So the DoJ’s expert witness wants to review URLs available through search engines to estimate the prevalence of sites harmful to minors and measure the effectiveness of content filters in screening out those sites.

That may be a problematic quest—but it’s better than “the previous state-of-the-art in research evidence here,” typing the words “free” and “porn” into Google and getting an absurdly high and totally worthless number.

Almost certainly, this subpoena has nothing to do with investigating terrorism or undermining confidentiality. Finkelstein notes that, pragmatically, such efforts “would be surrounded by secrecy,” not carried out in open court. Thus, the acquiescence of Yahoo! and MSN (after assuring no personally identifiable information remained) probably wasn’t a privacy issue, and if Google finally acquiesces, it won’t be there either. Finkelstein notes a beneficial outcome of what may otherwise be much ado about nothing: it’s raised public awareness of overall issues with personal data stored by search engines.

28. That wasn’t what I checked out!

So a kid checks out a Disney videocassette. Goes home. Sees more than the kid expected: “hard core pornography.” Parent complains to library. Library person says it’s difficult to sabotage a videocassette that way. I thought the news story (a real one) was curious on two grounds:

•    The mother chose to call the media and police, not the library–and still hasn’t returned the tape. She talks about “documenting” that this actually happened. To what end?

•    The library person’s assertion that it’s difficult to sabotage a videocassette this way. Hmm. Tape over the open record-protection slot: two inches of adhesive tape and two seconds. Put the tape in a VCR. Record over what’s there. I believe most blank VHS tapes include an instruction sheet mentioning that you break the tab out of the record-protect slot to prevent accidental rerecording; it doesn’t take a rocket scientist to figure out how to enable recording on a prerecorded cassette. Any idiot could do this; some idiot apparently did.

That’s one rarely-mentioned advantage of DVDs over videocassettes. Unless someone went to the trouble of producing a phony DVD and managing to print a label side that was indistinguishable from the commercial release (possible, but a hassle), you can be reasonably certain that what you see is what you’ll get on the screen: There’s no way to “rerecord” a manufactured DVD. (I suspect that you could tell the difference between a faked DVD-R and a pressed DVD visually; I know that’s true for CD-Rs–but I haven’t used DVD-Rs, so can’t say for certain.)

This sort of thing apparently doesn’t happen very often, although it could with any videocassette rental outlet or library, because there aren’t that many sickos out there with this particular bent. Or maybe it does happen, but most people don’t make a big media/police deal out of it.

In any case, there’s not a thing the library could do to prevent it, other than getting rid of all its videocassettes… [Walt at Random, April 22, 2005]

29. How can you live without the internet?

Very well, thank you—or that’s what roughly a third of U.S. families say. I think they’re right.

Antone Gonsalves posted “U.S. hitting a ceiling on internet households” at TechWeb (seen via Yahoo! News) on February 24, 2006. It shows the results of a new survey of 1,000 households, presumably as accurate or inaccurate as any other survey of that size.

The vast majority of U.S. households that are not online have no interest in the Web, an indication that Internet penetration has stalled… about 36 percent of U.S. households were not online, and only 2 percent intended to subscribe to an Internet service this year.

Cost was a factor in very few cases. Of those not connected, 31% didn’t need access at home because they have it at work. 18% said they’re not interested in anything on the Web. Only 8% said they weren’t sure how to use the web. 39% chose “other reason,” which the pollsters say is “usually the consumer’s way of saying they’re not interested.” (The pollsters didn’t offer “have access at my public library” as an option, apparently. Why am I not surprised?)

So how does the research director at the polling outfit spin this? If you’re not online you can’t contribute to the “national dialogue on the Web,” which has become “a forum for sharing ideas and opinions on many issues affecting individuals and the nation as a whole.” These households “don’t have easy access to information that could help them find better jobs and prices on goods and services.” To put it bluntly, “you are economically disadvantaged” if you’re not online.

Someone point me to actual “national dialogues” as opposed to millions of dangling conversations among small groups. Tell me it’s wrong to shop locally and keep your friends and neighbors employed. Convince me that there aren’t tens of millions of people who want to live where they live and who wouldn’t be going online to job-hunt regardless.

Two-thirds is pretty good penetration. My read is that most of these people are making decisions that make sense for them at this point. And that pollsters should spend less time deriding those who answer their surveys for not giving the right answers.

30. Building partnerships: Adding dialogue to professional writing

[While this column was addressed to econtent companies, I suspect it speaks to the virtues of adding patron feedback and advice to library systems—with appropriate controls and filters.]

…I am suggesting that the right kind of user-generated content can enrich and augment the best professional content, particularly when it results in a dialog that adds light rather than heat to a topic….

Continuing substantive dialogues make sense for substantive stories—not news summaries, gossip, or all the other “content” designed for Web readers’ supposed brief attention spans.

Reader contributions that seek to expand on or respond to substantive stories must be signed with real, traceable names. That’s nearly universal practice for newspaper and periodical reader contributions, including letters to the editor. A reader may request (and the site may grant) anonymity in the published or posted form for reasons of personal or national security, or to protect whistle-blowing, but not to avoid embarrassment. I believe this is particularly important if content Web sites move to encourage true reader dialogues... People with something serious to say should be willing to stand behind their statements.

Serious commentary should be featured appropriately, with links to the original article and the original author’s further thoughts—if any—directly following the reader’s submission. This may turn into a chain involving several authors… [Portions of “disContent” from EContent 26:2, February 2003.]

31. How do search engines handle decaying sites?

Here’s a fascinating research article that doesn’t fit neatly in any C&I category: “Observed web robot behavior on decaying web subsites,” by Joan A. Smith, Frank McCown, and Michael L. Nelson, in D-Lib Magazine 12:2 (February 2006) (www.dlib.org/dlib/february06/smith/02smith.html).

The researchers set up four web subsites, each with 954 URIs in 30 directories, with random-content pages that look reasonable and a number of small images. Scripts went through subsites following predetermined patterns, causing the subsites to “decay” over time—deleting files and links to those files. At the end of 90 days, all of the subsites were empty.
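The article describes the decay scripts only in outline. Here is a rough sketch of what one day’s decay step might look like, assuming a cron-driven Python script, HTML pages under one subsite root, and random deletion standing in for the authors’ predetermined patterns; the path and daily rate are invented, not taken from the paper.

import os
import random
import re

# Invented layout and rate: the real subsites had 954 URIs in 30
# directories and emptied over 90 days on a predetermined schedule.
SUBSITE_ROOT = "/var/www/subsite1"
PAGES_PER_DAY = 10

def decay_step(root, n=PAGES_PER_DAY):
    """Delete n randomly chosen pages, then scrub links to them."""
    pages = [os.path.join(dirpath, name)
             for dirpath, _, files in os.walk(root)
             for name in files if name.endswith(".html")]
    victims = random.sample(pages, min(n, len(pages)))
    for path in victims:
        os.remove(path)
    # Drop anchors that point at deleted pages, so surviving pages show
    # a smaller but consistent site rather than broken links.
    dead = {os.path.basename(p) for p in victims}
    anchor = re.compile(r'<a href="([^"]+)">.*?</a>', re.DOTALL)
    def scrub(m):
        return "" if os.path.basename(m.group(1)) in dead else m.group(0)
    for path in set(pages) - set(victims):
        with open(path, encoding="utf-8") as f:
            html = f.read()
        with open(path, "w", encoding="utf-8") as f:
            f.write(anchor.sub(scrub, html))

if __name__ == "__main__":
    decay_step(SUBSITE_ROOT)  # run once daily, e.g. from cron

Deleting the links along with the files matters: presumably it lets the logs distinguish spiders that politely follow the shrinking link structure from those that keep requesting remembered URIs.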

After the four-month test period (ending 30 days after the subsites were empty), they analyzed the logs, focusing on the behavior of the three search engines that appeared to do full crawls of the sites (Google, Yahoo!, MSN). The report includes lots of graphs on what they found, some of them animated. (One interesting factoid: the Internet Archive and Alexa apparently never crawled the subsites.)

They discovered a “toe-dip” function: a spider hits the top-level directories, then comes back at a later date to traverse the entire site—or at least most of the site. (Most spiders did not traverse entire subsites, although they were neither very large nor very deep.) Search engines showed a slight preference for HTML over PDF; less than one-third of images were crawled at all.

The study has problems (stated clearly) and suggests further study. Those concerned with either how heavily spiders load a web server or how thoroughly sites are represented on the Big Three search engines should read the article and consider implications.

32. Finding the people behind the tools

I’ve been exploring several search engines and metasearch engines… I start with ego searches (what—you don’t?). AllTheWeb and two or three other engines offered my home page as the first hit on “Walt Crawford”—but with a difference. Instead of the meaningless excerpt that displays at Google (taken directly from a home page that is mostly links), AllTheWeb offered a useful description of who I am and what I do at the site.

My immediate reaction: “Where did that come from?” I knew the text didn’t come from the page itself, if only because it calls me a librarian (which, technically, I’m not). It got stranger when I noticed exactly the same text on more than one search engine.

I eventually figured it out: The summary was from Open Directory, picked up when the page was referenced—and, of course, a human being wrote the Open Directory summary. Open Directory uses computers to amplify and collate the work of people. AllTheWeb’s use of Open Directory summaries linked to pages retrieved by searching further amplifies that work. The key is the network of volunteers that classifies and describes Websites for Open Directory.

Automatic classification is one of those computer capabilities that’s frequently predicted and sometimes claimed to exist. You see it at several search engines and metasearch engines, with Northern Light an early example—automatically clustering a set of search results into categories that are created on the fly.

Maybe automatic classification works when you use it. When I’ve tried it, the results are sometimes impressive and sometimes ludicrous, in a manner that human classification could almost never be. Computers work with words, phrases, and context in a limited manner. They cannot now and, I believe, will never understand the meaning and significance of entire paragraphs or Websites. As a result, automatic classification is never more than a rough and erratic approximation.
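As a concrete illustration of why word-level processing stays rough, here is a toy version of on-the-fly result clustering. It is invented and far cruder than anything Northern Light shipped, but the failure mode is the real one: it cheerfully files river banks and savings banks under a single “category.”

from collections import Counter, defaultdict

# Toy "on the fly" clustering: label each result snippet with its most
# frequent non-trivial word. Snippets are invented; illustration only.
STOPWORDS = {"the", "a", "of", "on", "in", "at", "and", "to", "for"}

def cluster(snippets):
    groups = defaultdict(list)
    for text in snippets:
        words = [w for w in text.lower().split() if w not in STOPWORDS]
        label = Counter(words).most_common(1)[0][0]  # top word = "category"
        groups[label].append(text)
    return dict(groups)

results = [
    "bank erosion on the river bank",
    "bank fees at the savings bank",
    "river rafting near the river",
]
print(cluster(results))
# {'bank': ['bank erosion on the river bank',
#           'bank fees at the savings bank'],
#  'river': ['river rafting near the river']}

A human indexer would never group those first two; a word-counter can hardly avoid it.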

That’s part of the reason that good article indexes will always be better than full-text searching: Because human beings assign subjects and create summaries, and human indexers can understand rather than merely process…

We need to see the people behind the tools at any content site. Your content doesn’t create itself, and if your site doesn’t involve editing and original writing, then I’d just as soon stick with Google News.

Show us your people. Put bylines on articles. Link those bylines to photos and bios, or at least a sentence or two about the writer. Offer a link so we can find out about your editorial staff and philosophy: Why does content appear here, and why does it appear the way it does? That content may appear on a computer, but the computer’s not responsible; don’t hide behind it. [Portions of “disContent” from EContent 26:3, March 2003.]

33. Know thy patronage

I don’t want to steal any thunder from the book Chrystie R. Hill and Steven M. Cohen are writing, but they’re running a blog to support that effort (librariesbuildcommunities.org), so the February 27, 2006 post with the title above should be fair game. (I’m guessing it’s going to be a dynamite book, and encourage people to add that blog to their aggregator.)

Steven Cohen quotes a newspaper article about ways that local libraries deal with overdue materials and quotes Marilyn Hinshaw (Eastern Oklahoma District Library System) saying there are good reasons libraries around there don’t charge overdue fines. “You have to look at the demographics of a community.” There’s more, but that’s the key. The post goes on:

We’ve heard it time and time again. Don’t base your policies on what other libraries are doing. The make-up of the community will help define library policy.

Note those key words “help define.” I don’t like the concept that a library’s collection and policies should be 100% based on the current desires and politics of the locals (and I don’t believe Steven and Chrystie would advocate such a stance). I think the “long collection” and the need to serve minority interests also come into play. But every good library is distinctive, and every good library does respond to (and involve) its community in its collection building, policymaking, and service decisions.

34. Compact disc: Good for a generation [1989]

Today’s mass publishing medium for high-quality sound is, of course, the CD… We can fairly safely assume that CDs will be an important mass medium for at least a generation and probably longer. People really don’t switch gears that fast; CDs have been with us somewhat longer than people may realize. Philips and Sony introduced the specifications in 1980; they reached the American market in 1983. [Speech at California Library Association, November 12, 1989, Oakland. CDs reached mass-medium status by 1986; I define a “generation” as 20 years. I’ll go so far as to argue that CDs will be important for at least another decade.]

35. Libraries still aren’t businesses

I was reminded of that by a February 27, 2006 post at ACRLog, “iPods and pencils: It’s the user experience age and we’re not ready.” The post is worth reading and raises points worth thinking about, primarily in the context of a column by Andreas Pfeiffer. Pfeiffer argues that “features no longer matter” and offers 10 rules for “experience-based technology.” My first note would be that a columnist stating something doesn’t make it true or universally applicable. My second might be that the “library experience” might not be as important to libraries as filling users’ needs, particularly those users whose needs are difficult to fill.

“More features isn’t better”—I’d agree if you add “necessarily” before that last word (and maybe change “isn’t” to “aren’t”).

“Unused features are useless and diminish ease of use”—that’s a design issue, although the first portion is a tautology. (Well, yes, if a feature is never used, it is by definition useless.) As you’d expect, Pfeiffer uses MS Office as an example; I’ve found that to be a fundamentally flawed claim. “There are dozens and dozens of features you will never need or use, but then again there are ones that are handy to have—if you can find them.” The problem here is that the dozens of features you will never use may be the ones that I find essential, and vice-versa. That’s precisely what happened when I had a similar email discussion with someone who said there were only 10 important features in Word: Her list of ten included precisely none that I use more than once a year, and my list of 10 crucial features turned out to be entirely ones she regards as useless and annoying. Here, the blogger has a similar objection: “I’d rather have a feature and not need it—than need a feature and not have it.”

The killer is #10: “Do well what 80% of your users do all the time (and don’t worry about the other 20% who want to do more) and you create a good user experience.” What a recipe for an academic or public library: Ignore special needs. That may make for a profitable business, but it sounds like abandoning librarianship. (This may be unfair to the blogger, who goes on to note some of the inherent complexities in libraries—but doesn’t directly refute #10.)

36. Turning off the TV

I guess this is national turn-off-your-TV week, or something like that, and some people think this is a Great Thing. Go get fresh air, read a book, visit your library…

The local TV critic wrote a column this morning disparaging the “movement.” Oddly enough, I agree with his reasoning. Not because I’m a vidiot, but because I get tired of the blame-somebody-else habit. Your kids watch too much TV? Turn it off. Telling them “Oh, just don’t watch this week” makes it a stunt. Working out a “TV budget”–like a game-playing budget, a phone-time budget, etc.–is a different thing, probably good parenting.

You watch too much TV? Turn it off. Figure out why you watch too much TV. What are you avoiding? What would you actually do if you turned off the TV? What makes it better? If turning it off as a special stunt helps, great–but it misses the point.

Do we ever go for a week without watching TV? You betcha: Any time we’re on vacation. But then, we don’t sit glued in front of the tube every evening hoping something interesting will come on. We watch what we want to watch (and have no TiVo to encourage watching more), and don’t watch when we’re not interested. Right now (at this point in the season), that comes out to about four hours a week (not including DVDs); in the heart of the season, it was six or seven hours a week. Come summer, it will be down to almost nothing.

We also walk 0.5-1.5 miles (each way) to and from a restaurant every Saturday night. We make a point of taking a decent walk on Sundays. We read. We write. We converse. Somehow, having a very nice TV in the living room has never obliged us to turn it on when we come home or leave it on when we’re not watching something we’re actively interested in.

If you can’t stop watching, having a no-watch week won’t solve your problem. Heck, some people read way too many books for a balanced life, but I’ve never heard of a “No-Books Week.” [Excerpted from Walt at Random, April 27, 2005—before Audible started running “Don’t read” ads.]

37. Most predictable stories of 2005

What a great title (Ryan Singel, Wired News, downloaded December 29, 2005)—but “predictable” is always easier in retrospect. The video iPod: Perhaps most obvious because it’s Apple’s stylish entry into an 18-month-old field. Google Maps: Well, sure. Apple suing fans and Google blacklisting News.com—maybe, since both firms are on the secretive side, but the Google blacklist sure didn’t last long.

Yahoo! helping China jail a dissident? It had to be some big online company, I suppose. The Sony BMG rootkit scandal: I don’t think that was predictable, as Sony the electronics company (and co-developer of the CD) should have known better—but Sony the media company has the usual RIAA paranoia. The wrong corporate arm won.

Podcasting’s popularity: I’d love to see numbers on just how popular it actually is, but as an extension of blogging some takeup was obvious. “Government regulation chokes telephony innovation”—whew. The story here is the FCC requiring that voice-over-internet providers find ways to make 911 work. I suppose ignoring essential safety features is “innovation,” but count me out on this one. Finally, corporate data leaks—unfortunately, that was obvious, and came to light thanks to California’s disclosure law (which Congress hasn’t managed to overrule yet, but give them time).

38. Getting to know you

Editorials provide evidence of a magazine or site’s personality: The overall character and style of the content. You need to have a personality; otherwise, your users are ripe for the taking by another outlet with comparable content and a more intriguing presence. If you don’t believe that such an outlet could exist, either you have an extraordinarily narrow niche or, more probably, you’re kidding yourself…

As with personalities, the trick to online personality is to make it evident without slowing down the reader. I can skip over the editorial page and contributor’s page in a print magazine, but they’re there when I want to get a better sense of the magazine. The arrangement of the table of contents and of contents themselves gives more of a clue in print than (usually) online, particularly since tricky online contents tend to hide content or confuse the reader.

How do you do the things you do? Inquiring minds want to know, and discussing some of those issues helps bring us closer. That’s reflexivity: Talking about yourself, or in this case discussing some of the problems and possibilities of being a content site.

Many webloggers do it, all too frequently to excess (where the weblog becomes a blogblog or blog², a weblog about weblogging). That’s unfortunate, even more so when a good topical blog becomes a blogblog. But some level of reflexivity makes sense…

[Portions of “disContent” column from EContent 26:7, July 2003. Should more libraries feature the personalities that make them more than a big building of books, establish the personality of the library, and maybe show a little reflexivity?]

39. No, we’re NOT all tech junkies now

The story might be startling without that headline, “We’re all tech junkies now” (AP via CNN.com, December 21, 2005): millions of Americans are “showing early signs of addiction to the next wave of high-tech toys.” That “next wave” includes MP3 players, HDTV and DVRs. “Some people freely admit to being high-tech junkies.”

The bill for being “thoroughly plugged in to entertainment and communications” runs to more than $200 a month for one-third of those polled (extrapolated to “households in this country” based on a sample of 1,006). That’s a lot of money—but hey, one attorney quoted spends more than $500 a month and says “he has no choice.” Here’s one of those quotes you have to love: “TVs, cable or DirecTV, cellular phones, high-speed Internet. All of those things are pretty essential in today’s world.” Geez. My wife has a cell phone, we have broadband, we have cable TV—and the monthly bill for all that comes to about $395 less than the attorney’s bill. I’m guessing that loads of premium channels are also “pretty essential”—and, of course, this person has two homes, another essential in today’s world.

A psychologist specializing in “internet addiction” says, “Some people feel the products will improve the quality of their lives. But do we really need to be connected in every way, shape or form?”

Here’s another question: Since when did “millions” become “we’re all” in a nation of more than 300 million people? The poll found about 25% of respondents with portable MP3 players or iPods, about 40% with videogame consoles. Those aren’t majorities; they surely aren’t “all.”

My opinion as to whether use of high-tech gadgets is an addiction? I’m no psychologist, and surely those experts must be as right about everything as other experts. Right? Right…

40. The joys of copyfitting [1]

Some notes about the process of bringing Cites & Insights to fruition. Not the writing part. That’s too tedious and strange to discuss…

Here’s what happens once I conclude that there’s more than enough copy, in roughly the right-size chunks. I start by doing an editorial pass on each article. Editing your own stuff is always chancy, and I don’t claim to do an adequate job. That typically reduces the word count by about 5%. Then, some or all of the following:

•    Big cuts: If I know the issue is way too long, some sections that aren’t too timely get held over to the next issue.

•    Assembling: I choose an order for the remaining essays, open a new instance of the C&I Word template (which includes the banner and issue area, needing slight editing each time), and insert the files in order (all of them also built with the same template, but only for style handling). Then I see how big it is—e.g., a raw issue might run 27 pages.

•    Copyfitting 1: I suspect most of you don’t notice that there are very few cases in C&I where the last line of a paragraph consists of a single word, and no cases where one line of a paragraph is either an orphan or a widow (stranded at the bottom or top of a page). Word handles orphans and widows automatically, if you tell it to. Avoiding stub lines takes some doing. The copyfitting process also involves manipulating long URLs so they don’t cause ugly justification problems by breaking to a new line with very little in the previous line, and modifying some headings and subheadings so they’re a little more compact (by changing wording or reducing type size). Yes, I’m an old-media type. This process might bring a 27-page issue down to 25.5 pages.

•    Copyfitting 2: I insist on issues that print nicely when duplexed—that is, have an even number of pages and come close to filling each page. I prefer issues closer to 20 pages. So I go through eliminating words, sentences, paragraphs—most of it my own commentary that I could label as self-indulgent or peripheral to the discussion at hand—and doing special copyfitting when there are significant gaps at the bottoms of pages. That process continues until, shazam, the page count suddenly drops to what I want—or until it refuses to, and I have to do something more drastic. In the example here, I’ll probably wind up with 24 pages—but I might push harder and go for 22.

•    Final steps: Clicking the make-PDF icon. Checking the PDF for reasonable quality and bookmarks. Saving the Word document. Opening it up, stripping the banner and issue line, switching to one column, replacing the template with my “web” template, inserting the web header, stripping extraneous styles from the template. Saving that “webtemp” document as web/filtered; opening it repeatedly, stripping out all but one story, assigning appropriate properties, and saving as individual web/filtered pieces. Adding to the TOC document, copying the new issue table to the Index document, making sure to change the “Current Issue” link in the navigation line, modifying the “old volumes” summary document. Uploading all the new and changed documents to citesandinsights.info, writing a plain-text notice on Topica, writing an HTML new-issue notice on the C&I Updates blog, then copying-and-pasting that notice in this blog and my LISNews journal. (The next day, I forward the Topica mailing that I receive to a handful of lists and people after stripping the Topica ad.)

•    Indexing: The final step and just about the only time I listen to music while working on my PC. Opening a special “ix2006” document, adding index elements for each page and story as seems appropriate (amateur indexing, but better than none), going back and making each element an index entry (there should be a macro for this, but I haven’t spent the time to do one); generating the volume-so-far index and printing it out for use during the year. My least favorite part of each issue, but I do like having an index.

That’s it. Then on to the next issue, after a day or three off and maybe some other writing. [Adapted from Walt at Random, April 29, 2005]

41. Of course rational consumers pay more for some products

I’ve kept a clipped article from Fast Company since November 2003, planning to write a Perspective or Way We Think essay based on it—and on stupid comments I’ve read from economists who seem to think that the only rational purchase decision is to buy the cheapest product that meets minimal requirements. In lieu of that Perspective…

The Fast Company article makes the point that “consumers will happily pay a premium for products they really love,” even if they’re price-sensitive in other areas. One student in New York lives out of town to save rent money and buys groceries on sale—but pays $350 for great pairs of shoes. A person buys the cheapest possible computer equipment and cameras—but a top-of-the-line vodka. A couple won’t buy food or cleaning products except on sale, but spent a small fortune remodeling their kitchen, including a $4,000 refrigerator.

The article calls these buying patterns “schizophrenic.” I call them rational—paying for what matters to you. An analyst gets it wrong, assuming that people pay more for “new luxuries” in general, as opposed to the highly individual patterns that really happen. For a marketer, “trading up” is about “stronger emotional response.” For some of us—I think most of us—it’s about respecting our own values.

So of course a fair number of people will pay $6 for smoked turkey breast with chipotle mayonnaise on Asiago-cheese focaccia instead of $3 for a fast-food burger. Of course some of us drive modestly-priced cars, buy sundries at Target, and spend fairly little on clothing—while at the same time taking cruises on luxury lines that cost twice as much as mass-market lines, because they represent better value to those of us who make those choices.

The problem for marketers is that most of us don’t “trade up” in all categories unless we really do have money to burn, and probably not even then. We have to be convinced that there’s a real difference that matters to us. That’s rational economics, using the money you earn to enrich the life you want to lead.

42. Libraries need not fear obsolescence [1989]

Libraries need not fear obsolescence. No, libraries won’t offer the only source of information in the future—any more than they do now. But libraries will continue to offer the wide range of resources that no individual can or should acquire on his or her own. I don’t read the daily newspaper at the library—but I certainly don’t buy every book or magazine that I might ever need to refer to.

Libraries have never served well as the most current form of information. Libraries serve as the central focus for broader information, for the resources we need from time to time, for the cultural history of the nation and world. They also serve as the central source for enlightenment and entertainment, but certainly not the only source. Libraries, and library organizations, will play a leading role in making sense of new channels of information; we will be central to the process even if we don’t always provide the resources within the library. [Speech at California Library Association, November 12, 1989, Oakland]

43. One e-paper company bites the dust

Gyricon LLC was supposed to be a big player in e-paper; it was even marketing SyncroSign message boards made with its SmartPaper technology. It was a spinoff of Xerox PARC—and Gyricon officials predicted annual revenues of $100 million. Apparently not: As of December 31, 2005, Gyricon was terminated. Xerox “will refocus its efforts in electronic paper technology through licensing of the underlying intellectual property.”

Gyricon had been at this for a while. When I saw photos showing how wonderful it was, I noticed that the awful resolution didn’t seem to improve from one year to the next. Maybe the technology was inferior. Maybe there’s just not a huge market waiting for this technology. Or, a cynic might say, maybe it’s Xerox PARC failing to capitalize on its research once more—or waiting for others to do it, then claiming patent rights. We shall see.

44. Fleeing the internet: Time to call it quits?

Millions of Americans have stopped using the Internet. That’s what a recent Pew study shows—42% non-users, 17% former Internet users who’ve dropped out. They’re fleeing the web, avoiding online, dropping dialup and banning broadband. The Internet? That’s so 20th century!

Is this the beginning of the end? Should econtent providers wise up and switch to print? Or is the study wrong? I’ll argue that the answers are no, maybe, and no—but mostly that the Pew study may be a useful reality check for over-ambitious ebusiness plans….

Have you looked at daily newspaper readership among adults in the U.S.? It’s right around 58%—just about the same as Internet usage. I’m pleasantly surprised it’s that high. Much as I love reading and readers, I’d be surprised if more than 58% of adult Americans buy at least one book a year. The figure for reasonably regular use of public libraries among adults is a little higher (roughly two-thirds), but then public libraries are free at point of use.

What percentage of American adults buy at least one CD a year or go to one or more concerts? More than 58%? What percentage of households has sound systems that are anything more than boomboxes? I’ll bet the penetration of stereo systems is lower than the penetration of personal computers. Why should the Internet be different?

If your business plan counted on getting a small piece of an ever-growing pie, with more and more people spending more and more time online, you may be in trouble. Maybe U.S. Internet use will pick up again; maybe it won’t. But saturation is predictable, whether at 58%, 65%, or some other figure—probably a figure well below 100%....

If you’re offering something that’s better than what’s available offline, and you’re aiming at people likely to stick with the Internet, you shouldn’t need to worry. But “better than offline” doesn’t equate to “because it’s online.” That fantasy is dead. [Portions of “disContent” column from EContent 26:11, November 2003. It now appears that saturation may be 65%.]

45. Only one winner per category?

A December 22, 2005 post at Science library pad (scilib.typepad.com/science_library_pad/) is entitled “Metcalfe’s law and library site communities.” Metcalfe’s law, wildly overstated in the original version, says the value of a network equals approximately the square of the number of users of the system. Even the more modest underlying idea—the idea that adding more nodes to a network automatically increases its value—involves a lot of questionable assumptions.
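
For reference, the usual back-of-the-envelope derivation (my gloss, not the blogger’s) counts the possible pairwise connections among n users:

\[
V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^{2}}{2}
\]

The questionable assumptions live in that premise: every possible connection is counted, and every one is treated as equally valuable.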

Never mind. I’m taking issue with the blog entry’s application of Metcalfe’s law to social software systems. Because Amazon gets loads of user book reviews, the blogger seems to say, no other book review system will be successful. Del.icio.us has captured the “social bookmarking” space, so no other application can gain headway. “There are only so many genres of applications, and once one choice dominates within a genre, it is very hard for any (even much better) choice to gain traction. Microsoft Word / PowerPoint / Excel etc., anyone?”

Which, oddly enough, is a perfect set of counterexamples (PowerPoint excepted). By the “only one winner” concept, we’d all still be using Lotus 1-2-3 for spreadsheets and WordStar for word processing. Each once dominated its field.

I’ll assert the opposite: Most categories in life have more than one ongoing success story. That’s true for blog tracking; it’s true for automobiles; it’s true for web search engines; it’s true for most areas.

Wouldn’t life be boring if there could be only one success per category?

46. Reasonable people

So I decide to give Business 2.0 another try. And get to the “Wheels” section of the April 2005 issue, with a review of the Mercedes-Benz CL65 AMG. And these sentences:

The CL65 AMG is, in fact, everyone’s kind of car. There is not a single aspect to the vehicle that a reasonable person could find fault with.

Bwahahah…. Let’s see now:

•    Fuel economy: 12 mpg city, 19 mpg highway. I find a lot of fault with that, since the car I drive (not a hybrid) gets better than twice that mileage in both cases. Maybe the writer’s world will never run out of fossil fuel; must be nice to live there.

•    $186,520: Almost precisely 10 times what we paid two months ago for my wife’s brand-new top-of-the-line Civic EX. Enough difference to pay for 16 high-end cruises or a vacation home in many parts of the country.

•    …for a two-door coupe that weighs 4,654 pounds and is 196.6 inches long: A big, heavy beast of a car with wide doors combined with rough access to the rear seat. The review doesn’t comment on turning radius, but I have my suspicions…

•    The speedometer goes to 220, but the top speed is electronically limited to 155 mph. The point being, I presume, that this overpowered beast (604 hp) could go at an even more absurd rate of speed if it weren’t “locked down” to something over twice the top speed limit in the U.S.

Not mentioned in the review, of course: It’s a Mercedes-Benz, which means you’ll also spend a fortune keeping it serviced.

I guess I’m just unreasonable. I’m not going to shame anyone else for buying this car–heck, it gets better gas mileage than a Hummer, at least–but nothing to find fault with? In your dreams. [Walt at Random, May 2, 2005]

47. OR as a default operator?

Lislemck had an interesting post at Biblioblather on December 15, 2005, “AND not OR strikes again.”

Specifically, the Millennium client—not the OPAC—has OR as a default operator for keyword searches. So if you search for “medieval warfare” (the example used), you get loads of stuff about medieval life in general and warfare in general, probably swamping stuff about warfare in medieval times. A couple of web search engines used OR as a default operator years ago; I don’t believe that any do anymore, since with large indexes it’s a recipe for disaster.

One has to wonder: why would any contemporary search system default to an operator that yields lots of results, most of them irrelevant—and that, worse still, makes the results even worse as you add more words?
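
Toy numbers make the point. Here’s a minimal sketch with an invented mini-index:

```python
# Invented document IDs for three index terms
index = {
    "medieval": {1, 2, 3, 5, 8, 9},
    "warfare":  {2, 4, 6, 8, 10},
    "life":     {1, 3, 5, 7, 9},
}

def search(terms, op="AND"):
    result = index[terms[0]].copy()
    for term in terms[1:]:
        result = result | index[term] if op == "OR" else result & index[term]
    return result

print(search(["medieval", "warfare"], "OR"))          # 9 of 10 docs, most about one topic only
print(search(["medieval", "warfare"], "AND"))         # {2, 8}: both words present
print(search(["medieval", "warfare", "life"], "OR"))  # all 10: adding a word broadens the set
```

Scale the same arithmetic up to a catalog-sized index and the few records containing all the terms are swamped, which is precisely the behavior the post complains about.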

Small irony. In the real world, of course, “And not or” means almost precisely the opposite of what it does in Boolean operations—that is, the old AND the new, not the old OR the new. The mantra means broadening choices in the real world—and narrowing them in searching. Isn’t English wonderful?

48. Losing it: A contrarian’s thoughts on digital content retention

Not only can’t we retain everything, maybe we shouldn’t try. What would future researchers do with billions of petabytes of everyday digital content?

I believe forgetting is a critical part of a healthy life. Despite the proliferation of reality TV, I don’t think I’m alone in that I don’t want to record everything I’ve seen or done, online or (particularly) offline, and I find the idea more than a little creepy. I don’t want a camera as part of my clothing, capturing whatever I see so that I can print out anything “interesting” and, presumably, data miners can find what they consider interesting.

The mind has its own ways of mining previous partly-remembered experiences, and part of that process is forgetting as much as 99% of everything we encounter, either because it’s irrelevant or because we’d just as soon forget it. Maybe total recall is a great idea for some, but count me out. And I doubt that total recall is a good idea for most people or the world in general.

I wonder whether society doesn’t have the same need to forget almost everything. It’s certainly true that deep social history requires research into the minutiae of earlier times, more than the official history gathered from mainstream news and major players. But those new-breed Civil War historians I discussed last September didn’t have access to every conversation or every letter from 1861 through 1865. I’ll bet a lot more than 99% of all daily accounts disappeared—indeed, that 99% of what might have been in letters was never written down.

Maybe we need to lose almost all of the digital content produced during any given year. Without that forgetting, we may wind up drowning in so much data that there’s no time left for thought, wisdom and creativity. [Portions of “disContent” from EContent 27:3, March 2004.]

49. Questia’s still around

Troy Williams had grand visions for his Internet startup seven years ago.

Despite struggling through three rounds of layoffs and millions of dollars of debt, he still does.

That’s the start of a December 28, 2005 Houston Chronicle story, saying the firm is once again “rebuilding.” Questia launched in January 2001 with a claim it would grow to “more than 250,000 titles by 2003.” By May 2001 it had laid off half its employees—and the 50,000 books promised by February 2001 turned into 35,000 in May. In August, it was claiming a big TV ad blitz and calling itself “the Online Library”—with an “expansion” to 60,000 items, 20,000 of which were journal articles. That November, another 50% cut brought the 280-person company down to 68—and some of us wondered (as one librarian wrote on Web4Lib) why on earth anyone would pay $20 per month to get what they could get for free from libraries. In January 2002, the Houston Chronicle said Questia was down to 28 workers—and provided no severance to the 40 laid off.

Questia did run ads suggesting that college libraries were irrelevant—back when it was marketing to college students. In late 2003, what was left of the firm “expanded” its target to high school students—with a collection of 45,000 books and 300,000 articles and Troy Williams’ continuing hubris: “Very soon, it will be unthinkable for a student to research and write a paper without using the Questia service.” There were 32 employees at that point.

Now? Williams says “we were just ahead of our time.” Questia’s back up to 70 employees and claims “65,000 active subscribers”—but that includes 300 high school subscriptions and could mean almost anything. Oh, and now Williams blames the terrorists for making it difficult to raise money. He’s changed one tune: He “doesn’t want to compete with brick-and-mortar libraries” and says “We’re not trying to undermine libraries.” Somehow, that doesn’t seem like a danger.

50. Recommendations for public access computer configuration

That’s not the full title of this very good two-page guide by Lori Bowen Ayre, available at www.galecia.com/weblog/mt/archives/Recommendations_for_multipurpose_PAC_configuration.pdf. The full title: Recommendations for multipurpose public access computer configuration using Windows. You can download the guide and print as many copies as you like, as long as you’re not selling them: It has a Creative Commons BY-NC-ND license (I won’t explain the ND part).

Ayre notes that a library public-access computer may be the only computer available to some people, so it should have a familiar interface and (ideally) offer decent productivity applications. It should also allow users to save files to USB devices (“or floppy,” but that’s becoming hopeless) and, for that matter, to the computer during a work session. “Library use of Internet filters should be transparent and manageable”—and patrons should be informed when websites have been blocked.

The second page offers a table of recommended software and some excellent configuration recommendations, some of which you may not think of—e.g., “file extensions should be set to display.” Ayre provides a lot of useful advice in a single two-sided sheet; excellent work.

51. Re-evaluating web evaluation

Another piece from the January/February 2006 Online, this one Greg R. Notess’ “on the net” column. It’s a good three-page discussion of a point I’ve seen raised elsewhere, and better than most such discussions. The point: Librarians and academics should not dismiss web resources as valueless—but they must push for students to evaluate web resources. And, for that matter, to evaluate print resources, which aren’t inherently any more trustworthy than web resources (especially in an age of cheap-and-easy high-quality print publishing).

Yes, libraries have resources (both print and electronic) that aren’t available on the open web; yes, those resources need to be highlighted. But an overemphasis on those resources “could backfire”:

If users find better information online (however they may define “better”), then information professionals lose credibility when we insist that library and print resources are always better.

Notess notes that he loves print resources, “but I would never trust everything printed on paper any more than I trust everything online.” What students need—better, what citizens need—are effective ways to evaluate all resources. Notess offers some methods; it’s not an exhaustive list, but it’s a valuable addition to a vital discussion.

52. The death of print, Xanadu and other nightmares [1992]

First, the new is not always better than the old…

Second, you should be on your guard when something is described as inevitable—particularly if the inevitable development seems undesirable or questionable to you. Almost nothing in the affairs of humanity is inevitable…

Third, it’s as pointless and harmful to treat all libraries identically as it is to treat all library users identically…

Fourth, data is not the same as information; information is not the same as knowledge; knowledge is not the same as understanding; and understanding is not the same as wisdom. Beyond that, libraries, books, magazines and daily newspapers play many roles beyond simply providing data, information, or knowledge.

Fifth, predictions tend to be self-fulfilling if enough people make them or accept them. To a great extent, we get the futures that we work for; to an even greater extent, we get the futures that we settle for.

Sixth and last, the problem with paradigm shifts…is that they seem to assume going from one stable situation to another (but very different) stable situation—after which we can stop thinking about it. The reality is change: sometimes faster, sometimes slower, always complex and somewhat unpredictable. Libraries and librarians should constantly be redefining themselves in small ways; that’s very different than some wholesale redefinition of who we are and what we do. [Arizona State Library Association, October 15, 1992, Phoenix.]

53. The terminology game

I’ve had a Web4Lib post from John Kupersmith sitting around since September 2004. It was meant to serve as the basis for a Perspective, but that never happened. The theme was choosing the “best” terminology for key concepts in online catalogs. For example: What do you call a search that retrieves titles based on words within titles (and nowhere else)? Title? Title word? Title keyword? Keyword(s) in title? Or just Keyword?

My own take: If there’s a keyword index (retrieving items via words appearing in many different fields), that should be the only place “keyword” is used. I would say that: It’s what we do in Eureka, after long discussion and user feedback. But what does “Title” mean? For us, it triggers a browse based on the portion of the title keyed; for some systems, it’s a title word search or an auto-truncated title (phrase) search.

I’m not sure there are “right” answers. I was a little taken aback by one objection to using “Command line” for a search option that accepts old-fashioned, well, command-line searching: It’s meaningless to users unfamiliar with that type of system. Exactly—and such users should not be doing command-line searching. For catalogs that must serve experts as well as novices, it’s hard to justify taking away power search capabilities because they don’t make sense to novices, and those who can use command-line searching effectively are likely to understand the term.

But those are my comments. John Kupersmith maintains a first-rate web resource, Library terms that users understand (www.jkup.net/terms.html). He’s gathered loads of evidence (usability tests and the like) and offers it along with suggestions for doing your own testing. Maybe the same answers don’t and shouldn’t apply to every system—and I’ll maintain that “the Google approach” is not appropriate for all users and uses of online catalogs.

54. Who do you trust?

We’re learning to distrust so much associated with the internet, and I’m not just talking about hoaxes, spoofs and error-laden content—none of those is unique to digital content. I mean spam, scams, viruses, worms, and most of all phishing and spyware, activities that use the nature of the internet to betray our trust, invade our privacy and drain our bank accounts.

Trust (or a loss of it) impacts a variety of industries—not just econtent sites. Knowledgeable users, suppliers, and partners operate at a continuous level of paranoia and distrust; those who haven’t learned to distrust will before long….

Visitors will ask, “Who am I dealing with at this site?”… If a visitor is looking for econtent on a topic, they’ll think, “What reason do I have to believe that this site is a trustworthy resource?”

If the site in question is purely econtent with no traditional arm, the name alone will have little meaning. “Bestcamerainfo.com” claims that it’s a trustworthy source for information on cameras, but provides no credentials to back that claim…

The fastest way for a user to establish site trust is by checking with traditional media and reliable links. If I read in Consumer Reports that www.choicetrust.com is a trustworthy place to order a Comprehensive Loss Underwriting Exchange (CLUE) report on my insurability, I’ll assume I can use my credit card at www.choicetrust.com without negative consequences. On the other hand, if I get email that purports to be from a consumer magazine, or find a site that looks a lot like Consumer Reports but doesn’t quite seem right, telling me to “Go to cheaperclue.com for a cheaper CLUE report,” I’m unlikely to trust that recommendation…

Gaining trust becomes more difficult as increasingly clever Internet fraud makes us all less trusting. Losing trust is easy….

It’s one thing to lose trust. It’s another to betray it. If you deposit spyware on my computer when I visit your site, you’ve betrayed my trust. If you gather information on me and provide it to third parties without a clear opt-in provision, you’ve betrayed my trust. Once lost, trust is hard to regain. Once betrayed, trust may never return. [Portions of “disContent” from EContent 27:12, December 2004.]

55. That’s not the song I remember

Speaking of music, here’s a shocker: When you buy a TV series on DVD, the music on the DVDs may not be what you remember from the broadcasts. It could be your memory—but it could be another ridiculous twist in music protection.

Turns out that, while music licensing for films covers all subsequent releases, most licensing for TV shows (at least until recently) just covered the show and any reruns and syndication. DVD? That’s a new medium—and the publishers want a new cut. And, by the way, there are no standard fees. According to a Hollywood Reporter story (November 15, 2005), fees for song usage range from $1,500 to $15,000, “with superstar tracks reaching up to $20,000-$25,000.” For one song. That’s for broadcast; the home video fee “is equal to or greater than those quoted.”

The result? Some shows take a lot longer to come out. Some cost more. WKRP in Cincinnati may never make it to DVD: licensing may cost too much. Quantum Leap substituted music on Season 2. So have a number of other shows (including Northern Exposure).

56. Print in 2020: Musing about projections

The gist of a British Library press release: a projection that 10% of UK research monographs will be print-only in 2020, with the remainder being e-only or print and electronic.

Limited as it is to research monographs, the projection doesn’t surprise me at all. I thought a little about the more general case (since a couple of people misread it as “all publishing”), looking at U.S./worldwide publishing and making up plausible numbers based on what I’ve seen of publishing statistics.

Note that these are all hypothetical numbers and are not claimed to be projections!

Consider the following hypotheticals:

•    Trade books (what most of us usually buy at bookstores and borrow at libraries): Let’s assume 100,000 titles will be published this year, the average trade book is 150 pages long, the number of titles will grow at 1% a year, and that in 2020 a full 80% of current trade books will be print only. In other words, in public library/bookstore terms, “most books will still be print only.” Not a prediction!

•    Other books (reference, research monographs, textbooks, etc.): Say 50,000 titles in 2005, averaging 200 pages, growing at 2% a year, and that 20% will be print only in 2020. I think these are all reasonable projections.

•    Refereed scholarly journals: Say 30,000 such journals in 2005, averaging 1,000 pages per journal per year, growing at 5% annually—and that only 10% (mostly humanities) will be print-only in 2020. Heck, I’d go with a 5% projection.

•    Magazines and other periodicals: Say 200,000 such periodicals in 2005, again averaging 1,000 pages per periodical per year, growing at 2% annually. Since full-text aggregators are now making many popular magazines available in e-form, although the bulk of circulation continues to be print (and, I believe, will still be print), let’s say 25% will be print-only in 2020.

Add those all up (and while all of the growth factors and current page and title numbers are made up, they’re all plausible based on what I understand about the publishing industry)–and you get this situation in 2020: 25% of all publishing (where “all” excludes newspapers and the like, an unfortunate exclusion) would be print-only–but most “regular” books would be print-only.

Change the assumptions and see what happens:

•    Consider words rather than pages. An educated estimate: the average trade-book page is around 300 words, the average specialty-book page around 400 words, the average refereed journal page around 700 words, the average magazine page around 600 words.

•    Assume different growth rates: 6% annual growth for refereed “journal equivalents,” many of them overlays on article databases, and only 1% for other periodicals.

•    Assume 95% of refereed journals are available in some electronic form by 2020 (probably a good assumption) and 80% of other magazines have most or all of their content in some e-form as well as print.
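
Here’s a minimal sketch of the arithmetic, using only the made-up numbers above. One assumption I’m adding: the totals are weighted by pages (by words in the second scenario), which is the only reading under which the numbers work out.

```python
YEARS = 15  # 2005 to 2020

def print_only_share(categories):
    """categories: (titles, pages per title per year, words-per-page
    weight, annual growth, print-only share in 2020)."""
    total = printed = 0.0
    for titles, pages, words, growth, share in categories.values():
        volume = titles * pages * words * (1 + growth) ** YEARS
        total += volume
        printed += volume * share
    return printed / total

pages_scenario = {
    "trade books":       (100_000, 150,  1, 0.01, 0.80),
    "other books":       (50_000,  200,  1, 0.02, 0.20),
    "refereed journals": (30_000,  1000, 1, 0.05, 0.10),
    "other periodicals": (200_000, 1000, 1, 0.02, 0.25),
}
words_scenario = {
    "trade books":       (100_000, 150,  300, 0.01, 0.80),
    "other books":       (50_000,  200,  400, 0.02, 0.20),
    "refereed journals": (30_000,  1000, 700, 0.06, 0.05),
    "other periodicals": (200_000, 1000, 600, 0.01, 0.20),
}

print(f"{print_only_share(pages_scenario):.0%}")  # 25%
print(f"{print_only_share(words_scenario):.0%}")  # 18%
```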

That yields an overall print-only percentage of 18%, while still leaving most copies of most magazines and most trade books as print publications. I don’t find the estimate at all unlikely, and I don’t think it would in any way signal “the death of print.” [Adapted from Walt at random, July 1, 2005]

57. Trends & issues in library & information science

Two megatrends: A concern for the impact of technology, and a continued focus on the user. What a great idea: Libraries focusing on the user!

Specific trends: Increasing demand for and provision of end-user access to online information services; increased use of networks and telecommunications; growth in computer-based information revolves around [one particular] technology; increasing focus on collection management to better meet the general goals of institutions as well as the specific needs of users (what I’ve called “the long collection”); increasing concern with reaching out to new user groups; and a focus on the promotion of literacy.

That sounds great—and this ERIC digest says that’s what the professional literature shows to be happening. Well, I did change one particular element to “[one particular]” to make this summary consistent with today’s hot trends.

That’s because “[one particular]” is CD-ROM—and the ERIC digest dates from 1990, based on examining professional literature from October 1, 1988 to September 30, 1990.

Make of that what you will.

58. The aggravations of aggregation

The problems with feeds, other than setting up and maintaining them, are fourfold:

First, some sites have reported that aggressive aggregators poll feeds so often that they overload the servers, though this problem is improving….

Second, you won’t always know who’s signed up or how many are getting your feeds—although Bloglines, for one, does show the number of subscriptions for a feed and lists users who choose to make their subscriptions public.

Third, fed items may not look like the originals. Some markup makes it through; some doesn’t. Context may be limited to a header showing the name of your site (as a link back to the site) and a sentence or two about it.

Fourth, people won’t see the ads that support your site unless your items intrigue them enough to click back to the site…

Is it worth it? Probably, at least for many of you… My guess is you’ll be at a competitive disadvantage if you don’t offer some feeds—and you gain some tech-savvy people who think enough of you to want your content… When you do add feeds, sign up for at least one aggregator and monitor your own feeds—you may be surprised at how they come through. Don’t be too cute with headlines and summaries: If people feel tricked into clicking through, they’ll unsubscribe you. Don’t be surprised if feeds add to your indirect usage rather than substituting for email/list users (that’s what I’ve found so far). [Portions of “disContent” from EContent 28:1/2, January/February 2005]
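
On the first problem, the standard remedy deserves a sketch: conditional GET, so polling an unchanged feed costs the server almost nothing. This is a generic illustration with a placeholder URL and interval, not any particular aggregator’s code.

```python
import time
import urllib.error
import urllib.request

FEED_URL = "http://example.org/feed.xml"  # placeholder feed address
etag = last_modified = None

def poll_once():
    """Fetch the feed only if it has changed since the last poll."""
    global etag, last_modified
    req = urllib.request.Request(FEED_URL)
    if etag:
        req.add_header("If-None-Match", etag)
    if last_modified:
        req.add_header("If-Modified-Since", last_modified)
    try:
        with urllib.request.urlopen(req) as resp:
            etag = resp.headers.get("ETag")
            last_modified = resp.headers.get("Last-Modified")
            return resp.read()      # feed changed: hand it to the parser
    except urllib.error.HTTPError as err:
        if err.code == 304:
            return None             # unchanged: a few header bytes, no body
        raise

while True:
    poll_once()
    time.sleep(30 * 60)  # every 30 minutes is polite; every 30 seconds is not
```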

59. Vaporware of 2005

Leander Kahney offers Wired News’ (and readers’) picks for the most vaporous products of 2005—“tech products that were promised last year but never delivered”—in a February 6, 2006 story. I’ll skip five gaming-related awards (Team Fortress 2, Legend of Zelda: Twilight Princess, StarCraft Ghost, the Phantom game console, and #1—Duke Nukem Forever, promised for six years now) in favor of the more general “honors.”

High-def TiVo and TiVoToGo for the Mac came in #10; tension between TiVo and big media probably explains the delay in high-definition TiVo. (As for the second: When you’re 2% of the market, you can only complain so loudly.)

I’ve never heard of the AlphaGrip “ergonomic” keyboard/trackball, still missing in action six years after announcement—but it’s so radical that you have to wonder just how wonderful it would be for typing.

High-def discs—either Blu-Ray or HD-DVD—are certainly delayed past original introduction plans, possibly because companies know that format competition will be disastrous. There’s an HD-DVD player on the market; too bad there aren’t really any discs.

Microsoft Vista and IE7? Vista has been promised as “late 2006” for quite a while—but IE7, although it’s out in beta, is a little late.

The Google award may be silly, but it’s how I feel: all those “beta” offerings don’t give you much confidence in stability or reliability.

An interesting list. If I had to guess, I’d guess Vista will only be a little late, there might yet be a last-minute Blu-Ray/HD-DVD merger (or HD-DVD might fold), and Google will keep “beta” services a lot longer than some might consider reasonable.

60. “Giving away” a trillion ebooks [1992]

Those of you who deal with Internet/BITNET can hardly have escaped mention of Project Gutenberg… I do want to say something about the English language…and particularly the phrase “given away.” This summer, a PG missive proudly announced that they had already “given away” 2.6 billion electronic texts, a step on their path to one trillion such texts in the next nine years. Wow…2.6 billion! That’s pretty impressive. To you or to me, that would presumably mean that there have been 2.6 billion occasions on which someone has actually made use of, or at least taken possession of, PG e-texts. Right? Don’t you usually assume that “given away” implies “to a willing recipient”—as opposed to “thrown away” or “littered” or “strewn across the landscape”?

…What this claim actually means is quite simple: Project Gutenberg had posted 26 e-texts at that point. PG projects that, by the year 2001, some hundred million people, or devices, or some such thing will have access to the Internet/BITNET and whatever grows out of it. Thus, presto chango, multiply 26 times a hundred million, and you get 2.6 billion.

What? You mean that 100 million people aren’t currently linked to BITNET/Internet—perhaps a tenth that many, at best? And it might just be that the overwhelming majority of those users haven’t the slightest interest in downloading e-text versions of widely-available books, books they can buy for $4 or $5 in easy-to-read paperback editions? Well, that’s beside the point; in the true virtual world of e-text distribution, it’s still 2.6 billion strong. [Arizona State Library Association, October 15, 1992, Phoenix.]

61. Walt’s fearless technology predictions for 2006

Will appear together with my comprehensive roundup of 2005 predictions and how they worked out, my complete updating of the ebook market, and my list of ten library-related blogs that people shouldn’t waste their time on.

The issue containing these gems will also include an authoritative rating of library schools based on interviews with deans, faculty and former students.

That issue will be a unique print issue, delivered by porcine air express to all paid subscribers at the special $10,000/year Insider’s Rate.

62. Balanced copyright and digital audiobooks

Alan Wexelblat posted “Lending? To whom?” on Copyfight, August 26, 2005. The portion of the post that I found troubling from a balanced-copyright position (and as a library supporter/person):

It’s Friday, so it must be stupid ideas time again. AP story (here on SiliconValley.com) to the effect that some libraries are “lending” audiobooks via download. The period of lending is controlled via DRM, which locks you out of the file if you run over your time.

This strikes me as a pinnacle of absurdity—lending libraries impose time limits on physical volumes because my possession of the book prevents another patron from reading it. Downloads… um, DON’T. All the patrons could download the same book and no one’s having a copy on their hard disk would impede another’s listening pleasure.

If you believe copyright is irrelevant in a digital world, then this argument makes perfectly good sense. Or, for that matter, if you believe that creators/distributors of digital resources don’t deserve compensation even remotely similar to that provided for creators/distributors of physical resources, then fine.

Otherwise, I don’t see the argument. This lending model is precisely that: A lending model. The library’s paid for the right to have one copy of the audio ebook in use at any one time. How is that different than lending a book?
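
In code, the lending model is almost embarrassingly simple. This is my own toy sketch of “one licensed copy in use at a time,” not any vendor’s actual implementation:

```python
from datetime import datetime, timedelta

class DigitalLending:
    """Circulates a licensed number of copies, like physical volumes."""

    def __init__(self, title, copies=1, loan_days=14):
        self.title = title
        self.copies = copies
        self.loan_days = loan_days
        self.loans = {}  # patron -> due date

    def _expire(self):
        # The DRM analogue: a loan lapses on its own at the due date
        now = datetime.now()
        self.loans = {p: due for p, due in self.loans.items() if due > now}

    def checkout(self, patron):
        self._expire()
        if len(self.loans) >= self.copies:
            return None  # every licensed copy is out; place a hold
        due = datetime.now() + timedelta(days=self.loan_days)
        self.loans[patron] = due
        return due

book = DigitalLending("Some Audiobook", copies=1)
print(book.checkout("patron A"))  # a due date: the one copy is in use
print(book.checkout("patron B"))  # None, until A's loan expires
```

Delete the copies check and the expiration step and you have the unlimited-download model instead; the whole disagreement lives in those few lines.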

I suppose libraries could only license audio ebooks on an “unlimited simultaneous circulation” basis. I’m guessing the costs would be just a trifle higher, at least if authors/publishers have anything to say about it, since that would push the inherent friction between library models and copyright/royalty models into extreme visibility.

Some authors hate the idea of library circulation because they believe–wrongly, in my opinion–that they’re being robbed of royalties for additional copies. (As opposed to gaining new readers and popularity…) In some countries, libraries are required to pay (directly or indirectly) a fee each time an item is circulated. That fee isn’t as high as a standard royalty payment, to be sure; it’s a compromise between American first-sale rights and an absolute hardline “every use must fully compensate the creator” policy.

Without such a fee, I don’t see how it’s fair to creators/distributors to argue that libraries should be able to distribute an unlimited number of copies of anything–be it audio ebook, regular ebook, or whatever–while paying for one such copy.

I’m not wild about any DRM–but Wexelblat’s post reminds me that there may be areas where DRM is essential because people don’t believe good faith and fair dealing are issues in the “digital world.” Unfortunately, that makes it easier for Big Media to argue for extreme DRM, where everything not expressly permitted is forbidden. [Walt at Random, August 26, 2005]

63. What could ALA do?

That’s the title of a long, thoughtful post by Meredith Farkas at Information wants to be free on March 2, 2006. As of March 10 (mid-afternoon), it’s gathered 45 comments for a total of more than 12,000 words—and it’s only part of a larger complex discussion on the merits of ALA membership and whether ALA needs change or transformation.

Here’s Farkas’ short list (just part or all of the topic sentences, not the whole paragraphs): Officially and publicly recognize that there is not currently a shortage of librarians in entry-level positions. Reach out more to new librarians in the profession. Start appreciating your speakers. Start using some of the social tools your patrons are using. Raise the accreditation standards for library schools.

Become more transparent and human. Start sending (people) literature and email for the things (they’re) actually members of. Get a Web site that doesn’t suck. Start having more free online educational opportunities for members. [A reference to a list at Leslie Burger’s blog.] A publication entitled “ALA for New Members” to be sent to every new member.

That’s the starting point, although the discussion started earlier. Since then, Michael Golrick has posted some good “ALA 101” posts about the nature of the organization, several of those commenting have done their own posts, it’s become clear that some people just don’t like ALA—and also fairly clear that a lot of us, no matter what generation, are confused.

I’m not going to say where I agree with Farkas and where I disagree. In some cases, I don’t have an opinion. I can’t honestly say that I continue to be 100% delighted with my ALA and LITA memberships, especially since there’s a March 3, 2006 post at Walt at Random that falsifies such a claim. (In this case, my distress is as much with “my division” as with ALA.)

I think the complex discussion is important. I believe different participants will come to different conclusions. I suspect it might have some impact on ALA, or maybe that’s wishful thinking. Farkas’ post is a good place to start exploring the topic.

64. Contemplation and content: Getting under their skins

Does your site get under people’s skins? Do people click away with something to think about—something in your econtent that deserves contemplation? That’s a tough question for most econtent sites. It might even be considered unfair. After all, people want headlines, brief explanations, quick takes, and surface analysis…If your aim is to be viewed as a content site, it probably helps to provide something worthy of lingering over—something that will generate a more lasting impression.

Memorable, thought-provoking, resonant: while not synonyms, these words describe content that sticks with people—content that gets under users’ skins. And consider the word contemplation. That may be the ultimate goal for the best content on your site: to show up in the contemplative thoughts of some readers. People still do contemplate, you know—or at least some of us left-coast aging hippies do. A whole group of “slow” movements around the world testify to a desire to get back in touch with ourselves, with our natural rhythms, with what’s under the surface. We’re trying to regain our humanity, at least once in a while, to move away from a frenetic state of content overload. [Portions of “disContent” from EContent 28:3, March 2005.]

65. Whatever happened to the Information Commons?

The American Library Association's Office for Information Technology Policy (OITP) is engaged in a number of initiatives to promote and support the information commons, info-commons.org was one such initiative.

Initally it was planned as an irregular online publication with articles exploring the information commons model of intellectual "property." In April 2003 the format was changed to a blog. The site closed in January 2006.

These archives preserve the first phase of the project. Neither the news items nor the commons-blog are archived here.

That’s all that seems to be left (along with three early issues of the online publication). Not only is commons-blog defunct, it’s disappeared: The links are dead. I don’t remember when the last post actually appeared on the blog. It was still active in April-June 2005, as evidenced by its position in Group 1 of “Investigating the biblioblogosphere,” with 19 posts (and no comments) during that period.

I wrote skeptically of the whole “information commons” concept in a September 2004 perspective on Nancy Kranich’s The information commons: A public policy report. I thought the initiative covered too much ground and lumped disparate elements under one umbrella. I was not impressed by a hearing during ALA, which struck me as being a quintessential tradition-ALA “in crowd and everyone else” event. Other Information Commons events reinforced that sense, as they were “Gatherings of the Gurus to the Mountaintop,” as I put it in a snarky letter to a colleague.

The quoted paragraphs (from ALA’s website) say that OITP has a number of initiatives to “promote and support” the information commons. Maybe so. Certainly, disappearing the archives of a blog strikes me as an interesting act for a library association—and for the whole commons concept.

66. This is going on your permanent record

Remember the threats back in school days? Some idle prank, some small indiscretion, and there you were in front of the vice principal or counselor, hearing that dread threat: your life would be forever scarred by that black mark on your permanent record.

Remember the feeling of liberation when you realized that there was, in fact, no permanent record? That your elementary school GPA and behavior demerits really didn’t matter much in high school, that no college would go back to anything prior to high school, and that very few employers would even ask for your college transcripts, much less the infamous permanent record? Don’t be too smug, and maybe feel a little sorry for the tech-savvy kids growing up these days. They do have a permanent record of sorts, and so do you. It’s called the Internet…

Some variation of Murphy’s Law almost guarantees that the rant that was on your site for one day (before cooler heads prevailed) is stored somewhere. It’s a different law from the one that ensures that some day you’ll put a Word document on the open web with all its change history and tracking intact, including snarky internal comments that you knew would be removed in the final draft…

The other side of permanence is the informal semi-private actions you took years ago that have now become easier to find and much more public. Usenet postings? Google Groups makes them easy to discover. Other lists? If you think your juvenile comments from 1985 have long since disappeared, you’re probably wrong…

From another perspective, it can be very satisfying when someone says, “I never said that!”—and you shoot them a link to the exact place where they did “say that.” “I didn’t mean it that way” is still a universal (if weak) defense, but at least the actual words are harder to hide and easier to locate. [Portions of “disContent” from EContent 28:7/8, July/August 2005.]
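
An aside on that Word-document law: a minimal sketch, assuming the XML-based .docx format (a zip archive of XML parts), where tracked insertions and deletions appear as <w:ins> and <w:del> elements. “report.docx” is a hypothetical filename; this is just one quick way to check before something goes on the open web.

import zipfile

# A .docx file is a zip of XML parts. Tracked insertions and deletions
# appear as <w:ins> and <w:del> elements in word/document.xml;
# comments live in a separate word/comments.xml part.
with zipfile.ZipFile("report.docx") as doc:   # hypothetical filename
    body = doc.read("word/document.xml").decode("utf-8")
    for marker in ("<w:ins", "<w:del"):
        if marker in body:
            print(f"Found {marker}>: scrub tracked changes before posting.")
    if "word/comments.xml" in doc.namelist():
        print("Found a comments part: internal comments may still be attached.")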

67. Whatever happened to the Information Commons, redux

When one blog closes, another wiki opens?

Possibly. Here’s what it says at www.infocommons.ca/wiki/:

The Information Commons Wiki (IC Wiki) is developed and maintained by the Information Commons Interest Group (ICIG) of the Canadian Library Association in order to foster a better understanding of the issues that affect the Information Commons. All are invited to share ideas, experiences and all relevant documentation for the Information Commons. As this wiki is very young, please feel free to add content to a page and create new ones.

How does this fledgling wiki define “information commons”? “The phrase Information Commons refers to our shared knowledge-base and the processes that facilitate or hinder its use.” ICIG is cosponsor of a CLA preconference on copyright in libraries.

68. Abandoning “library”?

The latest Informed Librarian has a “Guest Forum” contribution “Reading the Tea Leaves” by Chris Olson. Olson looks at the OCLC Perceptions report and finds things there that somehow eluded me. “People say that they use the library less, that they read less…”

Odd. The report I read shows that 69% of U.S. respondents had either increased their use of libraries or stayed about the same over the past few years, and that 73% expect to either use libraries more in the future or use them the same amount. (The figures for Kids These Days, the ones who’ve abandoned libraries and print: 74% and 88% respectively.)

“That they read less”? Maybe I didn’t read the report carefully enough, but I see nothing in the report that says people are reading less. Never mind; my reading skills may be impaired. Olson also accurately reports that people equate libraries with books. And that most people feel that they can find information on their own.

Olson later has one of those sentences that tend to stop me in my tracks: “Libraries are no longer the sole keepers of information or providers of access.” That’s like saying that the U.S. is no longer the only democracy or capitalist country in the world. “No longer” implies that libraries ever had that role [repetitive rant deleted here].

So what’s Olson’s conclusion? “Anyone who can change their brand name or drop the word ‘library’ from it, should consider doing so if they want to be perceived as offering something other than books.” Oh, and they should make sure that branding stays away from any association with libraries or books…

Olson doesn’t say “Any special library or corporate library.” Olson says “Anyone.”

Hmm. 80% of survey respondents view libraries favorably. As libraries. Even as collections of books.

Now, if you really believe that your library is an “information service,” then maybe Olson’s advice makes sense. For many special/corporate libraries, that’s a reasonably accurate definition. For, oh, 99% of public libraries and, I would argue, most academic libraries as well, “information service” is a tragically misguided term as a primary descriptor.

Chris Olson’s marketing firm “has transformed libraries into uniquely branded information services.” If you’re in a public library and ready to throw away an 80% approval rating in favor of pushing your role as doing something that most people explicitly say they’re perfectly capable of doing themselves…well, I trust you have another career in mind. [Walt at Random, February 2, 2006.]

69. Whatever happened to the Semantic Web?

I heard about the Semantic Web around the turn of the century. I even met Tim Berners-Lee and commented that I didn’t think it would work (syntax is easy; semantics are hard; people are lazy). I’ve cited a few articles on the issue—none in the last year or so.

Here’s another (thanks to Lorcan Dempsey): “Taking a stand on the Semantic Web” by Catherine C. Marshall (www.csdl.tamu.edu/~marshall/mc-semantic-web.html). She was on a World Wide Web Conference panel with the question “Will the Semantic Web scale?” and took the negative position. I love her example of the response: “Some people didn’t see that the Web was going to take off. This proves that the Semantic Web will take off despite your criticism, which is so lacking in insight that I can only snort derisively.” Hey, Walt Crawford denying the inevitable rapid success of ebooks over print books: I’ve been there.

Marshall likens the Semantic Web to the Flowbee (you know, the thing that turns your vacuum cleaner into a haircutting system)—and then makes another analogy: to MARC. She notes that MARC works pretty well—because it includes an infrastructure for training users (catalogers and trained assistants), a set of authoritative agreements (MARC21 itself, LCSH, the LC Name Authority File…), and more. There’s no such infrastructure to assure interoperable XML, despite the amount of domain-specific interpretation needed to make the Semantic Web work well. Apparently, Tim Berners-Lee claims that the metadata will be created using algorithms and heuristics, but that’s not what’s happening with tagging, and it’s not clear that will work (it certainly hasn’t shown much progress in the last five years, an eternity in web time).

It’s the old problem: Most content creators, especially casual content creators such as bloggers, aren’t interested in formally describing what we do; we just want to do it. Tell us that tight XML coding and description would make our work part of a greater whole, and I’m afraid we’ll mostly just yawn. (There’s a lot more to Marshall’s essay, including why the Semantic Web can be dangerous, and it’s charmingly written; go read it.)

70. Metacontent: Say what you (don’t) mean

Here’s a thought to give you nightmares: What you say in your econtent is only part of the message people receive. The rest is metacontent—and you have less control over metacontent than you’d like.

[As an example] I’m going to look at an article from [EContent], Geoff Daily’s “Epaper: the flexible electronic display of the future” [March 2005, pp. 36-41]. It provides a good overview of epaper and its potential for econtent: it is well-researched, with apt quotes from developers, forecasters, and gurus… I’ll stipulate that epaper will have worthwhile uses…but that’s not the point.

Consider the metacontent: there’s a photo illustrating Gyricon’s epaper-based SyncroSigns “as mutable message boards in hallways.” Great—except that the display pictured, presumably a poster-size board, is so low-rez that the biggest word could easily be read as “WELCONE” or maybe “UELCONE” or “UELCOME.”… A similar photo appeared in the April 2004 EContent, with the same awful resolution. The metacontent delivered, true or not, is “Another year of development hasn’t improved SyncroSigns from a nearly-unreadable resolution.”

Consider the text about Gyricon’s SmartPaper: “Some retailers have implemented epaper price tags that can update prices dynamically through the store.” Hmm. One market we shop at has had organic carrots priced at $1.99 a pound on the shelf tag for a year now and $4.99 a pound in the computer. When we complain, they charge the shelf price. With epaper price tags, the computer would automatically change the shelf tag to match. As a consumer, I don’t get warm fuzzies from that possibility. [Portions of “disContent” from EContent 28:9, September 2005.]

71. Worst tech moments of 2005

Same writer (Kevin Poulsen), same outlet (Wired News), same download date as the “Best” list. Are these the “nastiest” moments in technology in 2005?

TiVo boxes start blocking and expiring certain recordings. The Commerce Department asks that the “.xxx” domain not be approved by ICANN, after lobbying by that so-called “Christian” group Family Research Council—and ICANN caves. PayPal delays the transfer of more than $25,000 from the Something Awful website to the American Red Cross—the item doesn’t say “refuses” but “delays.”

When the space shuttle Discovery finally takes off after 2.5 years of fixing foam-insulation issues…a chunk of foam tears off the external fuel tank shortly after launch. Bush “corrupts” the National Security Agency.

Hwang Woo-suk’s magnificent cloning triumphs turn out to be phony. Sony BMG’s rootkit…well, you know about that one by now. Yahoo! makes it easier for China to imprison a dissident journalist. Apple attacks bloggers who run confidential information. An accused killer “blogs his descent into madness.”

I’d say the technology of levee building and maintenance had a “worst” in 2005 that puts all ten of these to shame, and that some of these aren’t technology failures at all. The shuttle and the fraud in refereed journals—that’s it as far as actual “technology failures” go, and even then it’s hard to describe an ethical failure as a tech failure. Good thing I don’t work for Wired News.

72. Survival: Not always predictable [1993]

If you believed some prophets a decade ago, CRTs would be long gone by now…

Speaking of dead ducks, consider hard disks. I saw several well-considered projections half a decade back that showed solid-state memory, with its far superior speed and resistance to crashing, becoming cheaper than hard disks within five years. That’s true: RAM is now much cheaper than hard disk storage was five years ago, and even the kind of stable RAM needed for solid-state disks is about where hard disks were five or six years ago. But hard disks are a whole bunch cheaper and faster now than they were then.

I can almost hear the engineers who have brought down the price of durable RAM: “Well, we made it for $100/megabyte; what more do you want?” Hmm. Right now, I’m paying $2-$3 per megabyte for hard disk storage; that seems like a good target. A tough one, though. [“Knowing niches, scratching itches,” CLSI Eastern Regional Users Group, May 17, 1993, Birmingham, AL. Flash RAM has come down to $50 per gigabyte in some cases—and hard disks are down to somewhere between $0.50 and $1 per gigabyte.]
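
Putting the 1993 targets next to that bracketed update, a quick back-of-the-envelope comparison (all prices come from the text; the midpoints and per-megabyte conversions are my own arithmetic) shows both technologies fell by three orders of magnitude while the ratio between them barely moved:

ram_1993 = 100.0            # durable RAM, dollars per megabyte (1993)
disk_1993 = 2.5             # hard disk, midpoint of $2-$3 per megabyte (1993)
flash_2006 = 50.0 / 1024    # flash RAM at $50/GB: about $0.049 per megabyte
disk_2006 = 0.75 / 1024     # hard disk, midpoint of $0.50-$1/GB: about $0.0007/MB

print(f"Flash fell roughly {ram_1993 / flash_2006:,.0f}-fold")          # ~2,048
print(f"Disk fell roughly {disk_1993 / disk_2006:,.0f}-fold")           # ~3,413
print(f"RAM-to-disk price ratio, 1993: about {ram_1993 / disk_1993:.0f}:1")      # 40:1
print(f"Flash-to-disk price ratio, 2006: about {flash_2006 / disk_2006:.0f}:1")  # 67:1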

73. XML: Even if it is snake oil, you’ll still feel pretty good

This Richard Hammond article in the January/February 2006 Online is a charmer—talking about the real benefits of XML as well as the level of snake oil involved in hyping this eight-year-old format. I don’t believe Hammond mentions the Semantic Web, perhaps the most grandiose claim for XML’s powers. He does make it clear that XML as a tool is powerful but can be taken way too far—and that interoperability between domains is by no means assured. After all, XML metadata is “using words to describe words,” and without authority control, the descriptive words don’t necessarily mesh. More to the point, most content creators have little interest in providing high-density metadata or the level of XML markup that would yield large benefits—which is probably why most content doesn’t use XML.
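
To make the authority-control point concrete, here is a minimal sketch; the two records and their tags are invented for illustration. Each is perfectly valid XML describing the same book, yet nothing in the markup says that “author” and “creator” mean the same thing, or that the two name forms are the same person:

import xml.etree.ElementTree as ET

# Two hypothetical records for the same book, from two different domains.
publisher_record = ET.fromstring(
    "<book><author>Crawford, Walt</author>"
    "<name>First Have Something to Say</name></book>"
)
library_record = ET.fromstring(
    "<item><creator>Walt Crawford</creator>"
    "<title>First have something to say</title></item>"
)

def author_of(record):
    # Guessing at element names is already a sign the schemas don't mesh.
    for tag in ("author", "creator"):
        element = record.find(tag)
        if element is not None:
            return element.text
    return None

print(author_of(publisher_record))   # Crawford, Walt
print(author_of(library_record))     # Walt Crawford
# Same person, but without a name authority file the strings don't match:
print(author_of(publisher_record) == author_of(library_record))  # False

Schemas settle what the elements mean; authority control settles which form of a name is canonical. MARC has both; ad hoc XML usually has neither.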

Worth reading and worth paying attention to; Hammond strongly favors XML where it makes sense.

74. Symbiotic and parasitic applications

I’m following various discussions about layered web applications–software that “layers over” other web sites or software. That includes “mash-ups,” API-based applications, and other “Web2.0ish” things.

Lots of these ideas and applications are wonderful. Once in a while, I do have mild skeptical thoughts about two aspects of them–particularly if and as such applications are suggested as replacements for more, shall we say, traditional applications rather than as extensions or complements. I’ll just mention one in passing, since it should be obvious to anyone who’s been through the dotcom bust: A layered application ceases to work if the underlying operation goes away. But you all knew that, right? Whenever any private business says “forever,” be a little cautious: “Forever” can mean “at least through the end of this fiscal year.” (The same is true for nonprofits and government entities, of course.)
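
A minimal sketch of that first point (the endpoint URL is invented for illustration): a layered application is only as durable as the operation beneath it.

import json
import urllib.request
import urllib.error

UNDERLYING_API = "https://api.example.com/v1/records"  # hypothetical endpoint

def fetch_records():
    try:
        with urllib.request.urlopen(UNDERLYING_API, timeout=10) as response:
            return json.load(response)
    except urllib.error.URLError:
        # The underlying operation folded, moved, or changed its terms;
        # the layer has nothing left to stand on.
        return None

if fetch_records() is None:
    print("Layered application out of business: underlying service gone.")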

My other mild concern is highlighted in the posting title. To wit: What’s the ‘business’ relationship between this layer and the underlying operation?

I think there are three general answers:

•    Best case: Symbiotic. The layered application clearly benefits from the underlying operation, but in a manner that also benefits the underlying operation.

•    Most common case: Mildly parasitic. The layered application uses some of the underlying operation’s resources without providing any benefit to the underlying operation–but the amount of resources used is relatively small, and the layered application doesn’t weaken the underlying operation except to the extent that load becomes a problem. (Not that load can’t become a problem; very few underlying operations have the apparent robustness of, say, GoohooMszon.)

•    Most dangerous case: Strongly parasitic. The layered application uses the underlying operation’s resources to compete with the underlying operation, directly or indirectly.

I guess I wonder whether layered applications of the third variety have predictably long lifespans. If, for example, you’re providing a service that appears to sit atop an online retailer and tells people how they can use what the retailer sells without paying for it…well, doesn’t the retailer have some motivation to find ways to prevent your use of their resources? And aren’t they justified in doing so?

This is just musing. Maybe the online sites that become underlying operations for layered applications are run by such powerful and/or benevolent corporations that they would never worry about parasitism.

Then again, maybe not. [Walt at Random, February 16, 2006.]

75. You’re stupid, they’re stupid, we’re all stupid

That’s one conclusion in PC Magazine’s “Sorry state of security” (February 21, 2006) and related articles on PC security. “No matter how many times we suffer the consequences of online attacks…we always get burned again. Expert advice, warnings, and even new security programs ultimately do no good. After more than ten years of this recurring nightmare, we’ve come to the conclusion that there’s only one possible explanation: Stupidity.”

Stupidity: Most PCs that a typical PC “rescue” technician sees either don’t have security software at all or use badly outdated versions. Stupidity: Software—not only operating systems but applications, even security applications—goes out with correctable coding flaws that make attacks easier. Stupidity: A year after all major antivirus vendors released signatures capable of identifying and stopping the Zafi-D worm, it was still among the ten most widely encountered viruses and worms. Stupidity: You respond to phishing attacks—or there wouldn’t be phishing attacks. Stupidity: You buy from spammers.

The truth is, most of you bring attacks on yourselves. If you don’t stay away from the seedier side of the Web, well, you’re being stupid.

It’s a tough story. Nobody avoids blame—including PC Magazine and the rest of the media.

Cites & Insights: Crawford at Large, Volume 6, Number 5, Whole Issue 75, ISSN 1534-0937, a journal of libraries, policy, technology and media, is written and produced by Walt Crawford, a senior analyst at RLG.

Cites & Insights is sponsored by YBP Library Services, http://www.ybp.com.

Hosting provided by Boise State University Libraries.

Opinions herein may not represent those of RLG, YBP Library Services, or Boise State University Libraries.

Comments should be sent to waltcrawford@gmail.com. Comments specifically intended for publication should go to citesandinsights@gmail.com. Cites & Insights: Crawford at Large is copyright © 2006 by Walt Crawford: Some rights reserved.

All original material in this work is licensed under the Creative Commons Attribution-NonCommercial License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/1.0 or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.

URL: citesandinsights.info/civ6i5.pdf