Trends & Quick Takes
No, that’s not a typo. As a term, “Andersonomics” (which I’ve used elsewhere) isn’t happening—but the various flavors of “X wants to be free” continue to aggravate more than they inform. Here’s one from November 2007—“Against free” in Nicholas Carr’s Rough Type. Carr notes a New York Times op-ed by Jaron Lanier titled “Pay me for my content.”
It’s an interesting op-ed—if only because Lanier does something almost unheard of for a guru or pundit. Here’s the relevant comment, after noting that he used to dismiss those who complained about the “unremunerative nature” of content on the Internet:
Stop whining and figure out how to join the party! That’s the line I spouted when I was part of the birthing celebrations for the Web. I even wrote a manifesto titled “Piracy Is Your Friend.” But I was wrong. We were all wrong. (Emphasis added.)
Whuh? What’s that? Admitting you were wrong? Outrageous! Lanier notes that, as with others in Silicon Valley a decade and more ago, he thought the web would increase business opportunities for writers and artists—but he’s finding the opposite. The big names are “assembling content from unpaid internet users to sell advertising to other internet users.” You know—user-generated content and the like. Lanier thinks there should be better ways to provide affordable online content, ways that reward creators. That’s been difficult (micropayment schemes over the years have generally failed), but he makes a good point. He also thinks it’s important and concludes:
We owe it to ourselves and to our creative friends to acknowledge the negative results of our old idealism. We need to grow up.
Carr plays The Realist:
“Free” comes more from the inherent economics of the digital world than from the technical structure of online distribution and commerce. You can try to change the structure, but if you can’t change the economics your efforts will likely go for naught.
I’m not sure what that means, but I am sure Carr thinks he settled the discussion. The “inherent economics of the digital world” mean nobody’s going to pay for content. Gotcha.
Commenters? One says he’s paying for Carr’s content by coming to his site, since “you show me ads and make money doing so.” Bzzt. Sorry, wrong answer, thanks for playing. Carr only runs ads for his own books. I’m guessing most real bloggers (other than superstars) who use AdSense find that clickthrough-based pricing means they’re giving up sidebar space for little or no revenue. This snarky commenter gives a more substantial (if even less friendly) answer as well: “While I like your content, I don’t like it that much. So if you want to get ‘paid’ for your content, go write a book.” (Which Carr did, to be sure.)
Seth Finkelstein notes the first-level truth: people do pay for online content in two categories—porn and financial information. He also notes that “cultural content” has always been a tough (and frequently subsidized) market.
James Cortada wrote “Save the books!” as a Viewpoint in the December 2007 Perspectives from the American Historical Association. (Go to www.historians.org/perspectives/ and search for the December 2007 issue.) Cortada, a long-time IBM employee who’s also a historical writer, believes libraries are getting ready to jettison their 19th- and 20th-century books once they’ve been scanned (no matter how badly).
A problem is slowly emerging for historians in the form of librarians discarding books from their collections, a procedure that has potential long-term consequences for scholars doing research in the years to come. We need to understand the features and magnitude of the problem and begin to address it today…
The pace of disposing of such materials is about to pick up sharply over the next few years because Google is rapidly scanning tens of millions of volumes, with the intent of making these available online. Large research libraries are willingly participating by making their materials available to Google: Harvard, Michigan, and Oxford Universities, to name a few. Their goal is noble: to make millions of volumes of information available online in a convenient fashion and, soon, searchable. That last function—“searchable”—means enabling a Google search through all the scanned pages for information; for example, one that could list every reference made to oneself.
Once the scanning project is well underway, the temptation for librarians to dispose of their paper copies of books will be enormous because of lack of space and budgets to keep the originals. Their arguments will be exactly the same as what we heard over the past decade with magazines and journals: easy access, convenience, and so forth. The limitations of that strategy will also be the same, most notably the loss of the serendipitous effect of walking down an aisle of books on a topic of interest or the ability to work with the original artifacts as read in their day, compromising our effectiveness as researchers.
I’m not sure what to say about this commentary (it’s roughly 1,800 words, of which I’ve quoted just over 260). I’d like to say it just won’t happen—that libraries, and particularly ARL libraries, wouldn’t be that rash. On the other hand…well, Cortada does bring up newspaper microfilm and the extent to which libraries have abandoned print magazines.
Cortada’s point is this:
Historians individually and as a community should help librarians appreciate the value of holding on to individual volumes that make up the ephemera of earlier times and not simply capture an image of those books. One cannot assume that they appreciate the urgency of this issue; assume nothing, and have the obvious discussion with your university librarians about what to save. In short, inject yourself and various historical associations into the decision-making process that determines what is to be saved or discarded.
Is this happening? If so, how are libraries responding?
A digression: Some astute readers—hell, some non-astute readers—may note both an unusual level of randomness in this edition of Trends & Quick Takes and the use of older source documents, going back to 2007 in the cases above. (A digression to this digression: How on earth can T&QT be anything but digressions?) The randomness, which is SOP for T&QT, comes in part from the second factor: I’m trying to get just a little caught up. I’ve started using delicious (look, Ma, no interpunctuation!) to mark items I want to discuss or use, either for Library Leadership Network or here, rather than printing lead sheets on the spot. (Then I may go back and print lead sheets to organize a discussion—or I might use the items directly.)
But I’ve only been doing that since March 2009—and I already have 643 items bookmarked (as of 1 p.m. on September 11, 2009). Six hundred and forty-three. I’ve never had anywhere near that many lead sheets waiting to be used (or at least I don’t think so). Roughly 20 of those are for LLN (those ones get used faster—that’s my job!), leaving what, 620 or so for C&I—including 50 tagged “tqt” and many others that could wind up here.
So I’m trying to get a little caught up on earlier material, a process that may take a while. I see roughly three dozen items in the folder. Some of those I’ll toss when I get to them; a few, I’ll find I’ve already discussed elsewhere. Some will go into a different folder. The rest? You’re seeing some of them now.
Maybe that isn’t a digression. The next item up is “The beauty of the dialectical process,” posted January 10, 2008 on davidrothman.net—and it is at least in part about information overload and whether such a thing exists. You could consider the three dozen T&QT lead sheets and 620 virtual lead sheets a symptom of information overload—or you could consider my current approach to finding and flagging interesting source items an example of effective (or partially effective) filtering. My opinion? See the subhead for this item: I believe we’re getting better at learning to sip from the firehose.
The post itself is part of an ongoing discussion between David Rothman and Dean Giustini. The background (or part of it!):
· Giustini published an editorial in the December 22, 2007 BMJ entitled “Web 3.0 and medicine.” Among other things, Giustini says we need the Semantic Web because people spend too much time searching, not finding. He says, “In medicine, finding the best evidence has become increasingly difficult, even for librarians. Despite its constant accessibility, Google’s search results are emblematic of an approaching crisis with information overload, and this is duplicated by Yahoo and other search engines…”
· Rothman, who’s no great fan of “web 2.0” (or “library 2.0”) as a term, isn’t thrilled about “web 3.0” either—and did some self-proclaimed fisking of Giustini’s editorial. He takes issue with both sentences quoted above. To the first, he responds: “I don’t think I can agree with this premise. I think that Web tools have made the best stuff increasingly easier to find for those with the skills to use the tools.” His response to the second is more tentative: “Huh? How are Google search results emblematic of information overload?” More generally, he takes issue with blaming Google for information overload or glut, saying it’s the other way around: “in the hands of a skilled user, Google is a powerful tool for filtering out the chaff.”
· Giustini, correctly calling the discussion amicable, offered a riposte in his own blog to Rothman’s “Huh?” comment:
Google most certainly is emblematic (a visible symbol) of information overload, and in fact is the information specialist’s laboratory for it. It’s well-documented throughout the blogosphere that web 2.0 has resulted in too many RSS feeds, too much data and information from disparate sources with little connection to each other.
Google is the epitome, the very gateway to all of this information. 100-200 million searches a day! So yes we do have information overload for most searchers in Google. 99% of the information that we are finding in Google is irrelevant to medicine.
Infoglut is the most shocking byproduct of web 2.0.
(All emphasis as in the original—except that Giustini has the first paragraph highlighted with a yellow background.)
Which brings us to this post. Regarding the first sentence:
I see honest disagreement here.
I think Google is emblematic of the way that the clever application of technology overcomes “information overload.” The Web is huge, filled with an insane amount of information that is varyingly good, bad, ugly or [fill in your favorite adjective here]. But if one uses Google to search for Google Scholar Dean, the first four results are about Dean Giustini, the author of the UBC Google Scholar Blog. It took typing three words and I found EXACTLY what I was looking for in about 0.51 seconds. To me, this doesn’t paint an image of Google as a symbol of information overload.
As to the second sentence of the first paragraph, excerpting:
[T]here are many popular positions (technical, political, philosophical…) expressed in the blogosphere (and elsewhere) that I believe to be wrong-headed, foolish, unwise or silly…
I’m sincerely flabbergasted to hear a librarian (or any information professional) complain that there is “too much data” or “too many RSS feeds”.
“Web 2.0” doesn’t cause an information glut. What causes an information glut is being an information glutton, taking on more than anyone can reasonably manage. There aren’t too many RSS feeds. Rather, there are users who subscribe to too many RSS feeds. The solution isn’t for less data to exist, the solution is smarter, more selective use of the data. The tools that help us filter and manage the information that we care most about are continuing to improve in power and sophistication…
There’s more (a lot more), but let’s leave it at that.
I’ve been on both sides of this long-term discussion—and at this point, I agree with those who say the problem isn’t information overload, the problem is inadequate filtering. I still monitor 500 blogs, according to Bloglines—and that doesn’t cause me much grief. After all, I certainly don’t read every post from beginning to end! (With changes in blogging behavior, 500 feeds may mean fewer than 60 posts per day, and rarely more than 100. Note that those 500 feeds do not include mediablogs or other blogs with dozens of posts each day.) The 640+ delicious items at the moment? Realistically, I probably have had 300-400 lead sheets at a time. Now, the recent backlog doesn’t use real paper until it’s been re-filtered by a second look. It all works. It’s all good.
That’s Steve Smith’s version of my “top tech trend” for this year, “Show me the business model”—but it’s farther-reaching. It’s also the title of his March 2009 EContent column—which notes that, for all the attention paid to internet media, “the real money isn’t there yet.” (Not that the eyes are either: a recent report says that, for all of YouTube, Hulu and the others, 99% of all video is still watched on TVs.) Smith notes that on-air and print advertising sells at much higher rates—and produces more revenue—than most digital models. “We talk ad nauseam about digital being the real ‘growth center’ for media, but how can it be called growth without growing revenues?”
His advice for making money in online media boils down to five principles:
· Reaggregate. (Find more audiences, realizing that you’ll get a lot less from each audience member.)
· Charge advertisers more. (Online ads are relatively cheap at this point; maybe online/offline integration will help.)
· “Go hybrid”—make sure people pay you for something somewhere, don’t assume ads will pay the bills.
· Deal in data: “Ultimately, online publishers are not selling advertising against content but against audiences.”
· Create content on the cheap. “The age of mass media is over… The money available for original content creation will shrink … permanently.”
If that’s all true, it’s sad. Is it true? I’m not sure.
Computer magazines tend to be full of tweak articles, various ways you can improve (or at least modify) your computing experience—frequently for free. It’s a little rarer for things like TV and home theater, which makes “Money for Nothing and Your Tweaks for Free” in the June 2009 Home Theater fairly refreshing.
Some of the tweaks are free, some are inexpensive—and I suspect most people will find something here they can use and hadn’t thought about. Sometimes that’s as simple as cleaning the dust off your flat screens once in a while—using microfiber cleaning cloths or brushes specifically designed for the job (never Windex, and never spray a fluid directly on the screen). There are quite a few others, first for TVs, then for sound systems.
I haven’t spent much time with the Wolfram Alpha search engine, or answer engine, or whatever you choose to call it. But I have spent enough time—and read enough informed commentary—to recognize Steven Levy’s woo-woo “The Answer Engine” writeup in the June 2009 Wired for what it is: Levy once again losing critical detachment in the face of something suitably Shiny. Consider the conclusion:
[O]nce Alpha tells you how many Nobel Prize winners were born under a full moon, you’ll know that we’ve moved one step up the evolutionary ladder of knowledge.
Wolfram claims that Alpha “makes it easy for the typical person to answer anything quantitatively.” OK, let’s step back—unless Wolfram’s misquoted, that’s just dumb, because many questions do not admit of quantitative answers. Maybe he said “answer anything quantitative”—that is, that Alpha will make it easier to answer quantitative questions—which could be a relatively small subset of knowledge and the questions we’d like to answer.
Levy’s level of detachment is usually obvious from the start of an article, and this one’s no exception. His example of using Alpha: “Type in a phrase, hit Return, and knowledge appears.” (Emphasis added.) After a cheerleader act for Stephen Wolfram, he comes up with this detached comment:
So when Wolfram asked me, “Do you want a sneak preview of my most ambitious and complex project yet?” he had me at “Do.”
I bet he did, Steven. It’s one thing to be a fanboy; it’s another thing to be so blatant about it.
The Alpha site itself isn’t quite so woo-woo:
Wolfram|Alpha’s long-term goal is to make all systematic knowledge immediately computable and accessible to everyone. We aim to collect and curate all objective data; implement every known model, method, and algorithm; and make it possible to compute whatever can be computed about anything.
I’d even argue with that more modest claim—unless “systematic knowledge” reduces knowledge to that which is computable. I suspect Alpha’s claims are way too broad even in that case, but who knows?
Spending time with Alpha (in August 2009), I find that my naïve questions generally yield the Alpha equivalent of “Does not compute.” So I went to the examples—and I have to say, they offer a strange, befuddled form of knowledge. Say I want to compare the New York Times “vs” the Wall Street Journal. (I don’t, but that’s an example.) Alpha returns comparative circulation and the names of the publishers (also the websites and countries of publication). Frankly, about the least useful contrast you can make between these two publications is that one has twice the circulation of the other—but it’s a computable distinction. So I tried a few more… “Analog vs Asimov’s” yields “isn’t sure what to do with your input.” (Using the full names of the two publications doesn’t help.) “Time vs Newsweek” yields the same “not sure what to make of it” result…which makes the first example seem, um, canned. I tried a few more, ones that are directly equivalent to the example given. PC World vs PC Magazine? Isn’t sure. EContent vs Online? Ditto. Modesto Bee vs San Francisco Chronicle—ah, another case where it gives relatively useless numbers. Stereophile vs Absolute Sound (a meaningful comparison)—no result. (Yes, I tried others—with little success.)
My first response to Levy’s example was “Who cares? How does that ‘fact’ add to the store of human knowledge?” My second response—thinking it over—was to check Wolfram|Alpha, two months after an article in which Wolfram says “his engine would have no problem doing this on the fly.” Guess what? “How many Nobel Prize winners were born under a full moon?” asked on August 9, 2009 yields: “Wolfram|Alpha isn’t sure what to do with your input.”
Um, Steven? Before you tout the wondrous abilities of a new device and give an example of those abilities, shouldn’t you try the example? (I expected this one to work—because I expected it to be a canned result.)
The site proudly claims to already contain “10+ trillion pieces of data, 50,000+ types of algorithms and models, and linguistic capabilities for 1000+ domains.” We get that “systematic knowledge” phrase again along with wording touting Alpha’s “ability to understand free-form input.” If it can understand free-form input, why can’t it do anything with it?
I managed to come up with some workable examples—but “workable” in an odd sense. “UC Berkeley vs USC” yields a table that makes the two institutions look pretty much alike—except that USC has a higher percentage of grad students and is thus, presumably, a more serious institution. How many people believe USC is directly comparable to, or better than, UC Berkeley? Well, computationally, USC shines…
Maybe there’s the rub—quite apart from the extent to which suggested examples are peculiar examples (that is, broader sets of the same things simply don’t work). The things that Alpha tells me about USC and UC Berkeley aren’t all that significant—they’re a small collection of facts. But they are computable.
Yes, it’s possible that Wolfram|Alpha will someday be more than a sideshow. Levy’s level of adulation is, if nothing else, wildly premature.
When a different writer at Wired.com did a little item saying W|A is no good at “cool” searches, commenters were on him with a vengeance—mostly saying “It’s not a search engine” and defending its magnificence. One of them gave an example—and the example may illustrate the limits. He notes that “Male age 19 6′2″ 215 pounds” will yield a table with BMI, ideal weight, fat mass and a couple of other facts (or presumed norms). But that’s not how someone would use it, I believe. I typed in exactly the equivalent for my own age, height and weight, and did indeed get such a table (saying my “ideal weight”—one of those odd constructs—is two pounds less than my actual weight, even though my BMI is well within the preferred region). But, you know, I’d be more likely to ask “Am I fat at 5′10″ and 161 pounds?”—and that question yields, well, you know by now. W|A claims an ability to understand free-form input, a claim at which it manifestly fails.
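For what it’s worth, the BMI figure in that table is the one thing any of us can check without W|A at all; here’s a minimal sketch using the standard U.S.-units formula (703 times pounds divided by height in inches squared). The two sets of numbers are just the examples from the text, not anyone’s actual medical advice.

```python
def bmi(height_inches: float, weight_pounds: float) -> float:
    """Body mass index via the standard U.S.-units formula: 703 * lb / in^2."""
    return 703 * weight_pounds / height_inches ** 2

# The two examples from the text: 6'2" at 215 pounds, and 5'10" at 161 pounds.
print(round(bmi(74, 215), 1))   # about 27.6
print(round(bmi(70, 161), 1))   # about 23.1, inside the usual 18.5-24.9 "normal" band
```

Which is the point: the computation is trivial; the value W|A would have to add is in understanding the free-form question wrapped around it.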
Another rave review of W|A cites the things it does very well—and they’re a very limited set of things, mostly returning results that aren’t particularly useful…and wrong in at least one case. The commenters get more and more into “You’re not asking the right questions,” which is a tough defense for a tool, particularly when one asks precisely the kind of questions suggested by the site. At least one example of what W|A does so well is apparently also canned—directly comparable searches just don’t work. Many comments cite abstruse questions—but, as the writer noted, many perfectly normal computational questions just don’t work.
Sure, it’s alpha. Sure, it’s a specialized tool—but when it fails at the very claims it makes for itself, and when its results fail the significance test in so many cases, one wonders whether there’s a fundamental disconnect. When more careful commentaries admit that it’s “kinda picky” and demands a “specific syntax,” it’s clear that the site’s own claims far outstrip reality…and, to be sure, Steven Levy’s fanboy commentary.
In February 2008, Karin Dalziel wrote two successive essays on “professional social networking” at nirak.net – musings of an LIS student (www.nirak.net—note that Dalziel now has her Master’s but has kept the subtitle). You’ll find them on February 6 and 7 respectively: “Professional social networking: why and how” and “Do’s and do not’s of professional social networking.”
What is professional social networking? In Dalziel’s case, it doesn’t mean she’s doing social media as a career—it means she’s used social networking for her career. Since she explicitly uses a Creative Commons Attribution (BY) license and it’s very good stuff, I’ll quote most of the posts, with comments at the end of each post.
When I started library school a yearish ago, I knew no one in the library world. I had never heard of Stephen Abram or Walt Crawford, let alone Meredith Farkas or Karen Schneider. I had only started working in a library a few months before, and despite the fact that my first job was as a page in a library, my knowledge about libraries was limited.
I found that I really liked my first library class…and it spurred a lot of thoughts in my existing blog. As time went on, my blog became more and more about library stuff. At the same time, I sought out other library blogs and subscribed to them. At one point I was subscribing to hundreds of library blogs—I have cut back since then. Reading blogs did several things—it gave me glimpses into the different types of careers I might have, it clued me into what librarians were talking and thinking about now (something reading the professional literature just didn’t do) and also let me experience what a conference was about before I went. By selectively delving into the archives of some of the more long running blogs, I was able to gain an appreciation of where the profession has been in the last few years.
After reading blogs and writing for a while, I started commenting. I tried to keep up with my comments [and found a Firefox plugin, coComment, that watches comments for her]. I also started to examine my other web presences. I had a MySpace profile…I cleaned it up a bit so it looked presentable for potential employers and colleagues to find. I started actively seeking out librarians on social networks—looking through friends of friends for names I recognized, mostly. I did the same on Flickr, signed up for a Facebook account, etc. I joined the Ning network “Library 2.0” and was active there for a while…
Somewhere along the way I redesigned my site and migrated to WordPress from Movable Type. I created a second site at karin.dalziel.org to serve as my C.V… I started treating everything online as part of my professional identity—this may not always be important, but I believe it makes a difference, especially in the year or two before job hunting. That said, I tried not to totally stifle myself, either—much of my life is online, after all, and I don’t want to completely cut that off. Another big change was to start using my real name for nearly everything—commenting, site logons, etc. I still have a few places where I use an alternate logon, but there aren’t many…
Now I am in the maintenance phase of my online life—I take a look at new applications occasionally, but mostly stick with what I have…
A few specific examples of what online networking has done for me:
I created an “Open Access for Librarians” presentation for a class… This was the first thing I put on my “Publications, Presentations and Projects” part of my professional website. I quoted Dorothea Salo’s blog Caveat Lector in my presentation, so I sent email with a link to the presentation on my own site so she would know. (At the time, I considered this more of a professional courtesy than anything….) Dorothea linked to my presentation (and complimented me on my website!!) and it was also picked up by Peter Suber and American Libraries Direct… It was also featured on the home page to my own library’s website.
More recently, I gave a brown bag talk on Zotero, an open source citation management program I have been using for over a year... I added the talk to my website…and told people about it in Twitter and on my blog. I also responded to a request for slogans on the Zotero forum, pointing to the research LOLcats I made for the presentation on Flickr. I got a nice email thanking me, and got a free Zotero T-shirt and stickers. I was also recommended to do another presentation on Zotero.
Can it work for you?
I don’t necessarily think the online social networking approach will work for everyone, but for me, it has been amazing. I can’t afford to go to that many conferences…but online I can take part in conversations I wouldn’t otherwise be able to. It’s not a replacement for traditional, face to face networking, publishing, and conferences, but it is a great supplement. Another huge advantage for me is that I am a little shy when meeting people for the first time, but if it is someone I know from online, I at least have a way to start up a conversation.
Dalziel didn’t set out to build The Dalziel Brand as such, but she picks up on what I consider important lessons: Using your real name (or at least using a consistent handle, so people know who they’re dealing with), being aware of your online totality, recognizing that your online identity is part of your professional identity—and following up possibilities to see where they can lead. I’d add “and believing you’re as qualified to speak up as anybody else out there,” but she doesn’t explicitly say that.
Portions of the second post, leaving out most of the details:
Do: Learn how each social network works…
Don’t: Use networks to spam people…
Do: Choose the networks that work for you…
Don’t: Join networks for the sole purpose of asking for a favor…
Do: Put up pictures of yourself…
Don’t: Put up potentially embarrassing pictures of yourself…
Do: Check your name in search engines…
Don’t: Fall for “Search Engine Optimization” offers…
Do: Share your knowledge…
Don’t: Become locked into your opinion…
Do: Carry business cards with your web address at all times…
Don’t: Complain, gripe, be snarky, or otherwise be overly negative…
Do: Utilize a number of social networking sites in your “main” site…
Do:…Link early, link often.
Don’t: Limit your networking to online.
Do: Use Creative Commons licensing whenever possible…
This concentrated list is 121 words out of 1,142 in the original post. The other thousand-odd words add meat to these bones; the whole is a remarkably sound starting point for social networking. I wish I’d had this list ten years ago—and I might benefit from it now as well. Not that I’d always agree 100% with everything in the list; what fun would that be?
Oh, and with regards to the final “Do”: On February 22, 2008, Dalziel posted “Why I use Creative Commons and not public domain,” after someone (commenting on another blog entirely) called Creative Commons a “great leap backwards” from Public Domain. It’s an excellent post, well worth reading.
Scott Rosenberg’s written a book about blogging—and maybe it’s not surprising that Wired’s related interview says it’s not likely anybody’s written a “coherent narrative of blogging” within a blog. (OK, the interview’s by Steven Levy, so you can’t expect a lot…) Here’s the first and probably stupidest question and Rosenberg’s good response. “Here’s something I bet a lot of people ask: If blogs are so great, why did you have to write a book?” The response: “It’s an inevitable question, but it’s illogical. When Greil Marcus writes a book about Bob Dylan, do you say to him, ‘Why’d you write a book? You should have written a song.’”
· There’s an interesting article in the July/August 2009 ONLINE (36 pages before my column, which of course you should read), “Don’t Confuse Me with Facts: Explaining Research-Based Information to Experience-Based Listeners.” It begins with a discussion of how the author answers a question she says she’s “often asked,” namely “Why is it that there are more kids with disabilities than when we were young?” She goes through a laundry list of reasons why that isn’t really true—and says that, when she gave a similar answer to one medical technician, the person said “But I still think that there are more.” Which she interprets thus: “In other words, ‘Don’t confuse me with the facts!’” This is interesting because, in her spiel, there are lots of suppositions and assertions—but not one quantifiable fact. Later, we get a perfectly reasonable question followed by “You can hear what the speaker really wants to rant about beneath the reasoned question.” I dunno. I have problems with “research-based” responses that don’t cite any research. Maybe that means I don’t care about facts, but I don’t think so.
· Kate Sheehan makes a useful distinction in “time’s on our side (sometimes)”—a February 23, 2008 post at loose cannon librarian. A discussion on a library list over whether libraries should buy Blu-ray discs involved a couple of other assertions: That downloads will wipe out discs anyway, so why bother—and that relying on downloads increases the digital divide. (OK, there was more to it—the view that, what the heck, nobody cares about better picture quality, so why bother with Blu-ray?) Sheehan suggests that they’re both right—but on different time scales. That is: Downloads will (or might) eventually (maybe) replace DVDs and Blu-ray discs—but not for a long time yet, and particularly not for people who don’t even have 768K “broadband” yet, much less the 20Mbps broadband you’d need for true high-def streaming. In other words, what a library should buy now for use over the next five to fifteen years is different from where things might eventually wind up. Would you consider a library that refused to buy audio CDs in 1990 (or 1995 or 2000—or, for that matter, 2009) because, after all, eventually downloads will replace them “forward-looking” or anti-patron?
· Sometimes a rant is so well done and probably so deserved that you just feel the need to link to it. So it is with the brilliantly titled “Post #103” by Mike Simpson on A splash quite unnoticed, which appeared on May 29, 2008. You’ll find it at www.ice-nine.net/~mgsimpson/asqu/archives/103 (the dash is part of the URL). Simpson had stomach flu and had been watching “Some Vendor’s webinar” (I’m delighted to say Simpson also loathes “webinar” as a term). There’s no way I can do this piece of writing justice through excerpts, but I will quote four sentences from two paragraphs, separated by an ellipsis: “While we’re on the topic of your horrible slides, why are there gross grammatical errors in your canned presentation? Do you read your own slides to make sure they make sense?”… “You have now claimed that open source products require a huge investment in local support costs. You are now a lying tool.” Seriously good stuff.
· Sometimes there’s a story title that, to some of us, writes the story better than what follows. In this case, it’s from a June 5, 2008 BusinessWeek article: “Online polls: How good are they?” Maybe you’ll find the story more convincing than I did (although it raises a few flags, it basically says fine, just fine).
· I never got around to doing a predictions-and-followup story this year. Silicon Alley Insider put together a nice set of “the worst predictions for 2008” in a December 29, 2008 story—one that also cited some reasonably good ones. Some of the worst? PC World said Linux would gain major market share. CNet predicted broadcast TV would die (in 2008!), that PCs would become passé and a cluster of other bad guesses—er—informed projections. InformationWeek and others saw the internet defeating Chinese (and other) censorship. BusinessWeek saw fast and major changes for AOL. TheStreet saw the Wii falling out of favor. (Among several “best predictions,” the story includes “the shine comes off Google”—which is presumably why people are touting Google Wave as the best thing since the internet, feel that Google Reader could be a FriendFeed replacement, and seem actively hostile to the idea that another search engine could be useful.)
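Back to the Blu-ray item a few entries up: the bandwidth arithmetic behind Sheehan’s time-scales point is easy to sketch. Assuming, purely hypothetically, a 4GB compressed HD movie file (my number, not a figure from any of the sources), here’s how transfer time compares at 768Kbps “broadband” versus the 20Mbps you’d want for true high-def streaming:

```python
def download_hours(size_gb: float, speed_mbps: float) -> float:
    """Hours to transfer size_gb gigabytes (decimal GB) at speed_mbps megabits/second."""
    bits = size_gb * 8e9                 # gigabytes -> bits (1 GB = 10^9 bytes)
    return bits / (speed_mbps * 1e6) / 3600

# Hypothetical 4 GB HD movie:
print(round(download_hours(4, 0.768), 1))   # about 11.6 hours at 768 Kbps
print(round(download_hours(4, 20), 2))      # about 0.44 hours (under half an hour) at 20 Mbps
```

An overnight download versus a coffee break, for the same file: that order-of-magnitude gap is exactly why “downloads will eventually replace discs” and “buy discs now” can both be right at once.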
Cites & Insights is sponsored by YBP Library Services, http://www.ybp.com.
Opinions herein may not represent those of Lyrasis or YBP Library Services.
Comments should be sent to email@example.com. Cites & Insights: Crawford at Large is copyright © 2009 by Walt Crawford: Some rights reserved.
All original material in this work is licensed under the Creative Commons Attribution-NonCommercial License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/1.0 or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.