Pointing with Pride, Part 4
As I look for interesting stupidity on my part (never in short supply) and oddly wrong projections, I sometimes encounter forgotten gems.
Living with Contradictions was the lead essay. By my standards, it’s a classic.
The White Queen in Lewis Carroll’s Through the Looking-Glass has useful advice for those trying to find the one true path for the future. When she tells Alice that she’s one hundred and one, five months and a day, Alice responds, “I can’t believe that!”
“Can’t you?” the Queen said in a pitying tone. “Try again: draw a long breath, and shut your eyes.”
Alice laughed. “There’s no use trying,” she said: “one can’t believe impossible things.”
“I daresay you haven’t had much practice,” said the Queen. “When I was your age, I always did it for half-an-hour a day. Why, sometimes I’ve believed as many as six impossible things before breakfast.” (p. 200 in the Modern Library edition of The Complete Works of Lewis Carroll.)
Substitute contradictory for impossible, and I’m on the side of the White Queen. It’s too easy to assume that if A is going to happen, that means that B can’t possibly happen—or, vice versa, that B precludes A. But many contradictory situations arise because we substitute “all” for “most,” and the world is much subtler and more complex than most of us wish to consider. When I see “we all” in an article, I’m finding it useful to raise the same immediate yellow flag that I do when I see “inevitable.” “We all want to be connected all the time.” “We all use cell phones.” “We all watch too much television.” “We all” is generally a dangerous oversimplification. Take away the simplification, and contradictory trends can be reconciled, if only because life tends toward complexity.
A recent piece in The Industry Standard titled “Living with contradictions” makes some interesting points. “Sustaining contradictory ideas simultaneously is one of the hardest things for the human mind to do, and that goes a long way toward explaining why this kind of in-between position seems to be rare.” That comes after the author’s suggestion that it’s true both that “the Internet is changing everything” and that “a few big, traditional companies will continue to be major players in many industries.” He further notes that some Internet stocks were still good investments at their peaks—while most Internet stocks were absurdly overvalued. His most heretical stance: “Amazon, most likely, will neither overtake Wal-Mart nor fade into oblivion, but rather end up somewhere in between.”
I’m not sure that everyday people have that much trouble handling ambiguous or contradictory ideas—but the article is certainly right in noting that “the dynamics of the media and public discourse tend to polarize the discussion.” Ambiguity doesn’t work well in headlines or make a pundit’s reputation, so the tendency is to simplify at the expense of the truth.
“Categorical statements about the future of the Internet Economy…are likely to be proven incorrect. Wisdom lies in the ability to identify and interpret the subtleties, and to accept that the world is a complicated and contradictory place.” You could substitute “media” for “the Internet economy” or almost any hot topic you choose.
A few months ago, I was mentally belittling Michael Fremer (a staff writer for Stereophile and Stereophile’s Guide to Home Theater who firmly believes that LPs offer better sound than CDs) for one particular article. As part of evaluating some device, he noted that he gets better sound from CD-Rs than from the source CDs. My response was, in essence, “That’s impossible. How can a copy of a compact disc possibly offer better sound than the original?”
One skeptical answer would be that the CD-R copy is “better” only in a special sense: it loses just enough of the CD signal to add a bit of euphonic distortion, making it more “musical” than the original. The ad hominem answer is that it’s all in Fremer’s head.
Maybe that’s a lack of flexible thinking on my part. Bob Starrett’s “The CD Writer” in the September 2000 EMedia carries the title “High fidelity: archiving audio to CD-R.” In this one-page treatise, he notes that he has opined that “the discs you make yourself have much lower error rates than the pressed CDs that you buy at the store.” Challenged to demonstrate that assertion, he grabbed a bunch of AOL CD-ROMs, tested them for Block Error Rate (BLER), then copied one to CD-R and tested the resulting BLER.
Part of the essay was his surprise that the AOL spam tested as well as it did: error rates of 5.8 to 7.3, far below the maximum allowable 220. But his CD-R copies had BLERs of 1.1 to 1.4: “Like I said, recorded discs generally have lower error rates than pressed discs.”
Audio CDs tend to have considerably higher BLERs than CD-ROMs. When he tested six brand-new CDs, four had BLERs between 10 and 24, while one had a disturbing 142. His copies tested at 1.7 or so: that seems to be fairly consistent.
So what? From one perspective, none of this should matter. A good CD drive should be able to recover data perfectly from a disc with BLER less than 220—after all, if it didn’t recover the bitstream, how could you cut a “better” CD-R? On the other hand, discs with higher BLERs are likely to be more susceptible to failure through fingerprints and scratches.
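The arithmetic here is simple enough to sketch. A minimal Python illustration, using the BLER figures quoted above (the Red Book maximum of 220 average errors per second is from the text; the function names are my own):

```python
# Checking measured Block Error Rates (BLER) against the Red Book
# ceiling of 220 average errors per second. Sample values are the
# ones Starrett reported.

RED_BOOK_MAX_BLER = 220  # maximum average BLER the CD spec allows

def within_spec(bler: float) -> bool:
    """A disc is recoverable in principle if its BLER stays under 220."""
    return bler < RED_BOOK_MAX_BLER

def headroom(bler: float) -> float:
    """Margin left before scratches and fingerprints push a disc past the limit."""
    return RED_BOOK_MAX_BLER - bler

pressed_aol_discs = [5.8, 7.3]   # pressed AOL CD-ROMs Starrett tested
cdr_copies = [1.1, 1.4]          # his CD-R copies
worst_audio_cd = 142             # the "disturbing" brand-new audio CD

# Every disc here is within spec -- the data should read perfectly...
for bler in pressed_aol_discs + cdr_copies + [worst_audio_cd]:
    assert within_spec(bler)

# ...but the headroom differs enormously:
print(headroom(worst_audio_cd))  # 78
print(headroom(1.4))             # 218.6
```

The point the sketch makes concrete: all of these discs pass, but the low-BLER CD-R copies have far more margin against future damage.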
Applying a little White Queen thinking, Michael Fremer may not be as crazy as I thought. A disc’s BLER should be inaudible as long as the bitstream can be recovered fully—but that’s also supposed to be true of a disc drive’s jitter rate (which I’m not about to explain here). Reasonably sound tests suggest that keen listeners can hear the difference in drives with high jitter rates; is it possible that a high BLER also influences the sound in subtle ways?
That leads us into difficult territory, as Dana J. Parker discusses on the last page of that same EMedia in “The green flash and other urban legends.” You may know about the green flash that supposedly appears just as the sun sinks below the horizon—but that’s not the green flash she’s interested in. Parker wants to poke fun at the kind of device that Stereophile’s writers seem to tout with regularity—one reason I treat parts of Stereophile as a humor magazine.
There’s the classic green marker. For a decade now, some people have claimed that you can improve the sound of a CD by marking the inner and outer edges with a green felt marking pen. Today, Audio Prism sells “CD Stoplight,” a device that “reduces jitter” by absorbing “stray light” within a CD player—and, indeed, that’s one of the devices Stereophile recommends. Then there’s the more expensive CD Blacklight, a disc that you expose to bright light, then set on top of your CD. It glows—and supposedly increases stability, reduces electrostatic discharge, and reduces jitter. Other devices claim to reduce electrostatic discharge—which, as Parker notes, should be irrelevant for an optical medium.
Maybe so—and you won’t find any of these bizarre accessories on my CDs. But it’s possible to make a case of sorts. Yes, the device reading the CD is a laser; yes, the optical path should be impervious to electrostatic issues. But that CD rides on a physical assembly (and the optical signal is immediately converted to electrical form), and it’s not inconceivable that electrostatic interference could play a role at either of those two points. Unlikely, but not impossible—any more than it’s impossible for a copy of a CD to sound better than the original. Dana Parker, meet Robert Starrett.
Admittedly, some tweaks go beyond the wildly improbable. One $180 device claims to “polarize the polymer” on a CD “in such a way as to maximize the laser’s ability to retrieve stored data.” For a mere $20 per pack, you can get Rainbow Electret Foil. Attach a little strip of this foil over the CD logo on a CD, or on the speed indication on a record label, or on a tape cassette—or on a bottle of wine or a plant. It claims to “neutralize the adverse energy [created by interaction of all spinning discs with the gravitational forces] by inverting the energy pattern and therefore restoring it to a naturally occurring environmental pattern.” You say your wine and plants don’t spin all that much? You gotta believe!
There are three messages here:
· The improbable isn’t always impossible. I disagree with Bob Starrett a lot, but I see no reason to doubt his BLER tests.
· The conclusion above leads too many people into total credulity, where they’ll believe almost anything if the claims are packaged properly.
· When it comes to the musicality of your sound system, the perceived quality of your wine, and many other areas, the credulous people are absolutely right. If they believe a device works, then it does for them.
One of the Web’s better humor sources (either the Brunching Shuttlecocks or Modern Humorist) had a wonderful piece in early 2000. It looked just like the dreamy ads for prescription drugs you see in all the best magazines these days—but it was for the ultimate drug, Placebo. The testimonials from satisfied users could be just as genuine as for any other drugs—and the motto was dead on the money: “Placebo: It works because you think it does.”
When you’re truly bored and find yourself reading the tiny print that accompanies one of those ads, pay attention to the clinical results. In a surprising number of cases, clinical effectiveness is demonstrated by the drug yielding a slightly better result than a placebo—e.g., improving the condition in 18% of cases as opposed to the 15% of cases where the equivalent of water did the job.
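A quick sketch of why such a gap can underwhelm: a standard two-proportion z-test. The 18%-vs-15% figures are from the paragraph above; the sample sizes are invented for illustration, since the ads rarely make them prominent.

```python
# Minimal two-proportion z-test: is 18% improvement on the drug
# meaningfully better than 15% on placebo? Sample sizes (400 per arm)
# are assumed, not from any actual trial.
from math import sqrt

def two_prop_z(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
    """z statistic for the difference between two sample proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# 72 of 400 patients improved on the drug; 60 of 400 on placebo.
z = two_prop_z(72, 400, 60, 400)
print(round(z, 2))  # 1.14 -- well short of the ~1.96 needed for p < .05
```

With 400 patients per arm, that 3-point spread doesn’t even reach conventional significance; it takes a much larger trial before “slightly better than water” becomes a defensible claim.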
Since 2001, I’ve ripped all my CDs to MP3 twice, the second time at 320Kbps, because I think I can hear the difference. I’ve had occasion to listen to CD-Rs prepared by expanding those 320K MP3s and comparing them to the original CDs. Yes, in some cases I do believe the CD-R sounds better—but that could be euphonic distortion from the mild compression.
What else? An article mourning the death of MusicMaker, a website that let you legally prepare a CD-R filled with five to 15 tracks you wanted. It was too expensive and ran into two problems, one being the old Napster and the other being that many early DVD players wouldn’t play CD-Rs (but would play CDs). Today, you’d just buy the tracks you wanted and burn your own CD-R—but we’re just starting to get downloadable tracks at something like CD quality.
Also a piece on “stories between the ads”—the percentage of various magazines that was actually editorial content. In T&QT, I belittled the assurance that we’d all have huge personal lockers attached to our houses to make the inevitable success of internet retailing work: your drycleaning and groceries and pet food delivered securely. I was a little snarky about “It,” otherwise known as “Ginger,” the thing John Doerr said would be more significant than the web and someone else said would make Dean Kamen richer than Bill Gates. Remember Ginger? You may even see one now and then, usually doing tours or hauling overweight cops, always making its rider look like a dork: the Segway, which somehow wasn’t quite as revolutionary as people thought.
Twelve articles in 20 pages: A record for succinctness and variety I’m unlikely ever to match. The issue had everything: censorware, PC values, T&QT, ebooks, copyright, products, and both varieties of Press Watch, the good and the silly.
The Filtering Follies discussed two reports that offered “strong factual ammunition against mandatory filters”—one from Ben Edelman and one from Marjorie Heins and Christina Cho. Not that it made much difference in the end; Congress was more interested in being For The Children than it was in evidence.
Noting PC Values again, I put together “one good configuration” for November 2001—modifying a top unit to make it more balanced in my perspective. What seemed good at the time? A Gateway 700S: Pentium4 at 1.8GHz, 256MB RAM (remember when 256MB was a lot of RAM?), 64MB display RAM, 80GB hard disk, DVD-ROM and CD-RW drives, Boston Acoustics 2.1 speaker system, Windows XP Pro, OfficeXP Small Business, and a Diamondtron CRT display (19” tube, 18” viewable). All for a mere $1,977.
It’s interesting to look back to a flurry of postings on Web4Lib from August 28 through September 2, 2001 on whether ebook readers would be big hits then, soon, eventually or ever. It makes interesting reading at this remove, since the readers on the market then are mostly gone now and today’s readers differ in a number of ways. I won’t quote from it here—and in any case nobody (including me) would suffer embarrassment, because I deliberately paraphrased all the excerpts and kept them anonymous.
Most fascinating article cited—well, here I think it is once again worth quoting in full:
Miall, David S., and Teresa Dobson, “Reading hypertext and the experience of literature,” Journal of Digital Information 2:1 (jodi.es.soton.ac.uk).
I was unaware so many English professors claim hypertext is somehow better for literature than linear text. Maybe it’s just as well. When I read some of the assertions quoted in this scholarly article, I’m even more convinced that I’ll never be a scholar. I never thought of books as “machines for transmitting authority” or that hypertext would somehow empower the reader or improve communication in general.
Miall and Dobson put together an interesting experiment. They took two short stories and split each one into chunks of text (one story for each of two experiments). Two groups of readers were asked to read and comment on the stories. For half of the readers, the chunks of text (presented on a computer screen) always ended with a “Next” link at the bottom of the page, offering a straight linear path through the story.
For the other half, each chunk of text (one or more paragraphs) included three hyperlinked words or phrases, designed to suggest a continuation focused on plot, character, or “foregrounding” (which I don’t fully understand). Readers could choose links as they wished. In both cases, there was no “back” function.
Here’s what makes the experiment interesting. All three links in each chunk of text had the same result: each linked to the next chunk of text. There was no way for readers to know that, of course, since there was no “back” function. Links were chosen so that the linkages made some sort of sense. In other words, all the readers were reading precisely the same text in precisely the same order—but half of the readers had reason to believe that they were choosing their own path.
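The design is easy to make concrete. A minimal sketch (chunk text and link labels are invented stand-ins, not the researchers’ materials) of the trick where every link in a chunk secretly resolves to the same next chunk:

```python
# Sketch of the Miall/Dobson link trick: in the "hypertext" condition,
# three differently flavored links (plot, character, foregrounding) all
# lead to the same next chunk, so both conditions traverse identical
# text in identical order.

story = ["Chunk 1 ...", "Chunk 2 ...", "Chunk 3 ..."]

def links_for(chunk_index: int, hypertext: bool = True):
    """Return (label, destination) pairs for one screen of the story."""
    nxt = chunk_index + 1
    if nxt >= len(story):
        return []  # end of story, no onward links
    if hypertext:
        return [("plot cue", nxt), ("character cue", nxt), ("foregrounding cue", nxt)]
    return [("Next", nxt)]

# Both conditions reach exactly the same chunks, in the same order:
hyper_dests = {dest for i in range(len(story)) for _, dest in links_for(i, True)}
linear_dests = {dest for i in range(len(story)) for _, dest in links_for(i, False)}
assert hyper_dests == linear_dests
```

Only the readers’ beliefs about their choices differ; the traversal itself is identical.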
How did it go? In the first sample, 75% of the hypertext readers “reported varying degrees of difficulty following the narrative. Only 10 percent of the linear readers made similar complaints.” Hypertext readers took longer to move from screen to screen; they thought the story was jumpy and that they were missing information. The second story—a different kind of story—yielded similar results. Hypertext readers found the story confusing. Additionally, hypertext readers didn’t comment on imagery as often as linear readers and tended to find the story less involving.
“Hypertext, as a vehicle for literary reading, seems to distance the text from the reader… The absorbed and personal mode of reading seems to be discouraged.” The authors try to avoid generalization, but their conclusions seem sensible to me.
This is very much a scholarly article from literary scholars; expect some slightly tough reading and more arcane politics than you’ll find in librarianship. But it’s worth plowing through as one of the few real case studies of the effects of hypertext on reading.
Another issue illustrating that some things never change. I grumped about a commentary in American Libraries that said all librarians wear “last century’s clothing”—and that “librarian fantasies” (the ones where a cliché librarian suddenly casts off the glasses, lets down the hair and becomes a fantasy woman) would never happen with doctors or lawyers. “If she’s really saying men don’t fantasize about prim female lawyers or doctors being overcome with desire and turning into fantasy women—it seems to me there are enough TV shows and movies to indicate otherwise.” What doesn’t change: Aspersions about the cliché librarian and the universal applicability of that cliché. Oh, and the idea that somehow doctors and lawyers have such great images in the media. Really?
Back then, enough seemed to be happening with ebooks that I ran several ebooks/etext roundups a year. I noted the passing of the Frankfurt E-book Awards, “created for a new technological form, yet judged on literary merit.” USA Today was serializing “ebooks”—but the example noted was 7,000 words, which isn’t even a novelette, much less a book. (The most common classification for word length is probably that of the Science Fiction and Fantasy Writers of America, which uses word count for determining Nebula categories. Short stories run less than 7,500 words; novelettes, 7,500 to 17,499; novellas, 17,500 to 39,999; novels, 40,000 words and up. The Hugo Awards use the same definitions.) This conforms with other ebook sales claims of the past: Many sold “ebooks” were (and are?) actually short stories.
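The SFWA boundaries quoted above reduce to a tiny classifier; the cutoffs below are exactly the ones given in the text:

```python
# The SFWA/Nebula length categories, as a small classifier.

def nebula_category(words: int) -> str:
    """Classify a work of fiction by word count, per the Nebula boundaries."""
    if words < 7500:
        return "short story"
    if words < 17500:
        return "novelette"
    if words < 40000:
        return "novella"
    return "novel"

# The USA Today "ebook" example, at 7,000 words, isn't even a novelette:
print(nebula_category(7000))   # short story
print(nebula_category(40000))  # novel
```

By this yardstick, selling a 7,000-word “ebook” is selling a short story.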
The longest articles in this extra issue were a Copyright Special on the Broadcast Flag (a bad penny that just keeps turning up) and Perspective: The Shifting Commons. As to the Broadcast Flag, a thorough unbalancing of copyright in favor of secondary rightsholders (Big Media), the FCC implemented it—and the courts struck it down as being way outside FCC’s authority, at least for now. Herewith my conclusions, after citing and commenting on a range of material:
Will the FCC take the proper course and laugh the Broadcast Flag proposal out of existence? Only time will tell. For all I know, that could have happened by the time this appears.
Even if it does, the experience is worth remembering. Elements of Big Media appear determined to assert absolute, total control over every use of “their” products, overriding first sale, fair use, and any other doctrines and without regard to secondary damage to consumers, the consumer electronics industry, the computer industry, or others.
It’s becoming increasingly clear that the MPAA and RIAA don’t think current copyright law is unbalanced enough. Given the history of prerecorded video and DVD, this attitude doesn’t appear to make commercial or financial sense.
The Broadcast Flag debate has no immediate effect on libraries, but the indirect effects could be considerable—particularly if this end-run or congressional action eventually crippled general-purpose computing devices, eliminated the possibility of archival copying, and possibly even eliminated free circulation. Would Big Media ever do something that would make it impossible for libraries to purchase and circulate music, movies, or books as they do now?
Do you need to ask?
I should have known that, faced with a choice of favoring big business or favoring the public—who, after all, own the airwaves that broadcast media use—the FCC would always, always favor big business, in this case Big Media.
The other essay was an oddity, combining two themes that connect primarily based on this truism: “People tend to generalize from their own situation, and that’s usually a mistake—even in this sentence.” The first portion had to do with Creative Commons and MovableType’s inclusion of a CC license in the default implementations of its software—something that seemed to anger a number of bloggers. The moral, as it pertained to the common theme:
Most of the brouhaha reflected in the two personal weblogs has to do with generalization, as does Arnold Kling’s essay. That is:
· One group is asserting that others believe that everything (at least within a category of work) should use CC licenses, and that such generalization is a bad idea.
· Another group is asserting that, because they personally don’t find CC licenses worthwhile, nobody should use them.
So far, I haven’t seen explicit evidence that the first assertion is real. Creative Commons most certainly does not suggest that everyone should use a CC license. I suppose there’s an “intellectual property is theft” crowd that might make such an argument, but neither Lessig nor Creative Commons are in that group. I would sharply disagree with such an assertion: CC licenses don’t make sense for everybody…. Neither general adoption nor general shunning makes much sense.
The second part—well, it’s not worth revisiting. But there was a third part, sort of—about contemplation. The background: I’d objected when David Levy said, in a presentation, that nobody had time to contemplate—and wrote a “Crawford Files” column extolling the off switch as the century’s most vital technological device, as it allows you to switch things off so you can think. One colleague noted that she and other extreme extroverts think things through by talking about them. An entirely separate article (in Atlantic Monthly) said that introverts are “more intelligent, more reflective, more level-headed, more refined, and more sensitive than extroverts.”
As an introvert, my response to this claim was “Give me a break” and other pithy comments about what I considered extreme overgeneralization. I drew another moral out of it all:
When you generalize by saying that nobody has time to contemplate, you’re wrong. (See the original column: Such a generalization was the trigger.)
When I generalize by saying that everybody needs to spend time in quiet contemplation, I’m also wrong….
I believe we all need to spend time thinking deeply. I believe we can all make such time.
If your style is such that thinking deeply is a talkative, social activity rather than a quiet, solitary activity, that’s a difference between your mind and mine.
And followed with notes on my own generalizations:
I’ve probably erred in making fun of some gadgets, technologies, and services just because I don’t find them useful. If so, I apologize—and I have reason to believe that y’all will accept my standing invitation to call me on such erroneous negative generalizations in the future. By now, you should know that I love (and use) thoughtful feedback, particularly when it expands my understanding by offering another viewpoint.
I will continue to be critical on at least the following grounds:
· Too many gadgets and technologies are touted as something everyone needs or will want. That’s automatically grounds for skepticism on the basis of false positive generalization. Other than food and water, there’s precious little that “we” all want or need.
· If I believe that a gadget is a solution to no need (that I can perceive), or is an absurd way to do something that something else does better, I’ll feel free to call it pointless. If I’m wrong, let me know. (I do not regard “It’s kewl” as plausible justification for a gadget, or at least as a good reason for librarians to think about the gadget.)
· It’s reasonable to say why I would find a system, technology, or gadget more problematic than promising; once in a while, I’ll try to note that others might find them wonderful. Maybe you really love the idea of “pervasive computing.” My sense that it’s a thoroughly dystopian notion is just that: My sense.
· And, at least to my mind, there are many devices that make reasonably good sense for thousands, millions, or tens of millions of users but that don’t necessarily work well within my conception of a library environment. My conception: Maybe not yours.
This issue was 26 pages long; it was the first issue longer than 20 pages (except for the December 2000 introductory issue). My original aim was 12 issues a year, ranging from 12 to 16 pages. Since there has never been a 12-page issue of Cites & Insights (so far), and since eight of that first year’s 13 issues ran 18 or 20 pages, I modified the aim—to 14 to 20 pages.
In attempting to maintain the 20-page limit and as I started to cover new areas, I found myself cutting out my own commentaries to leave room for everything else. That couldn’t continue. Part of what I said in the opening Bibs & Blather:
Given time, energy, competing pressures, and the sheer volume of stuff I want to write about, something’s gotta give. I’m trying to determine where I can provide added value and am willing to spend the reading, thinking and writing time to do so. In order to allow that process to move forward, a few changes are already in order:
Volume 4 will be “lumpier” than Volume 3. I’m abandoning the 20-page limit (at least for now), and the intervals between issues may be much more variable than they were last year. Yes, this issue is too long—but the more I edited, the less I was willing to cut, and I’ve already set aside 12,000 words for future issues. As for intervals, I’m aiming for a dozen “regular” issues with monthly designations, and anywhere from one to…(well, however many it takes) thematic issues. It’s possible that the first thematic issue will be out before ALA Midwinter, that is, roughly two weeks from this issue. It’s also probable that there will be at least one five-week or six-week gap between issues (April may be late this year), and even that various conflicts might lead me to emulate American Libraries and EContent and do a combined two-month issue.
How did I do with that? There have been a few thematic issues, sometimes around Midwinter, and the one for 2004 was, in my humble opinion, spectacular—but more about that next month. I did do one two-month combined issue, July/August 2005, and that could happen again. There was a long run when a few thousand words rolled over from one issue to the next, but not recently (and I’d rather not have it happen). As for the 20-page limit, there hasn’t been an issue shorter than 20 pages since February 2003. It’s been some 32 months since the last issue as short as 20 pages, and that one—the shortest issue of 2005—was almost entirely one essay.
In another section of that lead essay, I did my first objective “study” of liblogs, based on 234 liblogs listed in Open Directory. (Remember when Open Directory was a key source?) It was a quick check of how recently each blog had been updated, done on December 12, 2003, “before most people would wind down writing for the holidays.” Since then, the studies have become more ambitious.
The final Scholarly Article Access piece discussed the Public Library of Science as a publicity engine and other aspects of open access. While there was an Ebooks, Etext and PoD roundup, it came after almost a half-year gap: “Maybe that’s because I’m not paying attention—or maybe it’s because very little has been happening.” That essay discussed the death of Gemstar’s ebook operation and Barnes & Noble’s shutdown of ebook sales—but also cheerier items from eBookWeb and overly optimistic projections for new dedicated ebook readers, e.g., the Sigma Ebook from Matsushita.
Copyright Currents discussed, among other things, the SunnComm follies—SunnComm being the creator of MediaMax CD3, the silly “copy-prevention technique” that you could evade by turning off AutoRun. SunnComm also being the company that threatened to sue a researcher for pointing out the absurdly weak “prevention” and later backed off, saying the harm to its corporate reputation had already been done. Oh, and some of the events in the long-running SCO vs. Linux legal battle—the one where SCO’s head asserted that it’s unconstitutional to waive your rights as a copyright holder! (I’m not kidding: “SCO asserts that the GPL, under which Linux is distributed, violates the United States Constitution…”)
I was way too optimistic in the issue’s final essay, A Scholarly Access Perspective: Tipping Point for the Big Deal? The lead paragraph, before citing a number of examples and hoping that there was momentum to get away from the hugely expensive ejournal bundles:
While several aspects of scholarly article access remain active, I believe one recent and ongoing story may be most important for librarians and libraries. A growing number of academic libraries are finally saying “Enough!” to Elsevier and ScienceDirect, and the faculty at some universities are lining up behind the libraries—and even, in at least one case, calling for scholarly boycotts.
The biggest section was Library Access to Scholarship—roughly half the issue. That included a chunk of quotations from Scientific Publications: Free for All? and some early commentary on this landmark UK government report. There was also news on how universities were improving the Big Deal, back-and-forth about society publishers (and cross-subsidization of other society activities by profits from journals), and commentaries on a bunch of articles on OA and related issues.
In The Good Stuff, I found myself scratching my head over an early collection of scholarly articles on blogging, Into the blogosphere: Rhetoric, community, and culture of weblogs. Some of the papers were worth reading—but the foreword served as a powerful reminder that I was probably destined never to be a successful Ph.D. candidate in Rhetoric. Here are the first two sentences and the final paragraph:
Blogging offers one powerful way to embed a reraced, regendered liberal arts. The familiar system of studying/performing/credentialing is, as folks reading this piece know, premised on the magic number seven.
With the 4 E’s (explain, enable, embed, and enthymeme the verb) and the 7 reraced and regendered liberal arts (frequently presented as general education programs), as well as with the many suggestions, theories, insights, and inquiries of volumes such as Into the Blogosphere, we might have hope.
The Broadcast Flag again? I subtitled the Perspective (an Endless Story?). Excerpts:
On May 6, 2005, the U.S. Court of Appeals for the District of Columbia circuit ruled unanimously: The FCC exceeded its authority in establishing the broadcast flag. “We grant the petition for review, and reverse and vacate the Flag Order insofar as it requires demodulator products manufactured on or after July 1, 2005 to recognize and give effect to the broadcast flag.” The American Library Association and co-petitioners won…
Remember the first word in “broadcast flag.” This was never about protecting pay-per-view material or premium cable or preventing redistribution of a DVD or a CD. The material in question has been broadcast—over the airwaves that the U.S. government provides for free to a group of highly profitable businesses.
That material has already been paid for. The presumed intent is for it to reach the widest possible audience. It’s called broadcasting, not narrowcasting or restricted transmission.
Ever since the Betamax decision, we’ve assumed we had the right to watch broadcast TV as we see fit—delaying it, watching it over again, even (gasp!) fast-forwarding through commercials. MPAA hated Betamax, with Jack Valenti predicting it would strangle Hollywood. Quite the opposite happened—but Big Media has never given up its attempts to assert control over every use of its products, even after those products have been broadcast over the airwaves.
You can support copyright protection and still find the broadcast flag extreme, even reprehensible. You can support strong copyright protection and understand that the flag goes way too far. I do not believe that you can support the broadcast flag, or any variation of the concept, and claim that you believe in balanced copyright or in citizen rights.
The broadcast flag would injure every library and librarian, directly or indirectly. For now, it’s dead. Let’s hope it stays that way—and here’s to Public Knowledge, ALA, ARL, SLA, AALL, MLA, the Consumer Federation of America, Consumers Union, and EFF. They fought against this unreasonable regulation (and FCC power grab), and they won. At least this round…
The broadcast flag story isn’t over. I suspect no sane politician will embrace the notion of “breaking all the TVs” and “shutting down the TiVos”—but you can never tell.
In between, notes on the ruling, some of the comments, a little background—and some of the usual crapola from Big Media as they went about trying to get Congress to overturn the court order. Once again, studios threatened to withdraw their HD programs from broadcast TV. Once again, the idea was that “consumers could lose content”—and once again, it was a hollow threat. To the best of my knowledge, there is no currently active legislative attempt to validate the broadcast flag—but Microsoft’s Windows Media Center will honor the flag, unfortunately.
A rare combined Net Media section discussed “wiki wackiness,” weblogs and RSS, and “audio blogging” (OK, podcasting). The wiki section included Meredith Farkas’ sensible commentary, an overdone condemnation of wikis from a pseudonymous blogger, and a typically unbalanced Wired piece on Wikipedia along with comments on that piece—and I was struck that most problems arise either from the apparent need for a zero-sum game (Wikipedia can only “win” if traditional encyclopedias lose—why is that?) or from pure hype. Then there was the Blogging, Journalism and Credibility Conference, which ALA cosponsored, and which seemed mysterious in attendance, reason for being and results.
Glancing at T&QT, I see early commentary on high-def discs—at a time when the format war still seemed avoidable—and a blurb on OLED TVs, likely to happen “but not for a while” (it’s happened, sort of, with Sony’s $2,500 11" set). And I quoted this great sentence by John Blossom, explaining why epaper was the future and the “mass-produced publishing model for paper” is “dead” (some deaths just take longer…):
In general, content is moving towards the proliferation of contextualized content objects that are most easily monetized when they flow into the venue where their value is most easily recognized by very specific audiences.
Blossom’s an industry analyst. I’m not sure who else can come up with comments like that—or understand them, for that matter.
Ah, “folksonomy and dichotomy.” I couldn’t help poking fun at Clay Shirky’s “Ontology is overrated” with this short Perspective’s first subhead: “Dichotomy is Overrated.” So it is, and most current efforts to add folksonomy (tagging, etc.) to library catalogs and other databases recognize that fact, as few non-extremists actually argue for scrapping professional cataloging and indexing entirely:
There should be no dichotomy. “Popular tagging” has been part of the process of organizing and identifying items throughout history. The web makes it easier and some tagging applications make it fun. I wonder whether most web users are really interested in doing lots of tagging, but that issue will be settled over a few years.
Once you eliminate the dichotomy—once you think “and, not or”—I lose interest in trying to put down folksonomy or determine whether it really is a superior tool for all applications. More interesting questions are how tagging can be used effectively, and how tagging and formal systems can best complement one another. I’d like to think that people smarter than I am are working on those issues. I’m certain that people better informed on the topics involved, and far more likely to produce good results, are working on them.
The longest section: Library Access to Scholarship, focusing on the Federal Research Public Access Act, an OA-heavy issue of Research Information, and clusters of items from Open Access News, Dorothea Salo and DigitalKoans—real sources for OA coverage.
I enjoyed the lead essay: Perspective: The Lazy Man’s Guide to Productivity. I described my working habits and ways that I manage to get a fair amount done in relatively little time (deadlines, creative procrastination, a place to write, focus and unitasking, through writing, “one point five” drafts, touch typing, integrated formatting and realistic expectations) and noted some caveats. Here’s the start:
Once in a while someone asks me, “How do you do all that writing on your own time? Do you ever sleep?” Those questions arose more often when I was doing three columns (two monthly) as well as C&I, but they still come up. Recently, a colleague convinced me that they deserved more than my usual one-sentence answer to the first:
I’m lazy but I’m efficient.
That’s always been my answer. It’s true and relevant. The tough part was what followed. “I do almost all that writing in an hour or so every weekday and three or four hours each weekend.”
Looking back, I’m not sure how I did manage to write three columns and a monthly journal, a few speeches each year, even a book and a briefer book-type project in that amount of time. Maybe I’ve grown less efficient or a bit slower, but it all sounds improbable.
“Lazy but efficient” may be snappy but it’s less than useful. So, since you (at least one of you) asked for a longer answer, here’s more about how I manage.
Previewing Public Library Blogs: 252 Examples, a fairly long feedback/following up section (they don’t happen often—and this one was particularly rich in comments on On the Literature and On Authority, Worth and Linkbaiting). T&QT included questions about the safety of some Web2.0 applications, notes on just how compressed HDTV really is (at least 63:1 for Blu-ray, at least 155:1 for broadcast HDTV) and notes on how many people use Second Life.
Looking at products, I was (and am) interested in Zenph Studios’ process for turning a piano recording into data that can be used to create a new, higher-fidelity recording (using a special grand piano)—the first recording recreates Glenn Gould’s 1955 version of Bach’s Goldberg Variations.
Cites & Insights is sponsored by YBP Library Services, http://www.ybp.com.
Opinions herein may not represent those of PALINET or YBP Library Services.
Comments should be sent to email@example.com. Cites & Insights: Crawford at Large is copyright © 2008 by Walt Crawford: Some rights reserved.
All original material in this work is licensed under the Creative Commons Attribution-NonCommercial License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/1.0 or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.