Cites & Insights: Crawford at Large
ISSN 1534-0937
Libraries · Policy · Technology · Media


All of Cites & Insights 9, Number 2: Midwinter 2009


NOTE: This HTML version is intended for on-screen use only. Please do not print it out in full: It will use much more paper than the PDF version!

A was for AAC: A Discursive Glossary, Rethought and Expanded

The year was 2004. The time, “right about now” (as this appears)—that is, shortly before the ALA Midwinter Meeting. It was a special Midwinter issue—Volume 4, Issue 2, Midwinter 2004. I hadn’t yet started a blog (that was 15 months away). I was still a member of the LITA Top Technology Trends panel of “trendspotters,” still writing a column in American Libraries and still speaking fairly frequently. (Oh, and I still had a full-time job. How times change!)

Here’s most of how the original “discursive glossary” came about:

Eli Edwards asked a perfectly reasonable question about OpenURL. I attempted an email response, but also recognized that I throw around a fair number of abbreviations and specialized terms in Cites & Insights, rarely expanding them except on first use.

That thought—combined with some mandated vacation and my continued uncertainty as to how this is all going—resulted in this issue. It’s certainly not complete (I didn’t go back beyond 2003 for abbreviations), it’s inconsistent (I name a few people and weblogs, but omit most of those I value), and I vary between pure description and opinion. (I hope this issue meets the second antonymic definition of “discursive,” but you can be the judge.)…

The order is alphabetic…

That was then, this is now

I’d considered doing a brand-new glossary five years after the old one—but began to wonder whether I could spot enough interesting terms where I would add value to what you’d find with a web search.

Since I do now have a blog (and have become more deeply enmeshed in blog and wiki issues than I’d have ever expected), I asked for advice from “you.” And got some—not many responses, but consistently favoring such an update. One suggestion was supported in other responses: You saw value in my looking at five-year-old definitions and noting what’s changed.

Here it is. For most terms in the 2004 piece (excluding personal-name entries for library people), I include part or all of the original commentary in smaller indented paragraphs (as usual for quotes), with “Then:” heading the first paragraph. “Now:” offers new commentary—some silly, some historical, whatever felt appropriate. I also mix in new terms—and in those cases, the commentary begins with “New.”

I was surprised and a little disturbed when some commenters on the 2004 piece seemed to treat it as a real glossary, something sufficient unto itself. It wasn’t. It was a set of miniature essays organized as a glossary. Neither is this set intended as more than a set of commentaries. I’m not qualified to prepare a proper glossary for libraries or library automation. (I can suggest lu.com/odlis/, the Online Dictionary for Library and Information Science, as one such glossary, although I haven’t tested its completeness or accuracy.) Meanwhile, enjoy the trek—still “from AAC to zine.”

A-B

AAC

Then: Advanced Audio Coding, the form of lossy compression used by Apple iTunes. Supposedly offers better audio quality than MP3 at the same bitrate. Any form of lossy audio compression at aggressive rates (e.g. 128K) will yield audible differences in some music, to some people, at some times.

Now: There was a bigger problem with AAC than mediocre sound quality, at least as used in iTunes. The original iTunes downloads, offered at 128K, came with DRM, which means you’re not really buying a tune for your $0.99—you’re buying some unclear set of rights to listen to the tune, rights that can be (and have been) changed by Apple after you’ve paid your money. That’s just changed as I write this: iTunes is dropping DRM and moving to consistent 256K AAC. This may be the final step in doing away with DRM for music.

AACS

New: Advanced Access Content System, a primary DRM tool for Blu-ray discs. It was also used on HD DVD. As with most DRM systems, it doesn’t deserve much more discussion, since it can be circumvented by commercial pirates and mostly serves to inconvenience ordinary citizens.

ACCOPS

Then: The Author, Consumer, and Computer Owner Protection and Security Act of 2003, HR 2752, introduced by Howard Berman. While the press release for the bill talks about the “growing scourge of illegal activity on the Internet” such as “identity theft, distribution of child pornography, unlicensed drug sales…stalking, fraud, trademark counterfeiting, and financial crimes,” the bill is really about one such “scourge”: “Online copyright piracy, in particular, has gotten out of control.” With very minor exceptions, the proposed legislation is about infringement…

Now: One piece of ACCOPS became law as part of another bill—the piece making “camcording” an explicitly illegal act (see Family Movie Act).

The bill was absurdly overbroad. In making “enabling software” illegal, it would have—at least arguably—made Windows itself illegal (along with OS X and every browser). The legislation was reintroduced in 2004 but never passed. The good news is that most extreme-copyright legislation has gone nowhere over the last five years. The bad news is that most legislation seeking to balance copyright has also gone nowhere over the last five years.

ALAWON

Then: The ALA Washington Office Newsletter, an irregular free electronic newsletter distributed through its own list. Valuable to keep track of legislation related to librarianship. The ALA Washington Office website is also a valuable place to check on library-related legislative issues.

Now: While the list still exists, the most recent issue of ALAWON—retitled ALA Washington Office Newsline—appears to have been #126, dated December 21, 2006, at least according to the ALAWON Archives. Since then, ALAWON appears to have become a category in District Dispatch, www.wo.ala.org/districtdispatch/, the blog of the ALA Washington Office, which began in September 2006. (For all I know, the list may distribute each post categorized as ALAWON.)

If you care about legislation (and related matters) related to librarianship, you should subscribe to the blog. Since it offers email subscriptions as well as RSS feeds (and, of course, you can always visit the site itself), it appears to be a better tool than the former newsletter, with no downsides I can think of.

Alliance for Taxpayer Access

New: Operated by SPARC, ATA is “a diverse and growing alliance of organizations representing taxpayers, patients, physicians, researchers, and institutions that support open public access to taxpayer-funded research.” It’s a surprisingly broad group, including not only library associations such as ALA, AALL, ACRL, ARL, SLA and many others, but also a wide range of health organizations and others (including individual university libraries and other public advocacy groups such as Public Knowledge).

The alliance has a clear statement of principles and advocates in favor of some legislation and against other measures. The set of principles:

1. American taxpayers are entitled to open access on the Internet to the peer-reviewed scientific articles on research funded by the U.S. Government.

2. Widespread access to the information contained in these articles is an essential, inseparable component of our nation's investment in science.

3. This and other scientific information should be shared in cost-effective ways that take advantage of the Internet, stimulate further discovery and innovation, and advance the translation of this knowledge into public benefits.

4. Enhanced access to and expanded sharing of information will lead to usage by millions of scientists, professionals, and individuals, and will deliver an accelerated return on the taxpayers' investment.

One key current area for ATA is FRPAA, which see. You’ll find ATA at www.taxpayeraccess.org.

ALPSP

Then: The Association of Learned and Professional Society Publishers, a trade association for not-for-profit publishers. Publishes Learned Publishing, a quarterly print and online publication that offers free online access excluding the current volume. While it would be silly to say that all ALPSP members are good guys (some “nonprofit” societies and publishers still charge outrageous subscription prices—and the institutional subscription price for Learned Publishing is more than twice the individual subscription price), ALPSP is by design much more open to change and alternative models than its for-profit equivalents.

Now: In theory, what I said five years ago should be true (the definition itself is correct). In practice, at least when it comes to balanced copyright and Open Access, it’s difficult to differentiate the stances of ALPSP from those of IASTM and AAP’s PSP division. ALPSP as an organization and lobbyist seems to function as part of Big Scholarly Media even though its members are typically much smaller and nonprofit in nature.

analog hole

New: The fact that any DRM scheme (other than visible/audible watermarks) can be bypassed by converting digital media to analog form and either copying it in analog form or reconverting it to digital form. There is almost always some loss of quality in the process, but also almost always complete loss of copy restrictions.

When Big Media was pushing for things like DMCA—which negates fair use entirely within the digital domain—and tighter DRM, the analog hole was a good thing: It was the one and only way that fair use rights could be maintained within restrictive digital environments.

But, as you might expect, that wasn’t good enough. Starting in 2003, MPAA started touting the idea that the analog hole should be closed. How do you do that? There’s really only one surefire way: Require that all devices capable of receiving, storing, copying, converting or displaying media (whether digital or analog, since you must convert from digital to analog in order for human beings to use media) be licensed. In essence, either Big Media or a lapdog agency like the pre-2009 FCC would control the design and manufacture of all CD players, DVD players, TV sets…and computers, MP3 players, you name it.

Absurd? Not really. See broadcast flag: The FCC approved an analog-hole measure, although so far the courts have prevented it from being enforced and Congress has shown no appetite for restoring it.

There are ways to make copying more difficult, although these also depend on licensing to some extent. The best-known example is Macrovision, which blocked copying of commercial videocassettes by degrading the picture in a way that was usually invisible (although, on the TV set we owned in the early 1990s, it resulted in waviness on the top inch or two of the picture) but ruined direct copies. Even there, legislation was also required to make Macrovision work, since it’s easy to filter out the degradation.

Two of Big Media’s best friends in Congress, James Sensenbrenner Jr. and John Conyers, introduced the Digital Transition Content Security Act, HR 4569, in 2005, “to require certain analog conversion devices to preserve digital content security measures.” In other words, the bill would have attempted to close the analog hole by mandating one specific commercial technology. As always with such restrictive moves, the MPAA said the bill would “promote more consumer choice” while groups such as Public Knowledge pointed out the reality of the situation. Fortunately, HR 4569 went nowhere.

Technically, burning audio CDs from copy-protected downloaded music files is not a use of the analog hole (audio CDs are digital media), but it has the same effect: DRM does not survive the process.
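If the waveform-versus-metadata point seems abstract, here’s a minimal simulation of a digital-to-analog-to-digital round trip. This is my own illustration with made-up noise figures, not anything from an actual DRM system: the signal survives slightly degraded, while the usage flags, being metadata rather than sound, don’t survive at all.

import math, random

# One second of a 440 Hz tone as 16-bit PCM, with hypothetical DRM
# metadata riding alongside the samples.
RATE = 44100
samples = [round(32767 * math.sin(2 * math.pi * 440 * t / RATE))
           for t in range(RATE)]
drm_flags = {"copies_allowed": 0}   # invented for illustration

def analog_round_trip(pcm, noise=0.0005):
    # "Play" through a DAC (scale to +/-1.0), pick up a little analog
    # noise, then "re-record" through an ADC (requantize to 16 bits).
    analog = [s / 32768 + random.gauss(0, noise) for s in pcm]
    return [max(-32768, min(32767, round(a * 32768))) for a in analog]

copy = analog_round_trip(samples)

# The quality loss is real but small...
signal = sum(s * s for s in samples)
error = sum((a - b) ** 2 for a, b in zip(samples, copy))
print("SNR of the copy: %.1f dB" % (10 * math.log10(signal / error)))
# ...and the copy restrictions are simply gone: drm_flags was never
# part of the waveform, so nothing of it reaches the new recording.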

ARL

Then: The Association of Research Libraries, a membership organization of America’s leading research libraries. Deeply involved in copyright, scholarly access, and related issues. Publishes the ARL Bimonthly Report (available on the web).

Now: Still true, and ARL—composed of 123 large academic libraries in the U.S. and Canada—continues to be a key resource for these areas and for digital repositories and other library initiatives. ARL is the home of SPARC, LIBQUAL+ and CNI, and a lead partner in the Information Access Alliance and Library Copyright Alliance. You’ll find them at www.arl.org. While ARL: A Bimonthly Report on Research Library Issues and Actions from ARL, CNI and SPARC (the full title of the “ARL Bimonthly Report”) costs $50 a year in print form, issues become available (as full-issue or individual article PDFs) at www.arl.org/resources/pubs/br/index.shtml shortly after publication. (You can sign up for email alerts or an RSS news feed.)

If you want to contrast ARL with ACRL (which isn’t in this glossary): ARL is an organization made up of large academic libraries in the U.S. and Canada; ACRL is a division of ALA made up of academic librarians (from all institutions large or small) and others concerned with academic library issues.

BALANCE

Then: The Benefit Authors without Limiting Advancement or Net Consumer Expectations (BALANCE) Act of 2003, introduced by Congresswoman Zoë Lofgren of California’s Silicon Valley. The proposed legislation asserts that the authors of the DMCA did not intend a dramatic shift in the balance of copyright rights, noting a key clause in the House Judiciary Committee report at the time: “[A]n individual [should] not be able to circumvent in order to gain unauthorized access to a work, but [should] be able to do so in order to make fair use of a work which he or she has acquired lawfully.”

Now: Reintroduced in late 2005, cosponsored by Rick Boucher (along with Lofgren, two of very few legislators who support—and apparently understand—balanced copyright), sent to committee, never heard from since. It might have been a good starting point in undoing some of the dramatic power shift represented by DMCA—so maybe it’s not surprising that it disappeared without a trace.

bar camps and library camps

New: See unconferences—only because I can’t explain distinctive differences in practice between library events called “bar camps” or “library camps” and those called “unconferences.”

Berman bill

Then: Any of several bills suggested or introduced by Howard Berman (D-Hollywood). Berman’s bills would at one time or another have made it legal for the RIAA (and others) to hack the computers of anyone suspected of having infringing downloads. Berman’s statements make it clear that, in his mind, copyright holders should hold all the cards: He speaks of their “exclusive rights” to make decisions [about] use of anything to which they hold copyright.

Now: Berman’s still in Congress—since 1983—and still a liberal who’s also a copyright hardliner and acts as though he’s in Hollywood’s pocket. Maybe it’s not surprising that he’s also against the NIH mandate for open access, making the remarkably stupid statement that “the N in NIH shouldn’t stand for Napster.”

big deal

Then: An extreme form of bundling, exemplified by Reed Elsevier’s offers to campuses and consortia to provide electronic access to a much larger number of journals than are currently subscribed to in print, for a steadily-increasing price, with severe penalties if any of the print subscriptions are cancelled.

Now: A number of universities (including Harvard, Cornell, Duke and others) refused Elsevier’s Big Deal in 2003-2004. Unfortunately, the Big Deal approach not only hasn’t gone away; some big publishers are pursuing it more avidly than ever, attempting to lock libraries into large, expensive sets of subscriptions. ScienceDirect is a prime example of the Big Deal.

The term itself hasn’t been used as much lately—but that doesn’t mean Big Deals have gone away. A few paragraphs of “Journal economics—bundled or aggregated subscriptions to electronic journals,” published August 15, 2008 in ACRL’s Scholarly Communication Toolkit (www.acrl.ala.org/scholcomm/node/36):

While there are some short term advantages to the "Big Deal", there are also long term implications for purchasing aggregate subscriptions, some of which are problematic. Issues which must be considered include the following:

• Libraries lose control over selection decisions and may not be able to meet the changing needs of faculty, researchers, and students. Libraries cannot cancel titles that may no longer be useful to a campus, or may have declined in quality. Similarly, a bundled plan may include titles that never would have been selected by librarians or faculty, either because they are not needed or are not of sufficient quality. Bundled content with long-term subscriptions offers publishers little incentive to terminate lower quality titles.

• Libraries may end up committing larger proportions of their collections budgets to fewer publishers. The result is that libraries are less able to purchase new titles from other publishers, and may even result in the need to cancel some currently subscribed titles.

• The Big Deal can have an exclusionary effect on competition and the entry of new journals into the market, affecting the entire scholarly community. Ironically, these packages of electronic titles serve as an effective barrier to new journals at the same time that advances in electronic publishing would make it increasingly feasible for smaller, non-profit, or alternative publishers to enter the market.

Although that’s marked as a 2008 publication, it’s worth noting that the Big Deal refusals referred to are all in 2003 and 2004.

Big Media

Then: My term for several small groups of companies: the biggest record publishers, the biggest broadcasting conglomerates, the biggest movie studios. Big Media tends to act as a single force in copyright-related issues and tends to view fair use as an annoyance to their complete and absolute control over “their” creations.

Now: Some things never change. This is one of them. Big Media representatives consistently use scare quotes around “fair use” and frequently deny that it’s law, and pretty consistently state expansionist views of copyright as though they’re good for the consumer. I forgot to note that most Big Media groups would also love to see the first-sale doctrine go away, which is quite conceivable if and as things move from physical media to either purely digital downloads or media that require digital handshakes to operate.

Block, Marylaine

Then: “Librarian without walls” and inspiration to many. Had she not started Ex Libris years earlier, I might never have started Cites & Insights. Ex Libris has the virtues of brevity, wit, clarity and content; it’s a must weekly visit as far as I’m concerned. Also the inspiration for COWLZ.

Now: COWLZ is long-gone—and ExLibris has shut down after many good years.

blogs

New: See weblogs. While that’s not the term I would use in 2009, it’s the term I used in 2004, so I’ll leave the entry there. After three books and who knows how many articles, I’m not sure I have much new to say about blogs on this particular occasion…

On the other hand, I can offer the minimalist definition that appears in “Blogs and libraries” on the PALINET Leadership Network, a definition I regard as the most you can really say about blogs in general:

A blog is a web-based set of individual posts, by one individual or a defined group, initially presented to readers in reverse chronological order—that is, newest first.

Go any further than that and you’re in deep water. Read pln.palinet.org/wiki/index.php/Blogs_and_libraries to see why (and for part of a largely original set of articles on blogs, wikis and the distinctions between the two).
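Taken literally, that minimalist definition is nearly a data structure. Here’s a toy sketch in Python (my illustration, not anything from the PALINET article):

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Post:
    author: str
    title: str
    body: str
    posted: datetime

@dataclass
class Blog:
    # A blog, per the minimalist definition: posts by one individual or
    # a defined group, presented newest-first. Comments, feeds and
    # blogrolls are all extras the definition deliberately omits.
    posts: list = field(default_factory=list)

    def front_page(self):
        return sorted(self.posts, key=lambda p: p.posted, reverse=True)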

Blu-ray™

New: Blu-ray or Blu-ray Disc (BD) is the surviving high-density optical disc format, now that Toshiba has abandoned HD DVD. Why “blu”? Because Blu-ray (like HD DVD) uses a blue-violet laser rather than the red laser used for DVD. (They left out the “e” in “blue” so Blu-ray could be trademarked.) The blue laser has a shorter wavelength than a red laser, so can focus more narrowly. As a result (and with other changes), a single-layer Blu-ray disc can hold 25GB of data, as compared to 4.7GB for a single-layer DVD. Most commercial Blu-ray discs are dual-layer, holding 50GB. Two firms have demonstrated 20-layer discs holding 500GB. (That’s one terabyte on two 12cm discs!) You may also see BD-R, BD-RE and (rarely) BD-ROM. These are, respectively, recordable Blu-ray, rewritable Blu-ray and read-only Blu-ray.

The first sentence above says “high-density,” not “high-definition.” There’s no technical reason a Blu-ray disc couldn’t contain standard-definition video material, and some extras on some commercial Blu-ray discs are in standard definition. You could put 23 hours of SD video on one dual-layer Blu-ray disc: That’s a full TV season on one disc, with room to spare.
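If you want to check that arithmetic, assume a typical standard-definition stream of around 4.8 megabits per second (the DVD ballpark; actual rates vary by title). The rate is my assumption, not a Blu-ray specification:

# Back-of-the-envelope check of the "23 hours of SD video" claim,
# assuming ~4.8 Mbit/s for SD video plus audio (a typical DVD rate).
disc_bytes = 50 * 10**9            # dual-layer Blu-ray: 50 GB (decimal)
bitrate = 4.8 * 10**6              # assumed bits per second
hours = disc_bytes * 8 / bitrate / 3600
print("%.1f hours of SD video" % hours)   # about 23.1 hours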

Blu-ray’s not a newcomer. The first prototypes appeared in 2000. The name was adopted in 2002. The first Blu-ray recorder appeared in 2003. The first commercial movie releases were in 2006.

Will Blu-ray replace DVD? Not in any great hurry: Since most HDTV owners appear not to notice whether they’re actually watching high-def, most are unlikely to care about the quality difference between Blu-ray and DVD. The player and disc prices have dropped fairly steadily and new movies generally appear on Blu-ray and DVD on the same day, but it’s likely that DVD will dominate the market for years to come. You can conduct your own reality checks as to whether Blu-ray has commercial impact. Drop by Target, for example, and see how much of the optical-disc space is devoted to Blu-ray.

Fortunately, unlike the VHS-to-DVD transition, libraries don’t need to worry about obsolescence. Blu-ray discs, DVDs and CDs are all the same size and thickness. Every Blu-ray player also plays DVDs (and CDs), usually “upscaling” them to look better on HDTVs. (How do Blu-ray players play DVDs and CDs? By having a second optical pickup with a red laser. That may add a tiny amount of cost to the player, but it would be commercial suicide to release a Blu-ray player that did not maintain backward compatibility.) It is also supposedly true that Blu-ray discs are less scratch-prone than DVDs, since they have hard surface coatings developed as part of the Blu-ray development process.

Why did Blu-ray defeat HD DVD? In my opinion, there were three key reasons, the third (unfortunately) being the least significant:

1. Blu-ray was always a large coalition, and players were available from several vendors early on. Meanwhile, Toshiba was the only significant manufacturer of HD DVD players. (Sony was a key player in developing Blu-ray, but learned from the Betamax debacle.)

2. The Sony PlayStation 3. Even as media reports said that HD DVD players were outselling Blu-ray players and that there were only tens or hundreds of thousands of Blu-ray players, they overlooked the PlayStation 3, with more than 16 million sales as of September 2008. Every Sony PlayStation 3 is also a Blu-ray player, and in fact they were regarded as the best (and cheapest) Blu-ray players until fairly recently.

3. Blu-ray is better than HD DVD, with 67% more capacity per layer and a mandatory hard coating (the coating was optional for HD DVD).

One of the claims for HD DVD was that the discs would be cheaper to manufacture since they used existing production lines. That may have been true, but the difference never showed up where it counts. It turns out that disc manufacturers quoted essentially similar prices for commercial-release runs of either format as early as mid-2007, with at most a dime’s worth of difference.

BOAI

Then: Budapest Open Access Initiative—a major international push for open access. In some ways, BOAI pushes for a Grand Solution to scholarly access. I’ve criticized the approach and questioned elements of the BOAI FAQ at some length…

Now: As a clear set of principles, BOAI played a useful part in defining open access. As a movement or Grand Solution, it’s at best a work in progress. I’ve continued to write about OA from the perspective of improved library access to all literature (not just scholarly articles) and put together a group of resources on open access I highly recommend. Start with “Open access basics”: pln.palinet.org/wiki/index.php/Open_access_basics. That brief article includes links to seven related articles—and to a broader “Leader’s guide to open everything.”

Boucher, Rick

Then: If Howard Berman is the prototypical Big Media representative, Rick Boucher (D-Va.) has been a strong advocate for restoring balance to copyright. His proposed legislation has included the Digital Media Consumers’ Rights Act and others.

Now: Boucher’s still there—and he’s also been in Congress since 1983. He’s known as a key player on internet and electronic commerce legislation and received Library Journal’s “Politician of the Year” award in 2006 for efforts to protect fair use and expand rural access to the internet.

broadcast flag

Then: A Big Media initiative that would undermine convergence, possibly undermine general-purpose personal computing, and swing copyright even further in the direction of total control by the rightsholders. The FCC has approved the broadcast flag, pending final reading. There will most surely be efforts both in Congress and in the courts to overturn the decision.

Now: I suggested there would be more in C&I about the flag, “possibly even a special issue.” So there was, and not much later: The April 2004 Cites & Insights (4:5) was entirely devoted to the broadcast flag, a special issue that was replicated on at least one other site.

On May 6, 2005, the U.S. Court of Appeals for the District of Columbia Circuit, hearing a suit brought by ALA and copetitioners, ruled unanimously that the FCC exceeded its authority in establishing the broadcast flag. Of the suit’s three grounds for striking down the flag, the court ruled on just one: the grotesque seizure of authority the FCC was attempting to claim over the design of broadcast receivers (and downstream devices as well). To quote one sentence in the decision, after noting the FCC’s “strained and implausible interpretations of the Communications Act of 1934”:

As the Supreme Court has reminded us, “Congress does not…hide elephants in mouseholes.”

As always, Big Media immediately claimed that studios wouldn’t create HDTV material if there was no broadcast flag—a threat that was empty before 2005 and has continued to be empty today, when studios are mostly desperate to get people to watch.

There were, of course, immediate attempts to get Congress to provide FCC with the authority it wanted—and maybe more. Another attempt was made in 2006 by Ted Stevens. Meanwhile, although there is currently no legal requirement that any hardware or software enforce the broadcast flag, Microsoft included flag enforcement in the Vista version of Windows Media Center—and some NBC programs set the flag to prevent recording. (Actually, according to EFF’s reports, Windows Media Center over-enforces, going further than even FCC’s defunct regulation…and Microsoft later claimed it was reacting to some other flag it should have ignored.)

Incidentally, for those who believe ALA’s Washington Office is useless, I would cite the fight against the broadcast flag as one of many cases where ALA has made a key difference. ALA doesn’t win them all, but it wins a lot.

bundling

Then: Providing several related goods or services at a significant discount over all the elements priced individually. A common and frequently beneficial practice in library acquisitions and most other areas of commerce, dangerous only when (a) it’s used as a weapon to freeze out competition (as is frequently claimed for Microsoft’s bundling of applications with Windows) or (b) it’s used in a way that damages the long-term flexibility and resources of the buyer (as is being suggested for big deals).

Now: I wouldn’t bother to include this—but do note that Apple does its share of bundling as well.

C-E

CBDTPA

Then: The Consumer Broadband and Digital Television Promotion Act of 2002 (had it passed), introduced by Sen. Fritz Hollings (D-SC). The key provision of CBDTPA is that pretty much every digital device would be legally required to include undefeatable copyright-protection circuitry defined by the government. Which digital devices? “Any hardware or software that reproduces, displays, or ‘retrieves or accesses’ any kind of copyrighted work.” Since anything having fixed expression is copyrighted, that means any hardware or software that can access, copy, or display any digital file. The bill included remarkably few remnants of fair use…

Now: Hollings’ Razzberry (try pronouncing CBDTPA!) disappeared. Part of the idea showed up in the broadcast flag, though even that wasn’t so absurdly overreaching. Hollings, by the way, is not still in the Senate: He didn’t run for reelection in 2004.

CD

Then: Shorthand for Compact Disc or, really, Compact Disc Digital Audio, currently the most popular sound recording medium. Roughly 20 years old. To be called a CD, a disc must follow the Red Book specification (a licensed standard from Philips & Sony, developers of CD), which does not allow for copy protection. Thus, Cites & Insights calls audio discs with copy protection pseudo-CDs and Philips has expressed a willingness to sue publishers that use the CD logo on copy-protected discs.

Now: Pseudo-CDs have largely gone away—and, despite press to the contrary, CDs really haven’t, even though they’re now at the 25-year point at which the predominant audio medium has historically been ripe for replacement. (Yes, downloads are increasing and CDs are decreasing—but CD sales still represent the majority of music-industry revenue.) It does seem increasingly probable that no newer physical medium will replace CDs any time soon, if ever.

CDL

Then: California Digital Library, the “tenth campus library” for the University of California… CDL is known for work in the standards arena, and most recently drew attention through its statement that it’s paying $8 million for Elsevier ejournal access, half of all the money it spends on ejournals.

Now: Not a lot new to say.

censorware

Then: A more accurate term for “filtering” as applied to the Internet, since such software works by censoring particular addresses and language.

Now: I haven’t written much about censorware in some time—but if I was in Australia, it would definitely be a topic of current interest. There, the government is trying to impose mandatory censorware at the ISP level, even though it’s fairly well established that the “filtering” won’t do what it claims. Apparently, some of Australia’s largest ISPs are resisting the efforts, which among other things may slow internet access.

CIPA

Then: The Children’s Internet Protection Act, which is law now that it’s been upheld by the Supreme Court. The act mandates that any library receiving federal funds through either of two programs must have censorware (“filters”) installed on all computers capable of Internet access, including staff computers—but only to prevent access to images in three categories: child pornography, obscenity, and “material harmful to children” (which equates to child pornography or obscenity with age-appropriateness added)…

Now: The biggest issue now is that pro-censorware activists and others are ignoring the Supreme Court’s actual decision. Under that decision, an adult should always be able to request that censorware be entirely disabled without stating a reason, and the library should comply without undue delay—and CIPA does not apply to text, period.

CLIR

Then: Council on Library and Information Resources. Geezers like me may know this as CLR; the “Information” wasn’t always there. CLIR funds useful studies and reports and takes part in various initiatives relating to libraries and access to information. Under their earlier name, they also commissioned J.C.R. Licklider’s Libraries of the Future, which in 1961 essentially recommended that print books ought to disappear, as their physicality “makes them intrinsically inefficient means for storing, organizing, and retrieving information.” Live and learn.

Now: Licklider is no more right in 2009 than he was in 2004 or 1961, but CLIR continues to carry out useful studies and initiatives. The bimonthly CLIR Issues is available at www.clir.org/pubs/issues/issues.html.

cloud computing

New: What happens when you work on your netbook or smart phone after you’ve had a few too many liquid refreshments.

Alternatively, maintaining your files “somewhere” in that great internet cloud and using internet-based software to create and modify those files. When done as part of an overall computing strategy, using online applications and storing files online makes enormous sense. When done as a substitute for local applications and storage, I’m not so sure, particularly when the cloud you’re relying on is free or very inexpensive. Clouds can disappear.

It’s also odd that, at a point when a $500 notebook may have a dual-core CPU that’s forty or fifty times as powerful as the best computer you could buy a decade ago, inexpensive graphics cards offer graphics processing power that’s phenomenal by any ordinary standards, and a terabyte of storage costs $150 or less and requires only a single drive, we get the idea that we should forget all of that and use our computers as smart terminals. I haven’t spent much time writing or thinking about this, but somehow the idea of exclusive cloud computing as desirable strikes me as odd…and, by the way, unlikely to be less expensive if it did become ubiquitous. When someone’s giving you services, you have to wonder about the business model—and advertising really won’t pay for everything, particularly as we move from an era of excess to an era of limits.

compulsory licensing

Then: A proposed scheme whereby broadband users pay $3 to $5 more per month, passed on by the ISPs to an authority that would distribute it to music publishers and musicians on some equitable basis. That “license” would allow people to download whatever music they want from wherever they want. [The 2004 discussion runs almost 500 words.]

Now: It’s baacckk…and, unfortunately, EFF seems to think it’s a peachy-keen idea. As one who has never downloaded music illegally and doesn’t plan to download it legally either, I would really resent having my broadband bill go up by 20% in order to fatten the purses of RIAA members (with a few bucks probably making it to the wealthiest artists as well). The only legitimate basis for compulsory licensing is proof that we really are all thieves at heart, and that we really do all want lots of new music. Otherwise, it’s a music industry ripoff that penalizes honest people.

content

New: You can multiply your channels and media, add shorter forms and new distribution methods, but in the end it’s all about content—having something to say. Or, with some distribution methods, maybe it doesn’t matter whether you have anything to say or not.

There’s a tendency to “solve” a shortage of information in an area by creating a new channel for such information, without taking steps to be sure there’s content in that channel. I’ve seen that in LITA, where there now seem to be five ways for some interest groups to fail to inform members about their conference plans. I sense something similar in some dormant library blogs, particularly cases where a library seems to have established many blogs for many areas—only one, two or none of which have a significant flow of posts.

There’s a snarky admonition that could be used for those people who don’t have anything to say: “If you don’t have anything to say, please shut up about it.” But that would be mean.

COPA

Then: The Child Online Protection Act, Congress’ second attempt to regulate pornography on the Internet. COPA has been struck down as unconstitutional, twice, but the Justice Department keeps trying to resurrect it. A predecessor to CIPA, with broader implications.

Now: Three strikes and you’re out? In June 2004, the Supreme Court upheld the injunction against COPA. In March 2007, the district court again found it unconstitutional. In July 2008, the appeals court upheld that decision. That may be the end of COPA. Maybe.

COPPA

New: Children’s Online Privacy Protection Act—not at all the same as COPA. COPPA has been law since October 1998. COPPA aims to prevent online collection of personal information from children under 13 without permission of parents. I’ve never written about COPPA and have no particular reason to believe it’s bad law. It appears reasonable and useful.

copyleft

Then: Cute name for the licensing method in GPL, the Gnu General Public License. Copyleft is a “general method for making a program free software and requiring all modified and extended versions of the program to be free software as well,” according to the Gnu website (www.gnu.org)…

Now: Creative Commons licenses have become more important than GPL outside the software community—and the issue of compatibility between GNU licenses and CC licenses has become important, particularly given that Wikipedia operates under the GNU Free Documentation License (GFDL). This situation seems to be in flux.

copyright

Then: “The Congress shall have power to promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive rights to their respective writings and discoveries.”

That’s what the U.S. Constitution says and that’s the legal basis for American copyright and patent law. Note “limited times” and that rights are granted to authors and inventors, not intermediaries. Established law in the U.S. is that facts cannot be copyrighted.

Currently, U.S. copyright protects all works as soon as they are “fixed” (saved to disk, recorded to tape, whatever), with no registration, publication, or copyright notice required. Works created by an individual are protected for the life of the creator plus 70 years (so dead composers and writers are really motivated to keep working!), while corporate works (those done “for hire” by employees or otherwise protected at the corporate level, including most motion pictures) are protected for 95 years.

U.S. copyright law also includes explicit recognition of fair use—those cases where you can reasonably use someone else’s work without notice or payment—but as a set of principles rather than a set of specifications. Most of the controversies surrounding copyright (and discussed in Cites & Insights) fall into these areas:

• Attempts to restrict fair use through digital rights management, licensing, and other methods.

• An apparent ongoing attempt to make copyright eternal on the installment plan, adding another 20 years at 20-year intervals.

• Inappropriate use of copyright.

• Whether laws should be passed that assume we’re all thieves and compensate copyright holders appropriately.

Discussions of copyright tend to be confounded by the three general approaches to copyright (and “intellectual property” in general):

Strong copyright, the view that copyright is a property right and the rights of the owner (or licensee) outweigh all other considerations…

No copyright, the view that creation should be its own reward, creative works should enter the public domain immediately, and creators should earn their livings through public appearances, live shows, or day jobs…

Balanced copyright or weak copyright, the view that creators should have the ability to benefit from their creations for some reasonable period of time, but that copyright should be a temporary status on the way to public domain—and that the rights of the creator or licensee must be balanced against the rights of the user and the need for new creations. This middle range covers a wide variety of specific views, and includes positions taken by ALA and other library associations, Creative Commons, Public Knowledge, CreateChange, and a growing number of elected officials including Rick Boucher, Zoë Lofgren and Barbara Boxer.

Now: I’ve repeated most of the 2004 entry—because most of it’s equally relevant now. Except, unfortunately, that “growing number of elected officials” doesn’t seem to be growing very rapidly, since balanced-copyright initiatives haven’t gone anywhere.
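If you’d rather see those term rules as arithmetic, here is a toy calculator. It sketches only the general post-1977 rules quoted above and ignores the many edge cases in Title 17 (pre-1978 works among them):

def us_copyright_expires(pub_year, death_year=None, corporate=False):
    # Rough expiration under current U.S. terms for post-1977 works:
    # life of the creator plus 70 years, or 95 years from publication
    # for corporate works for hire. Edge cases deliberately ignored.
    if corporate:
        return pub_year + 95
    return death_year + 70

print(us_copyright_expires(2009, corporate=True))     # 2104
print(us_copyright_expires(1960, death_year=2000))    # 2070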

COWLZ

Then: The Coalition/Consortium/Committee of Web-based Library-related Zines and newsletters. Marylaine Block had the idea: to make the ongoing gray literature of librarianship—the newsletters and zines—more visible and try to assure their longevity after the creators disappeared. I tried to start things going…

Now: COWLZ is long since defunct, the COWLZ site for C&I is gone—and most of the founding “zines and newsletters” have disappeared. Mark this “Fail.”

Creative Commons

Then: An organization attempting to rebuild the public domain, enhance access to creative works, and encourage creativity by establishing flexible, customizable intellectual-property licenses. Lawrence Lessig chairs the group, which began in early 2002. Cites & Insights operates with a Creative Commons “attribution-noncommercial” license, which means anyone’s free to use any or all of an issue as long as that use is attributed and they’re not charging for the reuse. BioMed Central uses the attribution license for its Open Access journals. A new range of licenses addresses derivation rights, particularly important for music.

Now: The Creative Commons “CC in a circle” logo will be found millions (literally) of places—and in most cases, I believe, the creators know they’re deliberately reserving some rights rather than all copyright-related rights. Lessig is no longer CEO and chair of CC (he’s moved on to other areas). The organization continues to port licenses to new legal jurisdictions (CC licenses are very carefully crafted to assure legality). As of 2008, there are an estimated 130 million CC-licensed works, up from 40 million in 2005.

One interesting new development is CC+, “a protocol providing a simple way for users to get rights beyond the rights granted by a CC license.” If I had an easy way to provide commercial rights to C&I content, I could have a CC+ logo—the CC BY-NC license I already use, plus a link to a C&I commercial-license site. (Want to license C&I content for commercial use? Send me email: waltcrawford at gmail.com)

CSLDRMAA

Then: The Consumers, Schools, and Libraries Digital Rights Management Awareness Act of 2003, proposed by Sen. Sam Brownback (R-KS). The act would require digital media rightsholders to file “John Doe lawsuits” in order to obtain identifying information on an Internet user and would also call for labeling on any digital medium protected by DRM…

Now: Not only gone but mostly forgotten: 75% of Google results point back to C&I.

CSS

Then: Content Scrambling System, one of two forms of copy protection used on most commercial DVDs. Only players and computer programs with appropriate licenses are authorized to unscramble CSS. “deCSS,” a tiny little computer program that unscrambles CSS, was developed so that people could watch the DVDs they owned on their Linux computers (for which no DVD software was available), and quickly became a flashpoint for DMCA enforcement. deCSS is apparently illegal in the U.S.

Now: The only named developer of deCSS, Jon L. Johansen, was never convicted of any copyright-related crime. The DVD CCA dropped its case against him. DeCSS is apparently illegal but also widely available.

There’s another CSS that may be more important for most library people: Cascading Style Sheets, the system used in most HTML, XHTML and XML documents and pages to control presentation. Ideally, CSS separates presentation (how something looks) from content demarcation—so, for example, a “strong” portion of text might be boldfaced in one CSS implementation, larger type in another, or blinking red with sparkling edges in yet another. (Please don’t.)

CTEA

Then: The Sonny Bono Copyright Term Extension Act of 1998, which extended U.S. copyright from its previous “life of the creator plus 50 years, or 75 years for corporate works” to “life plus 70, or 95 years for corporate works.” CTEA extends existing copyrights, not just copyrights for new works. The timing is interesting because the first Mickey Mouse cartoon appeared in 1928 and would have entered the public domain in 2004. Named as a memorial to Sonny Bono, who was a congressman many years after Sonny & Cher (his widow Mary is still in Congress, and seems to believe in infinite copyright).

Eldred v Ashcroft was an attempt to overturn the extension (at least for works about to enter the public domain) on Constitutional grounds, led by Lawrence (Larry) Lessig. The attempt failed, with the Supreme Court ruling 7-2 to uphold CTEA…

Now: The Internet Archive pursued another legal challenge to CTEA. It went nowhere. What will happen come 2018 or thereabouts? At the least, there might be a little more opposition to “son of CTEA”—but you can bet there will be another proposed 20-year (or longer) extension.

DLF

Then: Digital Library Federation, a group of universities and others heavily involved in developing and doing research on “digital libraries.”

Now: I had the sense in 2004 that DLF was only secondarily about libraries. At this point, the group certainly aligns itself with libraries—although the motto, “Providing Leadership for Libraries,” may be a tad presumptuous for a group concerned only with digital technology. The group does worthwhile work in a range of areas related to standards, digital preservation and digital collections. It currently consists of 37 “partners” (e.g., CDL, CLIR, Cornell, LC, NYU, Harvard) and five “allies” (CNI, JISC, OCLC and others).

DMCA

Then: Digital Millennium Copyright Act, perhaps the most important unbalancing of copyright toward the “copyright community” (other than continued term extension). Briefly, among its other provisions, DMCA makes it illegal to create or promulgate information on anything that could circumvent digital copy protection or digital rights enforcement mechanisms.

Write an algorithm that decrypts encrypted digital material: That’s a crime under DMCA. Publish an article that points to that algorithm, and you can be charged with a DMCA violation…

Now: DMCA is still the law of the land, still an egregious unbalancing of copyright, still really bad law. Attempts to ameliorate its worst aspects have had limited success.

DMCRA

Then: The Digital Media Consumers’ Rights Act, HR 107, one of Rick Boucher’s proposals to moderate some of the worst of DMCA’s provisions. DMCRA would explicitly protect research and would permit circumvention of copy protection in order to exercise fair use rights…

Now: Introduced in 2003; reintroduced in 2005 as HR 1201. As with other attempts to balance copyright, it has gone nowhere so far.

DOAJ

New: The Directory of Open Access Journals, at www.doaj.org. As of January 5, 2009, DOAJ includes (and links to) an astonishing 3,812 journals, of which 1,341 are searchable at article level.

DOAJ isn’t entirely peer-reviewed journals. The current definition is “free, full text, quality controlled scientific and scholarly journals” and the FAQ says this under “quality control”:

The journal must exercise peer-review or editorial quality control to be included.

These are all “gold OA” journals—journals that don’t charge for online access and have no embargo period before articles are freely available.

DRM

Then: Digital Rights Management (or “digital restrictions management” if you’re a weak-copyright person). Any software or hardware system to control use of digital media, which inherently means restricting usage. DRM can undermine fair use, limit first sale rights, and make effective digital preservation difficult or impossible. It can also, depending on its characteristics, be an essential and useful ingredient in digital dissemination.

Now: DRM is going away for downloaded music. DRM is firmly embedded (and relatively easy to break) in DVDs and even more firmly embedded in Blu-ray discs—but at least with those physical media, DRM does not weaken first-sale rights. You can still lend, sell or give away a DVD or Blu-ray disc once you buy it. That’s not true for purchased ebooks on the Kindle or Sony Reader; our friend DRM restricts your rights once again.

Despite the clear history—that DRM doesn’t hinder pirates but does get in the way of honest people—it’s an uphill battle to get rid of DRM.

DVD

Then: DVD stands for DVD. That is, “DVD” does not officially stand for anything, although it was expanded to “Digital Video Disc” and, later, “Digital Versatile Disc” while it was being developed…

Now: That’s the first 31 words of a 590-word discussion. DVDs have almost wholly displaced VHS and almost certainly represent the fastest-growing entertainment medium ever. They’re absurdly cheap to produce, with giveaway DVDs as part of ad campaigns. Five years ago, I was astonished that I could purchase a set of 50 classic movies on 12 two-sided DVDs for $20, which took up 1.5" of shelf space as compared to the 50" or more you’d need for 50 videocassettes. This year, I purchased a set of 250 mystery movies on 60 two-sided DVDs for $50—less than a buck a disc and $0.20 per movie, in a collection taking up less than six inches of shelf space.

Will Blu-ray replace DVD as rapidly as DVD replaced VHS? Almost certainly not (and given that Blu-ray’s been around since 2005, that’s a pretty safe bet). It’s not clear that Blu-ray will replace DVD at all, and in any case all Blu-ray players also play DVDs.

ebooks

Then: That simple word covers a confounding variety of digital technologies, some of which are already successful and some of which may never succeed. If someone asks, “Will ebooks succeed, fail, or just hang on?” the answer is “Yes.” The nine-part view of ebooks I set forth in American Libraries 31:8 (September 2000) is, although woefully oversimplified, still as good a breakdown as I’ve seen (sez he, humbly as ever).

Now: Some things change slowly if at all. But see…

ebook appliances

Then: The deadest duck in the ebook pond, the one that’s generated the most hype and the least sales. Most notoriously, Gemstar and the REB successors to the Rocket eBook and Softbook. There have been several other dedicated (single-purpose) ebook appliances; most have either failed or never entered full production. While some of us continue to see potential for a dedicated ebook appliance (or reader, or just ebook) for K12 or higher education, a truly effective book equivalent at a reasonable price seems no nearer now than it was a decade ago: Always “two years from now,” once technology solves all the problems. Note that the failure of ebook appliances does not mean the failure of digital text or “ebooks” as a whole. Some people even read booklength texts on the low-resolution screens of personal digital assistants, and the best notebook computers are halfway to providing near-book resolution. (That last half—going from 150dpi to 300dpi—may take a long time, since it’s taken more than a decade to get from 96 to 150dpi, but at least it’s progress.)

Now: I was partly wrong here. Two semi-dedicated ebook appliances, Amazon’s Kindle and the Sony Reader, do appear to be “good enough” for a few hundred thousand people. Both use e-ink, making them less suitable for reading in bed (unless you add a light) but a whole lot more suitable for most reading and with much better battery life. They’re not 300dpi and don’t have the contrast of book print, but most people who own them seem to love them. On the other hand, both appliances do have other functions, e.g. playing MP3s or doing limited web browsing.

“Two years from now” was still wrong for 2004, but three years wasn’t bad—if you consider the prices reasonable and the readers “truly effective book equivalents.” Enough people do to make them a small but plausible business.

EFF

Then: Electronic Frontier Foundation. Loads of information (and The EFFector) on their website. Generally a strong pro-consumer, anti-regulation, weak copyright voice on policy and legal issues: In most cases, I’d find myself on the same side as EFF. At the moment, I have mixed feelings about the group. Its recent publicity campaign regarding peer-to-peer networking seems to say that it’s OK to steal as long as 60 million other people are doing it. I find that unacceptable. Ethics should not be a popularity contest, and EFF should not condone theft in its urge to change policy. Nonetheless, EFF does good work.

Now: Still around, still doing EFFector (w2.eff.org/effector/) and the Deeplinks blog (www.eff.org/deeplinks). There’s an interesting new “patent busters” initiative, trying to get particularly egregious technology patents reexamined by showing prior art or other reasons the patents are invalid. The initiative seems to be having some success. EFF continues to be a valuable force.

Eldred Act

Then: Also known as the Eric Eldred Act or, formally, the Public Domain Enhancement Act (PDEA), HR 2601. Larry Lessig suggested the idea in early 2003, as the Supreme Court upheld CTEA. California congresswoman Zoë Lofgren introduced the bill on June 25, 2003, “to amend Title 17, United States Code, to allow abandoned copyrighted works to enter the public domain after 50 years.” The proposed changes in copyright law boil down to these two clauses:

“The Register of Copyrights shall charge a fee of $1 for maintaining in force the copyright in any published United States work. The fee shall be due 50 years after the date of first publication or on December 31, 2004, whichever occurs later, and every 10 years thereafter until the end of the copyright term.” If the fee isn’t paid within a six-month grace period, copyright expires. Payment of the fee for a work also maintains copyright in ancillary and promotional work.

“The maintenance fee…shall be accompanied by a form… The form may be used to satisfy the registration provisions…”

One buck, each ten years, after the first 50 years. Registration, after 50 years. Registration makes it possible to find copyright holders to license their work. No registration, no buck, and work goes into the public domain—after the creator has had fifty years to profit from it. This is a good idea, and one that should not (but will) be controversial.
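The proposed fee schedule reduces to a few lines of code. This sketch follows my summary above; the function and its shape are mine, not the bill’s language:

def eldred_fee_due_years(pub_year, term_end_year):
    # Fee due 50 years after first publication or on December 31, 2004,
    # whichever occurs later, then every 10 years until the term ends.
    first_due = max(pub_year + 50, 2004)
    return list(range(first_due, term_end_year + 1, 10))

# A work published in 1970 whose term runs to 2065:
print(eldred_fee_due_years(1970, 2065))   # [2020, 2030, 2040, 2050, 2060]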

Now: What a good idea! Introduced again in 2005. Need I mention that it’s gone nowhere?

Eldred v Ashcroft

Then: The case that raised the profile of copyright imbalance. Eric Eldred has a website with the texts of classic books, poems and essays in the public domain. He and copetitioners argued that Congress overreached its constitutional authority by passing CTEA (which see) and extending copyright yet again. Lawrence Lessig served as chief counsel. Eventually, the Supreme Court upheld CTEA 7-2, but the issues raised will continue to be discussed.

Now: Discussed, yes. Will that prevent another 20-year extension in another decade? We shall see.

Ex Libris or ExLibris

Then: I’m never sure which form to use, but in either case it’s Marylaine Block’s usually-weekly zine: One good essay (or interview) with a few extras. Always interesting, usually enlightening.

Now: The archive’s still there (marylaine.com/exlibris/) but the last issue, #309, appeared on April 18, 2008. Three hundred and nine: An impressive record and a worthwhile publication.

EZ-D

Then: “DivX, only worse!” That was my heading for a June 2003 commentary on EZ-D, Disney’s name for Flexplay’s “limited-play DVDs.” They’re DVDs with a special coating: Open the airtight package and you can play them as much as you want. For 48 hours. Then they’re unplayable and you throw them away (or, supposedly, recycle them). Disney tried selling these environmental wonders for $5 to $7—which the New York Times called “close enough to the cost of a typical DVD rental.” I guess prices really are higher in New York! The discs don’t include any commentaries or other extras. Disney’s idea is to peddle the self-destructing discs at convenience stores and gas stations so Disney gets more of the revenue. My guess is that Disney and FlexPlay have overestimated the gullibility of the public.

Now: EZ-D sank without a trace after a limited trial—but you know how it is with bad pennies. A new version is out—using different technology, because the original EZ-D version might be playable on Blu-ray players after it’s supposed to self-destruct. Staples seems willing to sell these unfortunate products, now just called FlexPlay.

F-K

fair use

Then: The principle that some uses of copyright material are legitimate and may be done without permission from the copyright holder. Strong copyright advocates tend to put scare quotes around the two words as if to deny that there’s really such a thing as fair use. DMCA, DRM, and proposed laws would generally restrict fair use by substituting controls wielded by copyright holders. While fair use is a set of principles, it’s also a law, albeit a somewhat less-than-specific law—Section 107 of Title 17 of the U.S. Code (that is, copyright law):

Sec. 107. - Limitations on exclusive rights: Fair use

Notwithstanding the provisions of sections 106 and 106A, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include -

(1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;

(2) the nature of the copyrighted work;

(3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and

(4) the effect of the use upon the potential market for or value of the copyrighted work.

The fact that a work is unpublished shall not itself bar a finding of fair use if such finding is made upon consideration of all the above factors.

Now: I’m repeating this in full because fair use continues to be part of the law—and Big Media and other extreme-copyright types continue to write it as “fair use” (with scare quotes) and deny that it is actually law—or, in some cases, that it even exists. Of course, with DMCA and DRM, fair use has been seriously abridged in many digital works, particularly given that fair use is barred as a defense against DMCA violations.

Family Movie Act or FMA

New: Part of the Family Entertainment and Copyright Act, one of few copyright-related laws passed in recent years; it became law in 2005. The other part, the ART Act, targets “camcording” (using a video camera to record a movie in a theater, presumably to create bootleg copies) and “early release” (copying or otherwise obtaining movies that haven’t been distributed yet and making them available), two acts that seem closely related to commercial piracy.

The Family Movie Act or Family Home Movie Act provides explicit clearance for somebody to watch a movie at home in the manner they prefer, which can include paying someone else to omit or alter certain sections—as long as that omission is voluntary, clearly noted, and doesn’t actually create a new permanent copy of the movie.

Whazzat? Specifically, it’s ClearPlay, a Utah company, and the combination of ClearPlay DVD players and ClearPlay Filtersticks and subscriptions. Basically, ClearPlay editors go through DVDs watching for various categories of “questionable” content—sex, nudity, violence, bloodshed—and create ClearPlay files identifying exactly where and for how long such scenes appear. If you’re a ClearPlay user and the movie you want to watch has been vetted by ClearPlay, you’ll get a menu offering choices of content to filter. If you choose any of them, the movie will begin with a warning similar to what you see on full-screen versions of widescreen movies (and most movies on airlines), a warning that the movie has been modified. Then, the player will skip or mute the movie to shield your sensitive eyes and ears from the objectionable material.

Censorship? Not at all: You’re doing it voluntarily, at home, to your own copy of a movie—and you’re not altering the movie. The DVD remains unchanged.

Personally, I would have thought that was legal in any case. I mean, you can certainly rip pages out of your purchased copy of a book before reading it (or hire someone else to do that for you) without violating copyright. But movie studios and some directors disagreed—suing ClearPlay. Apparently, it’s OK for them to create sanitized versions for airlines, but it’s not OK for you to sanitize your own viewing experience. This legislation mooted the lawsuits and made ClearPlay’s business model legal—as long as all ClearPlay does is skip or mute, rather than substituting different video or audio, and as long as ClearPlay is only a playback technology.

ClearPlay is still around, although very few retail outlets carry the players. Amazon does. The players run $80 or so. Memberships—the only way ClearPlay will really work—run from $8 per month down to $180 for three years.

When the law passed, Ed Felten cheered it as being anti-censorship. Public Knowledge neither cheered nor condemned. EFF was grumbly but not very much so; a few people talked about either the slippery slope or censorship. I didn’t and don’t think there’s a censorship issue. I wouldn’t buy ClearPlay, but there have been some movies where I wouldn’t mind cutting out graphic violence or constant profanity. I understand the artistic integrity argument, but only up to a point. The RIAA can’t stop you from skipping the vilest songs on a CD. An author can’t prevent you from skipping “the good parts” or multipage descriptions of horrendous violence. Why should the MPAA or directors be able to prevent you from skipping portions of an otherwise-interesting movie that you really don’t want to watch or hear?

FCC

Then: The Federal Communications Commission. As we’re now discovering with the broadcast flag, the FCC can be used by Big Media to do an end run around Congress.

Now: And, as the FCC discovered, the courts really won’t allow it to expand its authority just because it feels like doing so.

Felten, Ed

Then: Princeton professor whose research team cracked the watermarks proposed for the Secure Digital Music Initiative. When he planned to give a paper on the research at a conference, the RIAA threatened him with a DMCA lawsuit. When he proposed to countersue, the RIAA backed off and mooted the case by claiming they’d never intended to sue. One excellent outcome of all this nonsense: Ed Felten started the Freedom to Tinker weblog, a great source of thoughtful, down-to-earth commentary on issues relating to digital media, copyright, and society.

Now: Felten is now director of the Center for Information Technology Policy at Princeton (unless he heads for DC), and Freedom to Tinker comes from that center. You’ll find it at www.freedom-to-tinker.com.

FEPP

Then: Free Expression Policy Project. This project, www.fepproject.org, maintains first-rate ongoing studies on various aspects of free expression, including copyright and other issues. Among other things, the site maintains a Supreme Court watch on the status of cases related to expression.

Now: Still there, perhaps less active than in previous years. For several years, FEPP was part of the Brennan Center for Justice at NYU School of Law, but that’s no longer true.

Finkelstein, Seth

Then: A consulting programmer and censorware activist and researcher; you’ll find lots more at sethf.com, including Finkelstein’s own weblog. Cites & Insights uses “censorware” rather than “filters” after reading and considering Finkelstein’s arguments…

Now: The blog, Infothought, has been heavy on matters related to Wikipedia and Wikia, but he continues to focus on censorware, Google and copyright as well. Ignore his tic of considering himself unread (although he does frequently publish columns in the Guardian, something other “unread bloggers” can’t say). He’s worth reading.

FOS

Then: Free Online Scholarship—the name and initialism used by Peter Suber for his weblog, list, and newsletter on what’s now called Open Access. If you’re looking at scholarly access-related material prior to September 2003, you’re likely to find much of it under FOS.

Now: Open Access has taken firm hold as a term, on its own and within the set of “Open”s. FOS is history.

Freedom to Tinker

Then: Ed Felten’s invaluable weblog.

Now: The blog, www.freedom-to-tinker.com, is a group blog hosted by Princeton’s Center for Information Technology Policy. Posts cover a wider range of concerns, mostly related to “the intersection of digital technologies and public life.”

FRPAA

New: The Federal Research Public Access Act, introduced in 2006 and to be reintroduced in the current Congress. The act would broaden the NIH mandate to cover every federal agency “with an annual extramural research budget of $100 million or more” and would mandate an embargo of no more than six months before final manuscripts become available for public use (in a “stable digital repository maintained by that agency or in another suitable repository that permits free public access, interoperability, and long-term preservation”).

I have not written about FRPAA to any great extent, and suggest you use the usual channels to explore it further.

Google Library Project

New: Part of Google Book Search, a massive project to scan books in many large libraries and index them as part of Google’s database—and, for public domain works, to make them available online.

That’s the short form. The long form is so long, controversial and complex that I can only suggest you look up some of the commentaries I’ve written on GLP and Open Content Alliance, a complementary project. Those commentaries aren’t up to date: I have yet to write about the settlement of the lawsuits against Google and the project. What I do know:

Google has already scanned millions of books as part of GLP.

The scanning has been done quickly and cheaply, which doesn’t always mean well. GLP scans generally can’t be considered anywhere close to archival quality.

Google opted not to take the chance to clarify or expand interpretations of fair use: rather than litigate, it spent a considerable sum (starting at $45 million) to settle the lawsuits—and, in the process, almost certainly made it more difficult for other bodies to expand or even retain broad fair use concepts. One of these days, I may try to collate commentary on that set of issues…

GPL

Then: GNU General Public License, the “copyleft” license used by Linux and quite a bit of other open source software. See copyleft.

Now: GPL has a specific philosophical mandate that Creative Commons licenses do not. As with any license, be sure you’re aware of the implications—and that, once licensed, you can’t “unlicense” existing software or creative works.

gray literature

New: In my mind, the gray (or grey) literature of librarianship is the literature that isn’t formally published in recognized venues: The stuff that isn’t traditional journals or books. Thus, Cites & Insights is gray literature; so is every liblog. There are other, more formalized and more sophisticated, definitions for gray literature, some of which might exclude this ejournal.

Merriam-Webster defines it as “written material…that is not published commercially or is not generally accessible.” That’s a narrower definition, as you could argue that anything indexed in Google is generally accessible.

Why do I mention this? Because of an essay I expected to be more controversial than it apparently was: “On the Literature,” Cites & Insights 7:9 (September 2007), which begins with this paragraph:

I believe that gray literature—blogs, this ejournal, a few similar publications and some lists—represents the most compelling and worthwhile literature in the library field today.

inevitability

Then: Unless you’re discussing death, don’t tell me “it’s inevitable.” Provide convincing arguments. Make a case. Taxes aren’t inevitable. Neither are the death of the book, the end of privacy, the complete success of open access archiving, “mobile everything,” or much of anything else I’ve heard described as inevitable.

Now: It appears likely that people who should know better will continue to describe things as inevitable—especially when they don’t really have solid arguments for the inevitability. My mental translation for “inevitable” is now “I know I can’t make the case for this.”

Information Access Alliance

Then: A new initiative sponsored by AALL, ALA, ACRL, ARL, the Medical Library Association (there are so many MLAs), and SPARC. The focus appears to be the need for more stringent antitrust review when examining mergers in the STM serial publishing industry. IAA’s site is at www.informationaccess.org and has several white papers.

Now: The website www.informationaccess.org shows IAA as hosted at ARL, with SLA as an additional sponsor.

The most recent item in the news archive is an ARL Issue Brief from 2007. The archive consists of that, three items from 2006 and four from 2003.

information commons

New: A number of slightly related ideas (and renamed library spaces) have carried this name. One version dates back to 2000 or thereabouts and may deserve a few belated words, particularly since I caught some flak for not including this term in the 2004 glossary.

The ALA website shows a June 11, 2002 press release titled “Librarians call for ‘information commons’ at Conference.” The PR defines the term thus: “a relatively new term used to describe places, services and processes that promote the sharing of information unfettered by overly restrictive intellectual property laws.” Then-ALA president John W. Berry said “A vibrant ‘information commons’ is a necessary alternative to privatizing knowledge and research, making it less accessible as sources for new creations.” Presentations included a talk by Lawrence Lessig and three panels.

The next I heard was a policy report and Midwinter 2004 open forum, as part of an ALA initiative. I went to part of the open forum but took few notes.

Later in 2004, the Free Expression Policy Project published Nancy Kranich’s The information commons: A public policy report. I commented on that report in the September 2004 Cites & Insights (4:11), noting that neither it nor a David Bollier article convinced me that “information commons”—as used for that initiative—had a clear and useful definition I could support.

ALA’s Office for Information Technology Policy was deeply involved in this initiative or cluster of initiatives. It established a website, info-commons.org, to support the initiative and changed that site from “an irregular online publication” to a blog in April 2003. The blog was active in April-June 2005—but closed in January 2006. Not only did it close, but the site disappeared entirely. That URL now yields a typical linkblog/parking spot. As I said in April 2006, “Disappearing the archives of a blog strikes me as an interesting act for a library association—and for the whole commons concept.”

Going to the ALA OITP web pages now is remarkable. The list of “Initiatives & projects” simply does not mention Information Commons at all. Apparently, OITP doesn’t care about recent history, particularly when it didn’t work out very well. (The Washington Office shows a paragraph about the initiative on an undated page, “ALA plays a key role in civil liberties debates,” that’s apparently from 2003.)

Almost certainly, if you’ve heard “information commons” used in the past couple of years, including names of ALA groups, it refers to those library-related spaces being called Information Commons at some universities. This commentary does not relate to those spaces and their naming.

information overload

New: The trouble is all inside your head—at least that’s what some people claim, and I’m inclined to agree. Not that there isn’t more information than any of us can process into useful stories, wisdom or knowledge. Of course there is, but that’s been true for a very long time.

The problem isn’t information overload. It’s either inadequate filtering or a failure to understand limits. Some people don’t understand the need to filter—to choose which sources they’ll pay attention to, in order to provide a workable flow of primary resources while still retaining enough surprises and breadth to stay in touch with the world. (Focus and serendipity aren’t mutually exclusive.) Some people also feel as though they should follow “everything”—or at least “everything about Topic X.” That’s becoming less and less plausible unless you keep defining Topic X more and more narrowly.

I won’t give you a formula for handling information overload, mostly because no such formula exists. Understanding that you can’t do it all and can’t know it all is a vital starting point. Maintaining an ongoing balance is essential, but how you do it depends on who you are. I do believe the balance continues to shift over time.

Want to see how fuzzy “information overload” is? Take a look at Wikipedia—at least on January 8, 2009.

KTD

Then: Kids These Days. KTD and the spelled-out phrase represent my offhand summary of an “argument” made by many advocates of digital-everything, convergence, the death of books, and so on. The argument has many variations, but says (among other things) that the next generation grew up with computer screens and is more comfortable reading from the screen than from the page; that the next generation both assumes and demands “digital everything” and will settle for nothing less; that the next generation has short attention spans; and that the way young people behave today is the way they will always behave. In essence, KTD proponents believe that today’s young people are mutants, and the rest of us must plan to redo everything to suit their preferences…

To my mind, KTD ranks right up there with “inevitable” as a way to foreclose serious discussion and to win arguments without actual evidence… Yes, today’s kids and teenagers are more comfortable with technology than we were back then—how could they not be? One result, from what I’ve seen, is that fewer of them fall in love with technology for its own sake: They recognize tools for what they are.

Beyond that, the idea that the habits, desires, and needs of a generation don’t change as they age is truly novel and belied by pretty much all of recent history. To a greater or lesser extent, we all become our own parents. That’s just as likely with younger generations as it is with older ones. That’s one reason that grandparents seem to understand kids better than their parents do: The grandparents have already seen the changes happen.

Am I saying we should ignore the new pressures faced by today’s young people and assume that they’ll be just the same as we are when they grow older? Of course not: We don’t grow to be exact duplicates of our parents. We are changed by technological innovation, and we have different ways of incorporating it into our lives. But most of us do just that: we incorporate the technologies that serve us; we don’t transform our lives to serve the technologies. I expect that to continue.

Now: I’m more inclined to use “gen-gen,” short for generational generalizations—and that also covers the stereotypes applied to Boomers and my generation, absurdly called the “silent generation”—you know how silent we college students born before 1946 were in the 1960s, right?

L-N

LBPRBPA

Then: The Library, Bookseller, and Personal Records Privacy Act, introduced by Senator Feingold (D-WI) and eight other senators. This act would amend the USA PATRIOT act “to protect the privacy of law-abiding Americans and set reasonable limits on the federal government’s access to library, bookseller, medical, and other sensitive, personal information.” (Quoting ALA Washington Office commentary.) Section 1 would restore the requirement that the FBI offer facts that give reason to believe a named person is a suspected spy or terrorist before gaining access to library or other private records.

Now: It’s never a good sign when 100% of Google results for an initialism point back to Cites & Insights! Spelling it out, I find that Feingold reintroduced the bill in 2005. No signs of activity since then.

Lessig, Lawrence (Larry)

Then: Lead counsel for Eldred v Ashcroft (which see), chair of Creative Commons. High-profile advocate of the public domain and weak copyright, with a high-profile weblog.

Now: Lessig was a law professor at Stanford Law School—but he’s moving back to Harvard Law School, where he’ll also direct Harvard’s Edmond J. Safra Foundation Center for Ethics. While he’s still a board member of Creative Commons, he’s stopped focusing on copyright-related matters and says he’ll be working on political corruption. The Lessig blog continues at lessig.org/blog/, and there’s also a Lessig wiki (wiki.lessig.org/Main_Page). Since the Lessig wiki runs on MediaWiki, it’s extremely transparent. It’s also the home of the remarkable group effort “Against perpetual copyright” (see PermaCopyright) and the lovely “Anti-Lessig reader,” a group effort criticizing Lessig’s work (much of it contained in three subpages, one for each of his books).

liblogs and library blogs

New: I use the term liblogs to describe blogs done by people (or groups) who identify themselves as library people, one way or another—as opposed to library blogs, blogs that are official organs of libraries. I prefer these terms to “biblioblogosphere” both because that term strikes me as awkward and suggesting a commonality that I don’t really find, and because “biblio” should properly also include all book- and writing-related blogs, a much larger set of blogs.

You could define liblogs as “blogs by librarians,” but that’s not quite right—say I, a non-librarian with a reasonably well-known liblog.

The edges get fuzzy. At what point does a blog by someone who used to work in a library and almost never mentions libraries or librarianship cease to be a liblog? Are there blogs that are both liblogs and library blogs? For my own studies—including The Liblog Landscape 2007-2008: A Lateral Look, the largest study ever done of how liblogs are actually working in recent times (www.lulu.com/content/4898086)—I’ve tended to err on the side of inclusion around the edges. Which, among other things, means that there are indeed a few blogs that appear both in The Liblog Landscape and in one of two library blog projects.

I believe liblogs have become an important part of the gray literature of librarianship and have mostly moved from “shiny new toy” to “useful tool.” Library blogs are tougher to comment on intelligently—there are some clear successes, but also a great many blogs that seem to have begun without adequate planning. If you’re attending the 2009 OLA SuperConference, I’ll be talking about liblogs and library blogs on Friday afternoon, and a version of the notes for that talk may appear in the next C&I.

[A certain term would fall here alphabetically, if numbers file before letters. But no, I don’t think so…]

Library Copyright Alliance

New: A joint effort of AALL, ALA, ARL, SLA and the Medical Library Association “to address copyright issues that affect our libraries and their patrons.” You’ll find them at www.librarycopyrightalliance.org. The coalition has been around for some time and submits comments on various issues including orphan works.

Library Juice

Then: It’s not a weblog, it’s a newsletter: Rory Litwin’s pure-text distribution, currently fortnightly. Library Juice is considerably to my left on the library-politics spectrum, but I wouldn’t miss an issue—and Litwin has indirectly humanized SRRT for me.

Now: It’s not a newsletter, it’s a blog—and a publishing house! Litwin discontinued the newsletter (or zine) after the August 2005 issue. In March 2006, he began the Library Juice blog at libraryjuicepress.com/blog/. That site links to Library Juice Press, an imprint of Litwin Books, LLC. One of the books is Library Juice Concentrate, the best of Library Juice.

Library Stuff

Then: Steven M. Cohen’s weblog (and, preceded by “The,” my running title for citations in the library literature)… I frequently disagree with Cohen but always find his work valuable. His weblog has recently concentrated on tools (RSS, weblogging) used to keep up with library happenings rather than the library happenings themselves.

Now: It’s difficult to disagree with Cohen these days because Library Stuff is almost entirely a linkblog—pointers to other sites with little or no commentary. The Library Stuff has been retired as a C&I section heading. Most of what would have appeared there is now in Making it Work sections.

lifestream

New: What you’re doing now, and now, and now, and now…in an endless stream of “events,” possibly including full-time video and audio, recorded for…what? Some people who use Twitter think that’s what they’re doing.

My response to “what are you doing now?” asked either of myself or of anyone else is, 99.9% of the time, “Who cares?” I don’t have any reason to care what I had for dinner last Friday, how long I slept last night, what songs played the last time I used Pandora…and I sure don’t care about those details of your life. One response to “lifestreaming” is that people need to forget the “streaming” and worry about “getting”—a life, that is. (Yes, I’m still using FriendFeed. I’m also hiding big chunks of stuff to keep it under control, and I’m really fast at skimming over screens full of trivia. When I find that someone really is trying to lifestream, there’s a convenient “unsubscribe” link.)

LISNews

Then: The most important multi-author weblog in the library community. Begun by Blake Carver, the site now has a number of moderators, hundreds of contributors and thousands of readers. The moderators are ecumenical in their posting habits (anyone can suggest a story, but only moderators can post them), and the site (based on slashcode) offers robust threaded commenting and lots of extras. Unfortunately, anonymous and pseudonymous commenting seems to have brought out the /. types, but there’s a lot of good stuff mixed in with the usual right-wing nonsense.

Now: I stopped calling LISNews a weblog years ago. It’s more of a portal, containing multiple diaries (or blogs) along with a succession of stories. Still interesting, still ecumenical, still worthwhile. I believe LISNews now runs on Drupal rather than slashcode.

LITA Top Technology Trends

Then: [Excerpts] What it’s not: An authoritative statement of technology trends that should concern librarians, with input from industry sources and so much expertise that it can’t possibly be wrong. What it is: There’s a LITA committee, established a few years ago. That committee selected roughly a dozen LITA members who seem to have some insight into the technology trends that matter for librarians—either because of their jobs, because of their readings, or for other reasons. (They may have selected even more, since I don’t know how many people declined invitations.) Since the group began, a few of the “trendspotters” have retired or quit for other reasons, and a few others have been added…

Now: I resigned from the trendspotters after the 2005 ALA Annual Conference. I didn’t entirely escape the group: I served as moderator in 2006, with the bizarre task of forcefully cutting people off when they ran overtime, and I served as a “trendspotter emeritus” in 2007.

I find it unfortunate that, while there’s a nicely-organized set of historical trends for 1999 through Midwinter 2005 on the LITA website (www.ala.org/ala/mgrps/divs/lita/litaresources/toptechtrends/toptechnology.cfm), no such concise, organized record exists for the seven sessions since then. This is one particularly egregious example of the extent to which LITA groups have gotten worse at communicating with other LITA people and the field as a whole.

Or maybe that is a Top Tech Trend: As long as you’ve put the information out there somewhere, you don’t need to worry about organization or coherence. It’s up to the seeker to explore possible channels.

LOCKSS

Then: Lots Of Copies Keep Stuff Safe. This “cooperative archiving solution for ejournals” is centered at Stanford University and (I believe) has considerable potential as one of many partial solutions for digital archiving. Briefly, LOCKSS would establish multiple full-text archives of journals at various universities that work on a self-healing basis: Each archive would be in contact with others and could restore any lost data from one of the others. The archives could be dark (that is, not directly accessible) for currently-published journals where the publisher does not allow open access, but would be even more effective for Open Access journals (and priced journals that allow open access after an embargo period). I was immediately taken with the concept when the first article I encountered by one of its leaders included the following:

The LOCKSS system will clearly not be the unique and ultimate solution to all e-archiving, or even all e-journal archiving, requirements. It is important that this not be the case. We are emphatic in our distaste for monolithic structures!

It’s so unusual for a project leader to disown the concept of Grand Solutions!

Now: LOCKSS (www.lockss.org/) is “a thriving international community-based initiative with libraries and publishers working together with the shared goal to preserve e-content for the long-term. More than 300 leading scholarly publishers have granted permission for their content to be preserved by LOCKSS Alliance members,” according to the website.

LOCKSS Boxes are low-cost PCs serving as digital preservation appliances. The software is open source, OAIS-compliant, peer-to-peer…and in use at quite a few institutions. CLOCKSS, Controlled LOCKSS, will create a distributed dark archive as a companion to LOCKSS itself; it’s a cooperative effort of major libraries (and library cooperatives) and some of the largest scholarly publishers.
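
The self-healing idea in the Then description is simple enough to sketch. Here’s a toy version in Python. This is emphatically not the real LOCKSS polling protocol, just the principle: each archive compares a hash of its copy with its peers and repairs from the majority when its own copy disagrees. All names here are mine.

# Toy sketch of the "lots of copies" repair principle (not the real
# LOCKSS protocol): peers in the minority repair from the majority.
import hashlib
from collections import Counter

class Peer:
    """A toy archive node holding one copy of some content."""
    def __init__(self, content):
        self.content = content
    def digest(self):
        return hashlib.sha256(self.content).hexdigest()

def audit_and_repair(peers):
    """One audit round: compare hashes, restore damaged copies."""
    majority, _ = Counter(p.digest() for p in peers).most_common(1)[0]
    healthy = next(p for p in peers if p.digest() == majority)
    for p in peers:
        if p.digest() != majority:
            p.content = healthy.content  # restore from a healthy peer

peers = [Peer(b"journal article text") for _ in range(5)]
peers[2].content = b"journal articl# text"   # simulate bit rot on one copy
audit_and_repair(peers)
assert all(p.content == b"journal article text" for p in peers)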

microblogging

New: “Blogging” at no more than 140 characters per post, e.g. via Twitter.

I don’t care for the term, just as I wouldn’t call blogs “minijournals” or posts (in general) “miniarticles.” I don’t usually call journal columns “microbooks” either. Those two sentences—which would require two tweets—may give you a sense of why I don’t care for microblogging (or Wikipedia’s quaint “micro-blogging”).

Little tiny messages can be useful for many things, including conversations. I appreciate the apparent fact that Twitter has reduced the number of link-only posts and short, trivial posts in blogs I otherwise find interesting—as long as the bloggers don’t replace them with feeds of all their tweets in maddening succession.

A good blog post represents a little more thought and reflection than 140 characters allows for, just as a good C&I essay or article or column typically represents more thought and reflection than a typical blog post. On the other hand, you can use a blog as a medium for full-length articles, possibly even subject to peer review. I don’t believe it’s possible to replicate the complexity of even short blog posts within Twitter, without bending the norms of the medium itself. For that matter, books have been prepublished as series of blog posts. Wouldn’t it be a little absurd to break a book down into 140-character chunks and send it out that way? (If your answer is “No,” then we really do think in fundamentally different ways.)

I’m astonished that the same kind of “You must microblog” evangelism is turning up (so far, mostly outside the library community) as we saw a few years ago about blogging. One site claims microbloggers are “ready to take over the ‘Net.” Have we learned nothing about diversity and universalism? I know, that’s a rhetorical question. (The curious thing is that the site on which I found this claims to be “a microblog for microbloggers”—but it is clearly, absolutely, 100% a blog, with posts and single sentences longer than 140 characters. Consider this triumphalist statement: “Microbloggers are going to be ones who can create the waves, shake the foundations, and cause the Earth to move, because they can communicate their opinions to thousands and thousands of people with a few keystrokes.” You can’t express that sentiment on Twitter—it’s too long.)
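
Since I raised the book-as-tweets question, the arithmetic is easy to check. In the sketch below, the 80,000-word length and six characters per word (counting spaces) are my assumptions, nothing more.

# Rough arithmetic: how many 140-character chunks would a book need?
import math

book_chars = 80_000 * 6                  # assumed length in characters
tweets_needed = math.ceil(book_chars / 140)
print(tweets_needed)  # 3429 chunks; at one per minute, about 2.4 days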

MP3

Then: MPEG-1 audio layer 3. That’s the formal definition. Essentially, MP3 defines an envelope for a variety of lossy data compression and decompression schemes, with the assumption that what’s being encoded and decoded represents sound. That assumption is necessary to make the lossy compression work: It’s based on a set of assumptions about how people hear…The more aggressive the compression, the more commonly people will hear the effects.

There are quite a few different MP3 “codecs” (compression/decompression routines) of varying effectiveness, and most MP3 software allows a variety of compression ratios, possibly involving variable compression. Most P2P downloading of music uses MP3 because it’s compact. Through a combination of sloppy journalism and general deafness or inattention to detail, the most popular MP3 encoding rate (128Kbps, roughly one-eleventh of the data rate for an audio CD) is generally called “CD quality,” which it is not. Originally, the common term was “near CD quality,” which allows for argument as to “nearness.” I believe that most people with reasonably good hearing and careful attention to detail can hear differences between CDs and their 128K MP3 equivalent on most music. It gets a lot harder at higher data rates; at rates such as 192K or 320K (the rate I currently use for ripping from my own CDs), only the most golden-eared or self-deluded listeners will be able to tell the difference on most music, on most playback systems. If you can’t hear the difference between 128K MP3 and audio CD, that’s fine for you—but don’t tell me that all music should therefore be distributed in such degraded form.

Now: Oddly, although “MP3 player” is a catchall term for most portable digital media players that concentrate on sound rather than video, most legally downloaded, paid-for digital music has probably been in other formats (specifically AAC). Yes, I still use 320K for my MP3s—but now I have a tiny little Sansa Express to play them on. MP3 at any data rate is not a substitute for CD-quality sound, much less truly high-resolution audio—but, except for some orchestral music and possibly solo piano, 320K and possibly 256K MP3 and AAC are probably good enough for most of us, most of the time.

The sheer flexibility of MP3 makes it a great format for audiobooks, once bookmarking issues are solved: You can put a lot of spoken word on a single MP3 CD at spoken-word compression rates.
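
The data-rate comparison in the Then paragraph is simple arithmetic. Here’s a quick Python check; the CD parameters are standard Red Book audio (44,100 samples per second, 16 bits per sample, two channels), and the rest follows.

# Comparing MP3 bitrates against the raw CD audio data rate.
cd_bps = 44_100 * 16 * 2              # 1,411,200 bits/second
for kbps in (128, 192, 256, 320):
    ratio = cd_bps / (kbps * 1000)
    print(f"{kbps}K MP3 carries about 1/{round(ratio)} of the CD data rate")
# 128K works out to about 1/11; 320K to about 1/4.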

MPAA

Then: The Motion Picture Association of America, one of two quintessential Big Media groups. It represents the largest studios (which are not all American-owned corporations, any more than RIAA’s members are) and has been remarkably effective in Washington, thanks in part to that silver fox Jack Valenti. The MPAA bitterly opposed VCRs, and some people in the field still argue that the Betamax decision was a terrible mistake. Amazingly, even though videocassettes resulted in vastly higher revenues for movie studios, the studios continue to oppose any new medium that might allow any form of fair use by consumers. The MPAA is also a major player in seeing to it that copyright goes on forever.

Now: There may be more Big Media groups that are almost quintessential, but RIAA and MPAA still manage to be more anti-consumer than the others. Jack Valenti retired in 2004. The new CEO and chair is Dan Glickman, a former U.S. Secretary of Agriculture. Not all major Hollywood studios are MPAA members; the members are Disney, Sony, Viacom/Paramount, Fox, Universal and Warner, while Lions Gate and The Weinstein Company are not.

MPAA delights in nonsense like the “You wouldn’t steal a car” ad you’re forced to watch at the start of many DVDs. It operates the “voluntary” movie ratings system (only voluntary if you want your movie to go direct to DVD). Since MPAA long ago succeeded in getting anti-copy technology into VHS and DVDs, it apparently hasn’t felt the need to sue thousands of citizens—and its actions against websites and other operations tend to focus on getting sites shut down, not bankrupting the people behind them.

How much is “piracy” hurting the motion picture industry? Box office revenues in both the U.S. and worldwide were up roughly 5% in 2007 over 2006. DVD sales also continue to rise.

MPEG

Then: Moving Pictures Experts Group. This group has established several standards for compressing and distributing moving pictures:

•  MPEG-1 is what you saw on most CD-ROMs (and VCDs, popular in Asia but almost unheard of in the U.S.): Mediocre video at a very low bitrate. “Sub-VHS” is the kindest word for MPEG-1.

•  MPEG-2 is what you see on DVDs. When the compression is performed by expert systems (and experts), typically using a two-pass process, the results can be magnificent, particularly when you consider that MPEG-2 is extremely lossy compression, throwing away most of the original data in a movie. When the compression is too extreme or is handled badly, you get a variety of artifacts, including splotchy rectangles of color and loss of detail. Most personal video recorders and DVD burners allow a range of MPEG-2 data rates; only the highest is DVD quality… MPEG-2 can be extended to handle high-definition television.

•  MPEG-4 is fairly new, designed to provide different bitrates for a single video object as needed for different uses. It’s not clear when or if MPEG-4 will offer serious competition to MPEG-2 for high-quality video.

Now: Some Blu-ray discs use MPEG-4. Others use SMPTE VC-1 or MPEG-2. MPEG-4 and VC-1 supposedly encode twice as efficiently as MPEG-2, allowing for much more content on a disc.

multitasking

New: Doing several things at once, equally well and as rapidly as you’d do them one thing at a…sorry, had to go look at email and I lost my train of thought.

netbook

New: Bigger than a PDA or UMPC, smaller (and usually cheaper and lighter) than a traditional notebook, “netbooks” are hot items, even though the definition’s getting fuzzy.

The classic netbook may be the original ASUS Eee—the one that weighs two pounds (0.92kg, to be precise), has a 7" screen and relies on solid-state storage rather than a hard disk. Oh, and sells for $300 or less. (Other Eee models and competitive netbooks have had larger screens and sometimes hard disks—but that also means more weight and more vulnerability. Acer, HP and others are making netbooks.) The first netbooks typically used Ubuntu or some other version of Linux, ideal for small storage capacity and fairly small RAM (early netbooks had 512MB, although many now have 1GB). Quite a few netbooks now come with Windows XP, and Microsoft is keeping XP alive until mid-2010 for netbooks.

What most netbooks have in common:

They weigh less than three pounds and cost less than $450 (as low as $250 currently).

They typically include webcams, microphones and wifi, sound support and enough productivity software for web surfing, email and some document and photo handling. I don’t believe I’ve ever seen a netbook that included an optical drive, and would question stretching the definition that far.

They usually have relatively small displays (7" to 10"), possibly undersized keyboards, relatively low storage capacity (2GB to 40GB flash storage, with 4GB to 16GB fairly common, or 30-160GB hard disk), low memory capacity (512MB to 1GB) and relatively slow CPUs (typically Intel Celeron or Intel Atom) in exchange for light weight, compact size, durability, reasonable battery life and low cost.

Netbooks are fundamentally designed as portable companions, not as primary computers.

I think netbooks are a great idea—as a traveling companion. If I were still taking six or eight trips a year, I’d kick in $300 for one of them. As a replacement for a general-purpose notebook or desktop? Probably not, not just because I love having an ergonomic keyboard and dual-screen system but because I’m not ready to trust my primary operations to cloud computing.

Would netbooks make sense as library loaners? That’s a tough one. Cons: Undersized keyboards in many cases, small screens, no optical disc drive, relatively slow. Pros: Light, presumably durable (although reviews suggest a fair number of problems with early Eee models) and a little cheaper than budget notebooks. I think good budget notebooks (and boy, are there good deals in budget notebooks these days) would make more sense where carrying weight isn’t a huge factor.

NIH mandate

New: Formally, the NIH Public Access Policy. This policy applies to scientific research funded by the National Institutes of Health, one of the largest funders of scientific research in the world. It requires scientists to submit final peer-reviewed journal manuscripts resulting from such research to PubMed Central when the manuscripts are accepted for publication—and requires that these papers be made accessible via PMC within 12 months of publication. The mandate became fully effective on April 7, 2008. Many journals simplify the process by depositing the final published version of each NIH-funded paper, without author involvement.

It’s a “green open access” mandate, but one that’s so weakened that it should satisfy all but the most ravenous traditional publishers, given that it allows up to a full year embargo. It wasn’t sprung on publishers by surprise: Work toward the policy began in July 2004, and an even weaker version (a request rather than a mandate) was adopted in early 2005.

Did publishers say “OK, we’ve talked about it for years, Congress is behind it, it’s publicly-funded research, we get a full year embargo, let’s move on”? Of course not. AAP called for more public consultation (apparently four years wasn’t enough). IASTM expressed “disappointment” and raised the usual anti-open-access hobgoblins. Legislation was introduced that would have undermined the NIH mandate. Now publishers are whining to the Obama administration.

But it also works. The “request” resulted in something like 4% compliance. The mandate appears to be yielding 55 to 60% compliance, making the results of publicly-funded research available to the public…eventually. With, just to clarify, no weakening of peer review and no threat to copyright—and, really, virtually no new costs. (Worst case: Full compliance might result in costs amounting to roughly 0.1% of the NIH research budget.)

O-P

OAD

New: The Open Access Directory, a wiki hosted by Simmons GSLIS at oad.simmons.edu/oadwiki/Main_Page. From the home page:

The Open Access Directory (OAD) is a compendium of simple factual lists about open access (OA) to science and scholarship, maintained by the OA community at large. By bringing many OA-related lists together in one place, OAD will make it easier for everyone to discover them and use them for reference. The easier they are to maintain and discover, the more effectively they can spread useful, accurate information about OA.

OAD isn’t an anyone-can-edit, “truth by consensus” aggregation. It’s well controlled and brings together some first-rate existing resources, adding to them over time. It’s already an impressive set of resources, to the point that I’d suggest going here first if you have a factual question related to OA. For example? There’s a set of links to a dozen sources of free and open-source journal management software; a directory of “blogs about OA” (blogs that “focus largely on open access”) includes more than 120 blogs (!) as of January 5, 2009; another directory includes more than 50 OA-related wikis…and so on.

OAI

Then: The Open Archives Initiative. Among other things, OAI establishes a standard for metadata in institutional and topical article archives, so that the metadata can be harvested by an OAI harvester.

Now: OAI (www.openarchives.org) “develops and promotes interoperability standards that aim to facilitate the efficient dissemination of content.” The protocol for metadata harvesting, OAI-PMH, has been joined by a standard for object reuse and exchange, OAI-ORE—and if you want to know more, you should visit the OAI site or talk to people actually involved in this area. From what I can gather, the standards are solid. The big problem with OAI is getting institutional repositories to work—that is, getting faculty to deposit their materials.
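
For the protocol-curious: the harvesting side of OAI-PMH really is just HTTP GET requests carrying a “verb” parameter, with XML responses and a resumptionToken for paging. Here’s a minimal sketch in Python; the repository URL is made up, and a production harvester would need retries and error handling this one omits.

# Minimal OAI-PMH harvester sketch: GET requests with a "verb" parameter,
# Dublin Core metadata, resumptionToken paging. Base URL is hypothetical.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

BASE = "http://repository.example.edu/oai"
OAI = "{http://www.openarchives.org/OAI/2.0/}"

def list_records(prefix="oai_dc"):
    params = {"verb": "ListRecords", "metadataPrefix": prefix}
    while True:
        url = BASE + "?" + urllib.parse.urlencode(params)
        root = ET.parse(urllib.request.urlopen(url)).getroot()
        for record in root.iter(OAI + "record"):
            yield record
        token = root.find(".//" + OAI + "resumptionToken")
        if token is None or not (token.text or "").strip():
            break  # no more pages
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

for rec in list_records():
    print(rec.find(OAI + "header").findtext(OAI + "identifier"))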

OAIster

Then: An Open Archives index based at the University of Michigan, using OAI harvesting to build an index of papers stored in a variety of institutional and topical archives (or “self-archives,” if you prefer). As of December 4, 2003, OAIster indexed nearly 2.3 million articles from 243 institutions. At least one OpenURL resolver can search OAIster to identify full-text sources for articles.

Now: As of December 24, 2008, OAIster (www.oaister.org) “provides access to 19,379,235 records from 1054 contributors.” That’s astonishing growth over five years: more than eight times as many records and more than four times as many contributors. By any standard, more than 19 million digital resources is an impressive showing.

OAIster was built by Michigan in a collaboration with the University of Illinois Urbana-Champaign (UIUC), using UIUC’s metadata harvester for the first two years of the project. It was also originally funded through a Mellon grant.

OAIster isn’t as heavily used as one might expect, at least not directly. In 2006, there were something over 625,000 searches done in the interface—and in 2005, just over 265,000.

open access

Then: Frequently capitalized (Open Access) by its promoters and sometimes by its opponents. The fundamental principle of open access is that scholarly research should be freely available to anyone who can use it, at no direct cost to the reader. In practice, that currently means two different initiatives:

•  Open Access publishing, in which there is no charge for electronic access to the published journals (or collections of articles)…

•  Author-initiated article archiving in institutional or topical archives adhering to a standard set of protocols for metadata so that it’s possible to build common indexes that harvest the metadata from many archives. Article archives may consist of fully edited versions by agreement with the journal publishers or, without such agreement, can consist of “preprints” with change files attached.

Now: These initiatives have names. The first is Gold OA. The second is Green OA. There are also two different flavors of OA in terms of what you can do with the material: gratis OA, where you can read it—but you may not be able to do much more—and libre OA, where you can do considerably more.

Want to know more about OA? I’ll suggest the open access cluster in the PALINET Leadership Network. Start with “Open access basics” (pln.palinet.org/wiki/index.php/Open_access_basics) and go from there to some of the other pieces. I’d say you should at least read “Open access: why it matters,” “Open access myths,” “Open access issues” and “Open access controversies.” If you’re really interested, “Open access resources” will guide you to newsletters, blogs and wikis to provide even more information.

Open Content Alliance

New: I didn’t mention OCA in the 2004 glossary because it didn’t exist—it was formed in October 2005. OCA is a consortium of organizations contributing to a permanent open archive of digitized material. Initially, Yahoo! and the Internet Archive seemed to be the big players (along with some universities)—but, in practice, Microsoft has been much more important than Yahoo! in actually digitizing books.

Unfortunately, after scanning more than 750,000 books and mounting a user-friendly Live Book Search page, Microsoft opted out in May 2008: It stopped funding the scanning of books and closed Live Book Search. On the other hand, it also removed any remaining contractual restrictions on the public-domain content scanned with Microsoft money and gave the scanning equipment to the OCA partners that had been using it.

OCA got off to an odd start: Loads of publicity (with Brewster Kahle featured heavily in it), lots of committees with lots of plans…and then nothing much for quite some time. I’ve been writing about both OCA and the Google Library Project for years, at considerable length, and at points it looked as though OCA had disappeared—except that Microsoft was still scanning away.

As far as I can tell now, looking at sites such as www.opencontentalliance.org, the ambitious set of committees never amounted to much, but OCA itself has continued, with results now available at the Internet Archive (www.archive.org/details/texts) and Open Library (openlibrary.org). It appears that virtually all material in OCA is public domain—and it also appears that OCA’s standards for scanning are considerably better than those of Google Library Project.

open source software

New: Generally, software for which human-readable source code is readily available at no more than the cost of reproduction. There’s a lot more to it than that, as you can read about through the usual channels. You almost certainly use some open source software, at least indirectly (e.g., Apache server software—but also Firefox, Audacity, The GIMP and many more).

Open source software may be particularly interesting for libraries and there are open source integrated library systems. For good library-oriented discussions of open source, start with John Houser’s “Why look at open source now?” on the PALINET Leadership Network at pln.palinet.org/wiki/index.php/Why_look_at_open_source_now%3F, and proceed from there to some of the nine other PLN articles (as of this writing) related to open source.

I haven’t written about open source software in C&I, at least not much, because I rarely deal with that level of software (and avoid dealing with library systems). There’s also another reason. Some open source advocates urge us to use open source whenever possible as a matter of principle, even if it’s not as good as proprietary software. I have a problem with that, possibly because I earned my living for decades as a systems analyst and programmer. To me, pushes to use open source software (or, specifically, free open source software) in preference to priced software, even if it’s inferior, are pushes against programmer salaries. (I don’t care for assertions that “content should be free” either, for much the same reasons.)

I use Firefox because it’s a good product. I use WordPress and MediaWiki and respect both of them. I tried OpenOffice—but I regard Microsoft Office 2007 as a superior product for my needs and don’t regard the price as outrageous. I had and have no qualms about paying for Corel PaintShop Pro as a commercial product rather than asking my wife to keep trying different open source image editors to see whether one was less inscrutable than The GIMP. There’s nothing shameful about earning a living creating high-quality software; there’s nothing wrong with paying other people and companies for doing good work.

OpenURL

Then: I devoted nearly 1,200 words to an essay on how OpenURL works. I think it’s still a good explanation for OpenURL 0.1 (and, I suspect, most uses of OpenURL 1.0).

Now: I’ve lost track because I’m no longer involved in library automation. The simplicity of OpenURL 0.1 was such that I wrote the spec for my former place of work’s implementation (into Eureka) in about a day—and an admittedly brilliant programmer/analyst, Ho-chun Chin, wrote code from it that worked right the first time (almost unheard of for this kind of application). OK, I did base the specs on deep knowledge of MARC formats, but still… That implementation became a reliable standard testbed for others, partly because we hadn’t collaborated with any vendor, making ours a pure implementation.

OpenURL 1.0, the NISO standard, was much more complex (at least in its documentation) than OpenURL 0.1. And that’s about as much as I know—other than that OpenURL continues to be used.
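
To give a sense of that simplicity: an OpenURL 0.1 link is nothing more than a citation expressed as ordinary query-string pairs appended to a resolver’s base address. Here’s a sketch in Python; the resolver address and the citation values are invented for illustration.

# Building an OpenURL 0.1 link: the citation travels as key/value
# query pairs appended to the user's link-resolver base URL.
from urllib.parse import urlencode

RESOLVER = "http://resolver.example.edu/openurl"   # hypothetical resolver

citation = {
    "sid": "CI:glossary",      # source identifier (invented)
    "genre": "article",
    "issn": "1234-5678",       # invented ISSN
    "date": "2004",
    "volume": "12",
    "issue": "3",
    "spage": "45",
    "atitle": "An invented article title",
}
print(RESOLVER + "?" + urlencode(citation))
# -> http://resolver.example.edu/openurl?sid=CI%3Aglossary&genre=article...

The resolver at the other end parses those pairs and decides, based on the user’s institutional subscriptions, where to send the reader; that division of labor is the whole trick.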

orphan works

New: Works still protected by copyright whose owners are difficult or impossible to locate. That’s the Copyright Office’s definition (slightly modified) from the notice of inquiry it opened in early 2005.

The problem with orphan works is that there’s no good way to use them—either to incorporate parts or all of them into new works or to distribute them to the public. They’re still protected by copyright, so if the holder does exist, anyone using them could face enormous penalties.

I discussed this inquiry and the whole issue of orphan works in the September 2005 Cites & Insights (5:10). The Copyright Office received hundreds of comments, 18 of which were identical 126-page “comments” from illustrators that denounced Creative Commons, regarded the whole “orphan works” concept as a movement to subvert copyright protection and claimed commercial image stockhouses would simply declare huge quantities of materials “orphan” to save money. But most comments dealt with real-world difficulties in using materials when the owner cannot be located—trying to tell the story of personal computing when so much of it comes from now-defunct firms; a renegade artist who recognizes that new creations build on old ones; people who can’t get their old wedding or family photos restored or copied because photo shops can’t be sure the photos aren’t under copyright by the original photographer—who, of course, typically isn’t known or locatable.

Legislation has been proposed, more than once. Different versions of legislation were passed by both houses in 2008. Ideally, legislation would make it possible for people to use orphan works after reasonable attempts to locate the owner, without potentially ruinous penalties if the owner does turn up. No legislation has been passed, but this is one of the odd cases where the Copyright Office and at least portions of Big Media agree with those who seek more balance in copyright. Those who vehemently disagree, largely photographers and artists, appear to hope for The Big Copyright Jackpot: They’ll find a reuse of material that was never registered, register it, and collect enormous penalties. They don’t want to register up front: That would cost money and take time. One might suggest that when ALA, the Copyright Office and the Association of American Publishers all agree that a change makes sense, it’s probably not a socialist plot to destroy copyright. But associations of illustrators, artists and photographers disagree.

It’s worth noting that orphanworks.blogspot.com is resolutely anti-orphan works legislation; it’s a production of the Stock Artists Alliance. That alliance characterizes those in favor of orphan works legislation as “anti-copyright forces and special interest groups.” Orphanworks.net, on the other hand, could be considered pro-legislation (it’s maintained by an attorney involved in the legislation).

P2P

Then: Peer-to-peer networking, that is, sharing digital files directly between different end-user computers rather than through uploading to master servers and downloading from such servers. P2P networking has any number of legitimate uses, but Strong Copyright groups tend to demonize the technology as nothing more than a tool for copyright infringement. P2P networking is no more “all about stealing” than crowbars are “all about breaking and entering.” Oddly, police departments don’t attempt to shut down hardware stores for selling crowbars, but a number of legislators think that the government should find ways to shut down P2P networks.

Now: I don’t think I can do much better than the last two sentences of the 2004 entry. I regard use of peer-to-peer networks to violate copyright as both unethical and illegal, although neither “piracy” nor “theft” is a good synonym for infringement.

PALINET Leadership Network

New: The PALINET Leadership Network, PLN, is designed to help library leaders (and those who will become leaders) communicate, coordinate, find resources and share information. It’s also “where” I work: my primary source of income is as Editorial Director for PLN. You’ll find it at pln.palinet.org. It takes about a minute (and two email responses) to register as a user with rights to add or modify content. Anybody can read content. PLN is free. It’s not restricted to PALINET members.

At this writing, PLN includes just over 470,000 words of content in just over 300 articles (exclusive of help pages, Category pages and Leader’s Digest articles). It’s a wiki, using MediaWiki software. Any registered user can add or modify content, but it’s not a traditional wiki: Most articles are signed, with new contributions expected on Talk pages rather than as direct changes to the articles. (I monitor changes every day and would reverse inappropriate changes in a heartbeat.)

PLN is a great resource, if I do say so myself, and getting better. If you wonder whether you qualify as a “library leader,” current or potential, the answer’s almost certainly Yes (you’re reading this, aren’t you?). “Who’s a leader?” (pln.palinet.org/wiki/index.php/Who%27s_a_leader%3F) may help clarify that statement.

It’s not just about becoming a better leader (or manager, not the same thing); it’s also about getting up to speed on aspects of libraries that leaders need to understand. So, for example, there’s a strong cluster of articles on open source software; another cluster on open access; other clusters on blogs and wikis, conferences and presentations, and more to come.

You can use PLN—and PLN can grow through your use, your word of mouth, and your contributions. PLN is becoming a vital resource for library leaders of all varieties. Not that I’m biased, or anything…

PASA

Then: The Public Access to Science Act of 2003, HR 2613, also known as the Sabo bill. Introduced by Rep. Sabo (D-Minn.), apparently at the urging of PLoS. Here are the key elements:

(1) IN GENERAL.—Copyright protection under this title is not available for any work produced pursuant to scientific research substantially funded by the Federal Government to the extent provided in the funding agreement entered into by the relevant Federal agency pursuant to paragraph (2) [Which requires a provision in funding agreements that states that copyright protection is not available for work pursuant to the research]

Sec. 4. Sense of Congress: It is the sense of the Congress that any Federal department or agency that enters into funding agreements…should make every effort to develop and support mechanisms for making the published results of the research conducted pursuant to the agreements freely and easily available to the scientific community, the private sector, physicians, and the public.

In other words, Federally funded scientific research should not be protected by copyright and should be openly accessible. This proposal is the only justification I know of for the claims by traditional publishers that Open Access implies giving up copyright. It doesn’t. To me, the stickiest point in PASA is “substantially funded.” Anyone familiar with CIPA should recognize that Federal initiatives have a camel’s-nose effect: Would 15% be defined as “substantial”? I was also surprised to see the assertion in the bill that the U.S. government “spends $45,000,000,000 a year to support scientific and medical research whose product is new knowledge for the public benefit.” $45 billion (U.S.)!

Now: The $45 billion figure no longer surprises me. The Sabo bill was an extreme measure, too broad in some areas. While the Sabo bill went nowhere, it may have indirectly started the long journey leading to NIH’s OA mandate, which doesn’t put material into the public domain but does make many scholarly articles available, if with a delay.

PDEA

See Eldred Act.

peer review

Then: I always thought that meant double-blind refereeing of scholarly papers: That is, the referees don’t know who wrote the paper and the author doesn’t know who’s reviewing the paper. True peer review is at the heart of scholarly journal publishing; at its best, it should level the field between newcomers and established researchers while protecting the integrity of the publishing enterprise. Unfortunately, not all peer review is double blind (some journals leave the names of the authors on papers), and some publications that claim to be peer reviewed have submission-to-approval cycles that appear incompatible with true double-blind peer review. The water is muddied further because some “traditional” journal publishers (or “toll-access publishers” in the Open Access jargon) have asserted that Open Access publishers don’t carry out proper peer review. I have never seen evidence to suggest that this is a valid charge—but some of the same publishers also assert that Open Access means giving up copyright, which is simply false.

Now: The nonsense that somehow OA will mean inferior or no peer review continues to be used as an argument, despite the total lack of evidence for it.

One oddity that’s arisen: A supposed “better than double blind” methodology—where the editor and editorial board of a journal aren’t known. This strikes me as peculiar, but what do I know?

PermaCopyright

New: Many content creators, and the Big Media forces that control most copyrights, don’t much care for the U.S. Constitution—or that part of it that says “The Congress shall have power to promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.” Mary Bono, Sonny’s widow, famously argued that copyright should be extended to “eternity minus one day,” so as to satisfy that requirement but effectively provide permanent copyright. Mark Helprin and others have argued, apparently seriously, that “creators” should have eternal protection and control over their “creations.” (You’ll find an extended discussion in C&I 7:8.)

I happen to think this is a great idea—as long as “creation” is defined appropriately. Here’s how I put it in July 2007:

Here’s a modest change in U.S. copyright law:

Ø  Any work asserted to be wholly original can be maintained under copyright indefinitely.

Ø  Any work admitted to be partially or wholly derivative is protected under copyright for 28 years (or 40 years or other plausible term).

Ø  When you create a work, you either assert that it is wholly original and get PermaCopyright, or you say nothing and get Founder’s Copyright.

Of course, the words in that first bullet need to be defined.

Ø  Wholly original: No significant part of this work can be found in any previous work. Period. If one percent of the sentences or five percent of the plot in your novel appeared previously (in one work or many—why should pastiches get more protection than straightforward copying?), if two seconds of your three-minute song is recognizable as a melody or chord sequence from other music, if a significant portion of the dialogue, scenes, plot or characterization in your movie is recognizable from other movies (or books or…), then your work is not wholly original. I’m sure we can arrive at similar “levels of unoriginality” for paintings, sculpture, nonfiction and the like. (Nonfiction’s tough: You can’t copyright facts, so you’re presumably claiming that your sentences explaining those facts are wholly original. Good luck.) Oh, and by the way, either there’s a fine for falsely claiming originality (or prosecution for perjury) or, at the very least, your derivative work loses any copyright protection since it was protected under false pretenses.

Ø  Maintained under copyright: PermaCopyright requires government resources, just as communities composed of houses do. Those who desire PermaCopyright should pay for those resources—just as homeowners in communities do. Thus, a reasonable annual fee should be part of the process of maintaining indefinite copyright. After all, why should intellectual property be treated more advantageously than real property? Fail to pay the annual fee, you lose the PermaCopyright.

Seems straightforward to me. Truly original artists could get their desires: everlasting copyright. Those who create by building on the works of others would get plenty of protection to earn royalties for their partly-creative work, albeit not for absurdly long terms. Who could oppose this reasonable legislation?

PermaCopyright has never been proposed in Congress. Since I hold copyright on the idea as spelled out (which is to say, I wrote it down), I’d be happy to license it to an agent of Big Media or group of creators for an appropriate sum. $250,000 and an annual royalty of $25,000, good for the duration of PermaCopyright, should do nicely. Or, you know, $25,000 and a really good HDTV and Blu-ray player…

PLN

See PALINET Leadership Network.

PLoS

Then: Public Library of Science, the most heavily publicized (perhaps over-publicized) development in Open Access publishing. PLoS began with a petition in which 30,000 scientists said they wouldn’t submit papers to, referee for, or serve on the editorial boards of journals that didn’t make published articles freely available in electronic form. The publishers called their bluff, and at least 99% of the signatories folded. PLoS returned as a combined hype and publishing effort, with a $1,500 article charge that’s the highest of any known Open Access publisher (three times as high as BioMed Central, for example) and a seemingly endless stream of publicity stunts. PLoS Biology has begun, a monthly that’s free in electronic form and available in print for a modest charge. Others will follow.

Now: PLoS Biology has established itself as an important journal and has been joined by five other PLoS journals. The fees have gone up and range from $2,200 to $2,850 (the highest fee is for PLoS Biology and PLoS Medicine).

There’s also a really interesting recent initiative: PLoS ONE, “fast, efficient, and economical, publishing peer-reviewed research in all areas of science and medicine.” This journal, with a $1,300 fee, differs in that “the peer review process does not judge the importance of the work, rather focuses on whether the work is done to high scientific and ethical standards, is appropriately described, and that the data support the conclusions.” It’s designed to be less selective (but also fast), and includes tools for commentary and rating—or, as the site says, “post-publication tools to indicate quality and impact.” It’s intended to be a high-volume (electronic-only) publication. So, for example, 77 articles were published December 18-24; 212 articles were published in November 2008.

PoD

Then: Print on demand, that is, production of books either in very short runs as needed or one at a time as they are sold. Most forecasts for large ebook sales include PoD as part of ebooks, although the end result of PoD is a bound, toner-on-paper/ink-on-paper, physical book that is in no sense an ebook. (Yes, it begins as a digital markup—but so does almost every other book these days.) PoD already constitutes a multimillion-dollar marketplace. Proponents believe that packaged PoD production systems will come down in price and complexity enough so that thousands of bookstores and, possibly, libraries will have their own in-house PoD. Order a book, get a cup of coffee, and pick up the book: Freshly printed and bound in an order of one. Some proponents with publishing industry experience believe that PoD could entirely supplant traditional offset or webfed publishing, although the costs per copy seem likely to be higher for a very long time to come.

Now: To my mind, there are three very different things called PoD:

Print-on-demand as a back room/fulfillment operation. Quite a few publishers use PoD suppliers to keep backlist books in print when they no longer justify full print runs. This methodology should be transparent to users.

Print-on-demand for on-the-spot books. Yes, there are now systems that will do this; no, there aren’t many of them. The one at the University of Michigan Libraries produces public-domain books at $10 each. I don’t hear as many claims that this is likely to replace traditional publishing any time soon: It’s expensive.

Publish-on-demand. Quite a few sources say there’s no such thing. I use this term to describe Lulu, CreateSpace, and other agencies that act as complete fulfillment houses for authors. They provide the online store, they handle the orders, payments and shipping, and they produce the books using print-on-demand technology as orders are received. I’ve done a number of books using this methodology, and while it’s an expensive way to produce books (compared to traditional printing), it’s the only way I know of to produce professional-quality trade paperbacks with no upfront capital.

PRISM

New: The Partnership for Research Integrity in Science and Medicine. Realistically, another name for AAP/PSP. A “partnership” founded to oppose the NIH mandate and try to prevent similar moves toward open access. You’ll find a lengthy discussion of PRISM in the October 2007 Cites & Insights (7:11). You can certainly visit www.prismcoalition.org and search for other members of the so-called “coalition” or signs of activity on the site. Good luck with that.

Maybe all you really need to know about this misleadingly named partnership or coalition and its ongoing significance can be found on the disambiguation page for “prism” at Wikipedia—where, at least as of January 8, 2009, this anti-access group isn’t one of 25 alternative meanings. It is, apparently, fundamentally less significant than “a program for gifted children at Odle Middle School in Bellevue, Washington and Thomas Grover Middle School, New Jersey” or a defunct TV channel in Philadelphia. That sounds about right.

producer-pays publishing

Then: See open access.

Now: I don’t believe it’s ever called “producer-pays” any more. It’s usually “author fee” or “author-side fee.” It’s worth noting that most open access journals don’t charge author fees, relying on other sources of revenue—and that many subscription journals do charge author-side fees.

Prosser model

Then: A split option for STM journals, one already in use by some entomology journals. If an author (or institution) elects to pay a publication charge, the article becomes open to all readers, free, immediately upon publication. If the author or institution does not pay a publication charge, the article is only available to subscribers (or through other paid provisions such as aggregators). This hybrid model, proposed by David Prosser, offers a way for journals to experiment with open access and its implications for revenue and readership, without abandoning their current revenue streams…

Now: The most common term these days is probably “hybrid open access journal,” and Springer calls it “Open Choice.” Also called the “Walker-Prosser model,” given that it was first suggested by Thomas J. Walker and used for the Florida Entomologist.

There’s a bewildering range of names for this model: AuthorChoice, Free to Read, Open Access option, Author Choice Option, Online Open (as one word or two), BMJ Unlocked, Sponsored Article, [PublisherName] Open…

The American Society of Plant Biologists has an interesting variant for articles in Plant Physiology. Immediate OA is free for members or $1,000 for nonmembers—but it only costs $115 to join the society.

Protecting Children from Peer-to-Peer Pornography Act of 2003

Then: Porn sells—particularly when it comes to misleading names for legislation. This proposal, HR 2885, introduced by Reps. Pitts, John Sullivan, Pence, and DeMint, has the following summary:

To prohibit the distribution of peer-to-peer file trading software in interstate commerce.

Note the absence of “child” and “pornography” in that sentence. “Child pornography” is certainly featured in the findings section, but not at all in the legislation itself. Fundamentally, the bill would prohibit all noncommercial P2P software.

Now: Think about the children! Bad legislation, misleadingly named, par for the course, never passed.

pseudo-CDs

Then: My term for any sound recording that looks like a Compact Disc but adds mechanisms to attempt to discourage or prevent copying. Such mechanisms violate the Red Book, the license under which all CDs are produced, and can’t properly be called Compact Discs. There is no such thing as a copy-protected CD; they’re all pseudo-CDs.

Now: I believe Sony’s disastrous experiment with “piracy-proof” pseudo-CDs containing rootkits was the peak of attempts to pass off DRM-laden “CDs” on customers within the U.S. I haven’t heard much about “copy protected CDs” since January 2006, when SonyBMG agreed to abandon its use of the measures.

public domain

Then: Where Disney gathered the raw material for many of the studio’s finest animated movies (and some live-action ones)—but Disney, among others, is now devoted to making sure that nobody else will be able to build on previous creations in that manner. Creations enter the public domain in three ways:

Ø  When copyright expires. That meant 28 years for a long time. Now it means “life of the creator plus 70 years” or “95 years if it’s a corporate creation”—with nobody willing to take a bet that Congress won’t make those numbers “90” and “115” in another copyright term extension act, some time around 2018. The way things are going, copyright for most material published after 1918 may never expire.

Ø  Because the material was generated by the Federal Government. Such material is always in the public domain within the United States, but not necessarily worldwide.

Ø  Because the creators or copyright holders have explicitly dedicated the material to the public domain, using a Creative Commons “no rights reserved” license or some other methodology. If you believe the arguments of SCO’s chairman, dedicating material to the public domain might be considered treasonous or at least unconstitutional, since it interferes with the holy profit motive enshrined in copyright.

Materials in the public domain may be used at will: copied, distributed, sold as part of new packages, and used as the foundations for new creations. Most musicians, artists, and writers have always looked to earlier works for inspiration; the public domain, which used to grow at a steady and predictable rate, made it feasible to use such inspiration without becoming embroiled in license negotiations.

Now: SCO’s absurd argument went nowhere. You can still explicitly place your creations in the public domain—and that’s about the only way they’ll get there. “1918” in the earlier article was wrong; “1923” appears to be right—and a lot of material created after 1923 may be in the public domain, because it was published before 1964 and the copyright wasn’t renewed.

Public Knowledge

Then: A recently organized “public-interest advocacy organization dedicated to fortifying and defending a vibrant information commons” (www.publicknowledge.org). The group has four broad goals related to intellectual property, retaining an open market, and open Internet architecture.

Now: PK is still around and does excellent work in areas such as orphan works, open access, patent reform and a variety of other issues. There’s a Policy blog (add “/blog” to the overall URL).

R-Z

RIAA

Then: The Recording Industry Association of America, one of the Big Media groups. It’s not all of the record publishing companies, just the biggest (the “big five,” which may soon become the “big four”). The RIAA blames downloading for any loss of sales, refusing to admit that lousy music, high prices, and awful record store environments might have something to do with it. The RIAA seems to regard its customers as thieves, and is taking delight in suing dozens (hundreds?) of them—although, in some of those cases, those being sued are thieves…

Now: RIAA is primarily the “Big Four”: EMI, Sony, Universal and Warner. There are other members, but those four dominate the association. “Hundreds?” became thousands and tens of thousands, in a massive legal assault taking pains to avoid ever actually coming to court. RIAA seems to be dropping that particular anti-consumer effort. Meanwhile, RIAA continues to abuse the word “piracy,” continues to press for extreme-copyright legislation and seems unwilling to accept the possibility that the drop in music sales is less because of copyright infringement than because of lousy music. (RIAA’s site says “piracy” is too benign as a term!)

Even though RIAA began in order to establish a standard equalization curve for LPs, it’s become almost entirely an extreme-copyright agency (plus its role in certifying Gold and Platinum records). The lengthy discussion on RIAA’s site of what’s legal and what’s illegal never mentions fair use, except in one curious quotation regarding a newspaper suit against a hard-right website for copyright infringement. RIAA explicitly says you have no legal right to copy CDs to your computer, to data CD-Rs or to portable music players, but says doing so “won’t usually raise concerns” if it’s for personal use and it’s from a CD you personally own. “Won’t usually raise concerns”—isn’t that reassuring? In other words, sometimes the RIAA may be concerned (and take action?) even if you’re copying your own CD to your computer. If that’s not true, why use the qualifier “usually”?

RSS

Then (excerpts): Really Simple Syndication. Or Rich Site Summary. Or RDF Site Summary. If that’s confusing, so is the RSS scene, possibly because there are two competing RSS specifications from two entirely different groups….

I don’t use RSS at the moment, so that’s as much as I know. Except that some people sure are committed to the idea that everything should come via their RSS aggregator.

Now: Well, yes, I do now use RSS…a lot. I monitor something over 500 blogs, taking maybe 45 minutes a day (over two slots) to do so. That’s essentially impossible without RSS. Some observers are still bemused that so few people seem to recognize the term “RSS”—but you don’t need to know anything about RSS to click that orange icon on the address bar in so many sites and subscribe (not “use the RSS feed”) to the site in Bloglines, Google Reader, or one of many other aggregators.

In any case, last time I checked, librarians—at least—tended not to use either form of RSS for their feeds: they seemed to prefer Atom. I don’t really understand the difference; nor do I (or most users) care. It’s a feed; it’s a subscription; RSS is just jargon.
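
As a sidebar for the curious: here’s a minimal sketch, in Python, of what an aggregator does with a feed. It assumes the third-party feedparser library (which handles both RSS lineages and Atom identically) and a purely hypothetical feed URL.

    import feedparser  # third-party library: pip install feedparser

    # Hypothetical URL, for illustration only; any RSS or Atom feed works the same way.
    feed = feedparser.parse("http://example.com/feed")

    print(feed.feed.get("title", "(untitled feed)"))
    for entry in feed.entries[:5]:
        # feedparser normalizes RSS 0.9x/1.0/2.0 and Atom into the same fields,
        # which is why readers never need to care which dialect a site uses.
        print("-", entry.get("title", "(no title)"), entry.get("link", ""))

That normalization is the whole point: the aggregator worries about the dialect so the subscriber doesn’t have to.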

Sabo bill

See PASA.

SACD

New: Super Audio CD, one of two would-be successors to CD as a higher-quality sound format with support for surround sound. (The other is DVD-Audio.) SACD was developed by Sony and Philips; the dimensions are the same as CDs and DVDs; the capacity is nearly 8GB (dual-layer).

While neither SACD nor DVD-Audio ever became broadly popular in the marketplace, neither did SACD disappear. Quite a few record labels continue to release SACDs, most commonly as hybrid discs—offering a true CD on one layer and a 4.7GB SACD layer. You or your library may even own such discs—they are CDs (and may have the CDDA logo) but should also have the SACD logo somewhere. Thousands of SACDs have been released, mostly classical (but with many jazz and popular releases as well). There are a number of reasonably-priced “universal players” that handle CD, SACD, DVD-Audio and, usually, DVD as well (one highly-regarded player from Oppo costs around $170).

It’s fair to suggest that SACD will never replace CD. Relatively few people seem to hear or care about the improved sound SACD can provide, and outside of movies, relatively few people seem to care much about surround sound. The tendency has been in the opposite direction: for people to accept degraded sound in low-rate MP3 and AAC downloads in exchange for greater convenience. Personal admission: I’m not sure I could hear the difference between CD and SACD (my hearing’s not that great), and I’ve never owned or coveted an SACD player.

scholarly access

Then: The existing “scholarly access system”—that is, the system through which scholars (and non-scholars, for that matter) gain access to the articles and monographs written by other scholars—is broken. That system relies on a complex web of for-profit, not-for-profit, and society publishers to publish the articles and monographs, on a combination of publishers and aggregators (usually but not necessarily for-profit) to provide electronic access to those journals and monographs that appear in print form, and a combination of personal and library subscriptions and purchases to provide access.

Let me narrow that: The system for access to STM scholarship (science, technology, and medicine) is broken, and the breakage of that system in turn threatens the rest of scholarly access.

It’s broken because the prices for access are too high for any university library to be able to provide comprehensive access, even to the primary fields for that university. With the growth of new disciplines, many of which cross boundaries of older disciplines, the problem just gets worse, and a seemingly uncontrollable proliferation of new journals and more articles (frequently covering smaller and smaller elements of research) doesn’t help. Add to that the aggressive pricing and substantial market control of a few mostly-European STM publishers, Reed Elsevier the largest and most obvious, and we have a situation in which libraries can’t keep up and certainly can’t maintain the long print runs that scholars have traditionally required. Online access in lieu of print can be a stopgap measure, one that certainly improves access to current materials but, given the nature of most licenses, can endanger long-term access. And while it’s easy to pick on Elsevier, Kluwer and friends, quite a few society publishers also engage in aggressive pricing (“gouging” may be a good synonym), using library subscriptions to subsidize other operations of the organization.

It doesn’t have to be this way. Many scholarly societies charge fair prices for their publications, either offering them at the same price to libraries as to members or adding a reasonable surcharge for the added costs of dealing with institutional subscriptions. Some for-profit publishers are in it first for the publishing, trying to make enough money to keep doing what they love but certainly placing the scholarship ahead of the profit. A variety of initiatives—free online refereed journals, Open Access journals, SPARC’s promotion of less-expensive journals—can help. In the humanities, the price increases and journal proliferation have generally been moderate, and monographs continue to be key to the scholarship—but too many libraries have little money left over for monographs or inexpensive humanities journals after they’ve been ripped off for STM costs.

The system is broken. Libraries need ways to survive. Open Access almost certainly provides some portions of a set of solutions, but Open Access doesn’t directly address the issue of library costs (until and unless OA journals actually replace high-cost journals).

With that grumpy introduction, see open access.

Now: I included the whole 485 words because the system’s still broken—and so far, OA isn’t doing much to address the issue of library ability to maintain monographic and other scholarly access. Nor do some OA advocates feel it should: Some of them seem to think it’s fine for publishers to drain libraries dry, as long as other scholars can get their hands on article preprints.

Scholarly Electronic Publishing Weblog

Then: While I’m certainly not mentioning every weblog I visit regularly, Charles W. Bailey, Jr.’s effort deserves recognition. Bailey founded Public-Access Computer Systems Review, one of the library field’s earliest free refereed journals (begun 1990, strong through 1997, now officially ceased), and the Public-Access Computer Systems List that began before the journal and continues to this day. He’s maintained a Scholarly Electronic Publishing Bibliography for a very long time, and uses weekly entries in the weblog to note new material, much of which will wind up in the next formal update to the bibliography. An impressive long-term effort…

Now: SEPW is now at digital-scholarship.org/sepw2/ and appears monthly rather than weekly. SEPB itself is updated quarterly. (This time around I’m not even mentioning a significant fraction of blogs I “visit” regularly. You can explore most of them, and some I don’t subscribe to, in The Liblog Landscape 2007-2008, available for $35 at lulu.com/content/4898086.)

SCO

Then: Formerly the Santa Cruz Operation, a company that used to be one of the innovators and distributors in the Linux field but seems to have turned into an Intellectual Property Company. SCO purchased Unix System V and is now claiming that Linux infringes on the Unix copyrights—and, along the way, that GPL is unconstitutional.

Now: Actually, Caldera Systems acquired the Santa Cruz Operation’s Server Software and Services Divisions and some other assets and changed its name first to SCO, then to The SCO Group. The lawsuit—primarily SCO vs. IBM—has been going on for a very long time and has been remarkable in a number of ways. SCO claims that copyright-protected Unix code is in Linux—but refuses to show any examples except under nondisclosure agreement. The courts have fairly consistently found that SCO’s claims (including its claims of sole ownership of Unix code) are wrong. The suit is currently “in abeyance” until (unless?) SCO emerges from bankruptcy.

SED

Then: Surface-conduction electron-emitter display. A new possible replacement for cathode ray tubes, one that uses CRT principles. One glass plate is coated with a film containing huge numbers of tiny electron emitters, which fire electrons at a phosphor-coated glass plate a few millimeters away. The result can be large, flat panels less than four inches deep, using about half the power of CRTs or one-third the power of plasma displays (you do know that plasmas are power hogs?), with ultra-high resolution. Canon and Toshiba have been developing the technology; Toshiba claims that SEDs will appear in the marketplace this year. Variations of this technology have been promised for years now. Since no display technology other than CRTs provides true blacks or the widest possible color spectrum, I hope this one makes it to the market.

Now: I was apparently confusing SED and FED, field emission display, which uses multiple emitters per pixel (SED uses one emitter per pixel). That 2005 target slipped—and some of the delay had to do with proprietary information and lawsuits. The key lawsuit wasn’t dropped until December 2008. We may yet see SEDs—but one wonders whether OLEDs won’t, in the end, be more important.

Shifted Librarian

Then: One of the more provocative (and, for a while, voluminous) weblogs in the library field—also its author, Jenny Levine. The weblog has consistently pushed the limits of fair-use quotation from other weblogs and other sources, and has a strongly technocentric and portable-oriented stance. I question much of what’s in it, yet find it a valuable and provocative resource.

Now: Levine stopped testing the limits of fair use years ago, and more recently changed the primary focus from remote services to gaming in libraries.

SOAF

Then: SPARC Open Access Forum, a list devoted to all aspects of open access. Valuable, probably vital if you’re concerned with open access.

Now: Still in operation (instructions for subscribing at www.arl.org/sparc/publications/soan), with 4,719 posts through December 23, 2008.

SOAN

Then: SPARC Open Access News, edited and largely written by Peter Suber. This free electronic newsletter includes his incisive commentaries and the links he’s identified in his Open Access weblog. Another valuable resource if you’re interested in open access.

Now: Also still in operation (see address under SOAF) as a monthly enewsletter. A typical issue will have one or two major articles followed by lots of links on key events in OA during the month—for example, the December 2008 issue (#128), in print preview mode, has an 11-page article giving Suber’s predictions for 2009 (and linking back to previous annual predictions), a 15-page roundup of briefly-annotated links to items from Open Access News, and a page linking to upcoming OA-related events and providing credits for the issue. Suber’s essays are always worth reading.

SPARC

Then: The Scholarly Publishing and Academic Resources Coalition. “An alliance of universities, research libraries, and organizations built as a constructive response to market dysfunctions in the scholarly communication system.” That’s from the SPARC website (www.arl.org/sparc/), which includes full details on who’s involved and what the organization has done. SPARC has served as an incubator for “competitive alternatives to current high-priced commercial journals and digital aggregations,” an advocate for “fundamental changes in the system and the culture of scholarly communication,” and a source of educational campaigns “aimed at enhancing awareness of scholarly communication issues.”

SPARC, which began in 1998, now has nearly 300 institutional members and 200 coalition members. It cooperated in founding SPARC Europe in 2001 and is affiliated with major library organizations around the world.

SPARC has had some success—but it’s worth noting that “competitive alternatives” doesn’t always mean free or cheap. Some SPARC-incubated journals have prices that would astonish the casual observer, but they’re significantly cheaper than the commercial equivalents. SPARC has assumed sponsorship of the forum and newsletter begun by Peter Suber; see SOAF and SOAN.

Now: SPARC (www.arl.org/sparc/) now claims “over 200 North American members” in seven Canadian provinces, 45 U.S. states and DC, in addition to “several institutions from outside North America and affiliate memberships of six major library associations.” There are sister organizations in Europe and Japan. SPARC celebrated its 10th anniversary in 2007.

Suber, Peter

Then: The guru of open access. A former philosophy professor who now researches and writes on open access, operating the key weblog in the area as well as SOAF and SOAN (which see). One of those rare gurus who responds to tough questioning with careful, thoughtful comments instead of personal attacks or open disdain. He may or may not turn you into a believer, but at least you’ll understand what’s being said—and why.

Now: That sounds about right…but “former” is incorrect (he’s a research professor of philosophy at Earlham College), and his current affiliations include open access project director at Public Knowledge and senior researcher at SPARC.

swamping

Then: What happens when one set of resources becomes effectively inaccessible because it’s buried by much larger resources. Far more likely in digital environments than in the physical world: After all, if you own a Kia, you won’t lose it in a parking lot because of all those Chevys and Hondas. But when you take two million bibliographic records (with 30-32 significant words each) and lump them into a common index with the text of 120,000 books (with an average of 70,000 words each), it becomes much more difficult to locate books with titles that aren’t peculiarly distinctive. Similarly, if you search a 10,000-record ornithological database simultaneously with a 45-million-record bibliographic database, with automatic merging of results, it may be hard to find records related to the birds, as many of the same words are likely to appear far more often in the 4,500-times-larger database. Swamping can be prevented by intelligent systems design; it can usually be ameliorated by intelligent search strategies, but that’s probably the wrong place to do it. (I don’t write much about this here, but thinking about it is part of what I do for a living.)

Now: Strike the last sentence as no longer applicable. Obviously, fielded searching helps solve this immediate problem—when fielded searching is available. Otherwise, you’re either relying on “relevance” computed according to an algorithm you can’t see, or doing search refinement—which may or may not be helpful. I believe this continues to be a problem, and can only hope that better minds than mine are actually working on it. It bothers me that I’m not sure we’d know whether they were succeeding: When swamping occurs, you definitely get results—just not the results you might actually need.
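
For the skeptical, here’s a toy sketch in Python of how a naive merged ranking buries the small database. The collection sizes and term frequencies are invented purely for illustration; no real system is quite this simple-minded, but the arithmetic is the same.

    import random

    random.seed(42)  # reproducible toy data

    # 20 ornithology records matching "swift," mentioning it once or twice each.
    small = [("ornithology", random.randint(1, 2)) for _ in range(20)]
    # 1,000 matching records from a far larger general database (cars, authors,
    # adjectives...), mentioning "swift" one to three times each.
    large = [("general", random.randint(1, 3)) for _ in range(1000)]

    # Naive merged ranking: highest raw term frequency first.
    merged = sorted(small + large, key=lambda record: record[1], reverse=True)
    hits = sum(1 for source, freq in merged[:20] if source == "ornithology")
    print("Ornithology records in the top 20:", hits)  # almost certainly zero

Nothing in the small collection is less relevant; it’s simply outnumbered, which is why the fix belongs in systems design (fielded searching, per-collection scoping) rather than in user cleverness.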

TCBR

Then: Technology Consumer Bill of Rights, a proposal put forth by Sen. Ron Wyden (D-OR) and Rep. Chris Cox (R-CA). The bill “aims to ensure that consumers can use digital media as freely as analog media for home use,” according to a December 2002 PC Magazine note. In other words, it is or was one of many bills attempting to redress some of the imbalance in DMCA. As you would expect, Jack Valenti forthrightly said, “The spirit of these resolutions, disguised as pro-consumer, is actually anti-consumer.” In Jack Valenti’s world, that’s exactly right. Note that neither strong-copyright advocacy nor the desire to rebalance copyright follows party lines: Democrats and Republicans are on both sides of these issues.

Now: The fate of this bill was the same as the fate of all other bills in the last six years designed to redress the enormous imbalance in copyright, and more particularly the extent to which DMCA nullifies fair use. To wit, it went nowhere—and people like Valenti once again got away with calling black white and Big Media protectionism “pro-consumer.”

technolust

New: I started talking and writing about technolust and the need to avoid it no later than 1992—when I presented “The death of print, Project Xanadu, and other nightmares” at the Arizona State Library Association. You can find that speech (as it was prepared, which is pretty much how it was given) at waltcrawford.name/aztalk.htm, and I haven’t touched a word of the talk in the 17 years since it was written. I certainly didn’t invent the term, although I may have been one of the first to use it with regard to technology in libraries. I’m delighted to see others take up the call that we need to find balance and avoid technolust.

top technology trend

Then: I’m indebted to Cory Doctorow and the Boing Boing weblog for this formulation—but the key point is one I’ve been thinking about for years.

The last twenty years were about technology. The next twenty years are about policy. It's about realizing that all the really hard problems—free expression, copyright, due process, social networking—may have technical dimensions, but they aren't technical problems. The next twenty years are about using our technology to affirm, deny and rewrite our social contracts: all the grandiose visions of e-democracy, universal access to human knowledge and (God help us all) the Semantic Web, are dependent on changes in the law, in the policy, in the sticky, non-quantifiable elements of the world. We can't solve them with technology: the best we can hope for is to use technology to enable the human interaction that will solve them.

On that note: I have a special request to the toolmakers of 2004: stop making tools that magnify and multiply awkward social situations (“A total stranger asserts that he is your friend: click here to tell a reassuring lie; click here to break his heart!”) (“Someone you don't know very well has invited you to a party: click here to advertise whether or not you'll be there!”) (“A ‘friend’ has exposed your location, down to the meter, on a map of people in his social network, using this keen new location-description protocol—on the same day that you announced that you were leaving town for a week!”). I don't need more "tools" like that, thank you very much.

Cory Doctorow is certainly no Luddite; his weblog and his science fiction both make that clear. And, although I sometimes have fun with the concept (you do know that the Luddites were quite right in what they were saying?), I’m not a Luddite either. I make my living from technology. This zine is only feasible thanks to a whole complex of advanced technologies. I love what technology has done for entertainment—even as I wonder about what it has done to entertainment. When it comes to libraries, there’s no getting around the significance of technology in where you are today and where you can be tomorrow.

But by now you should be figuring out that technology won’t solve the real problems that libraries face now and in the future. Maybe technological advances will provide some useful new tools, but Doctorow’s examples are vivid reminders that too many new tools come along with unintended consequences (and sometimes intentional consequences) that need to be coped with.

Libraries work effectively by integrating new technologies into an ongoing continuum of collection and services—and librarians work most effectively when they recognize that most users (and, for most public libraries, the most dedicated users) are less devoted to constant technological change than they are to the heart of libraries: Good people offering effective access to varied, worthwhile collections that center on books.

My top technology trend for 2004, when it comes to libraries and librarians, is the same as for 2003, 2002, and before: Toning down the technology in favor of the humanity.

Now: Another essay quoted in full because it’s still relevant. For the first time in some years, I’ll be doing a “top tech trends” panel in January—but in Toronto for OLITA, not as part of ALA or LITA. I’ll have a range of possible trends—but I’ll certainly mention this one as perhaps the most important.

Sadly, in my opinion, Doctorow’s second paragraph has gone wholly unheeded. Instead, there are lots of new applications that either invite you to spam all of your email contacts with invitations or, worse, “just do it.” Unfortunately, I think it’s now fair to say that the last quarter-century has been all about technology (and the last decade about believing things we knew weren’t true, such as “It’s OK to have 75% of your gross income committed to housing payments when the rate resets,” but that’s economics, not technology). Will we see more balance over the next quarter-century? One can only hope.

TWAIN

Then: The image-input protocols used by almost all scanner manufacturers. I’ve always heard that TWAIN stands for “Technology Without An Interesting Name” because the committee working on it was sick of amusingly-derived acronyms. Some current sources argue that there’s historical evidence that TWAIN took its name from the Rudyard Kipling poem “The Ballad of East and West”—you know, “and never the twain shall meet.” And like an idiot, I ran the same silly paragraph about the issue twice in two months last year, albeit in different running sections. None of this matters at all to scanner users, who now have this silly idea that scanners should just work (because they mostly do), but if some old Logitech employee (or someone else involved in the TWAIN negotiations) has unimpeachable evidence, I’d love to see it.

Now: TWAIN continues, in revised and expanded form, to be used by almost all scanners. My reference to Logitech may be obsolete: It appears it’s not part of the current TWAIN Working Group (and, as far as I know, it’s no longer in the scanner business).

As to the name…well, the discussion page on the Wikipedia entry includes a discussion by someone who was there indicating that “Technology [or Toolkit] Without An Interesting Name” is not wrong (and maybe not right either)—but the article itself still says this is a myth and cites the equally-mythical (or non-mythical) Kipling reference, which I’d find odd given the all-caps name. But that reference is cited at www.twain.org. Either there’s some rewriting of history going on here, or people have bad memories, both of which are certainly possible. (Oddly, the article cited on twain.org itself notes both “Toolkit Without An Interesting Name” and Mark Twain…so maybe any story you choose is likely to be correct.)

UCITA

Then: Uniform Computer Information Transactions Act. This proposal, meant to be enacted by every state legislature, would (among other things) enshrine shrinkwrapped licenses as enforceable law at the state level. It was a bad proposal, designed in the apparent hope that it could be pushed through most state legislatures before there was strong lobbying against it.

That didn’t work. Maryland and Virginia passed UCITA; several other states responded by passing a Uniform Electronic Transactions Act, a “bomb shelter” to prevent companies from taking advantage of UCITA’s passage in those two states. UCITA efforts have stalled almost completely, including a downgrading of the proposal by the National Conference of Commissioners on Uniform State Laws. It’s still a potential threat, but a separate effort to pass state “Super-DMCA laws”—laws that go even farther than DMCA in unbalancing copyright—seems more dangerous at this point.

Now: According to a Wikipedia article that’s distinctly non-neutral as of 1/8/09, pushing the idea that UCITA’s a good idea, no other state has passed UCITA. Several states approved “bomb shelter” legislation. Apparently, the last states to have UCITA introduced were Oklahoma and Nevada in 2003; it failed in both cases. I’m inclined to believe it’s a dead duck.

unconferences

New: An unconference is a non-invitational conference or portion of a conference with relatively low (or no) registration fees in which there is no traditional extended program planning process. Most or all presentations and sessions are decided on by the participants in the conference, either at the beginning of the conference or in a brief period before the conference. In a good unconference, all or almost all attendees are participants, not just audience.

That’s the definition I use in “Unconferences and library camps” on the PALINET Leadership Network, pln.palinet.org/wiki/index.php/Unconferences_and_library_camps, and I think it’s as clear a definition as is available. Wikipedia’s definition could include ALA and every state library conference, making it nearly useless.

(Theoretically, bar camps require that all those attending actively participate and forbid any advance scheduling—you can read more about the pure form at barcamp.org.)

I think the unconference idea is a really good one, not so much to replace traditional library conferences as to complement them (or, in a growing number of cases, to serve as one element of a traditional conference). The self-organizing aspect of an unconference allows for much shorter lead times on hot topics; the “on the spot” nature can, ideally, limit the extent to which certain voices are heard all the time while others—who might have more to say but aren’t as well known—are largely silent.

Unfortunately (and predictably), “unconference” is also a Hot Term and has been misused to apply to things that really shouldn’t fit—including, in at least one case, a conference where all the presentations were scheduled in advance.

You’ll find a lot more on unconferences at the PLN article noted above, and should also visit the related article on “Unconference and library camp practices,” which notes actual recorded practices at most library unconferences prior to 2009 and includes a tabular summary of those practices.

UTF-8

Then: The most common way to transmit Unicode®, the standard for display of multiple scripts. UTF-8 favors ASCII, your traditional non-accented character set, by sending those codes as single-byte characters, while most other characters require two or more bytes. Why isn’t Unicode an entry here? Because I haven’t spent much time on it. RLG is a founding member of the Unicode Consortium. It’s important (and yes, Eureka displays the non-Roman scripts that are currently supported by MARC21, using UTF-8 to do so), but I’m not one of the “Unicode people” at RLG.

Now: Unicode support has become nearly universal in operating systems and applications; non-Roman characters pop up on websites, in blogs and in email with no advance planning. It’s worth repeating that RLG was a founding member of the Unicode Consortium. As far as I know, UTF-8 continues to be the predominant transmission method for Unicode, if only because it doesn’t require extra bytes for ASCII.
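
A quick sketch of that variable-length behavior, in Python:

    # ASCII code points survive as single bytes; everything else grows.
    for ch in ("A", "é", "中", "𝄞"):  # ASCII, Latin-1 range, CJK, beyond the BMP
        print(ch, "U+%04X" % ord(ch), len(ch.encode("utf-8")), "byte(s)")
    # Prints 1, 2, 3 and 4 bytes respectively. A pure-ASCII file is already
    # valid UTF-8, which is a large part of why UTF-8 won.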

weblogs

Then: Cites & Insights is not a weblog. That seems like a silly thing to say, but I continue to see it described that way from time to time. It’s not in reverse chronological order, it’s not a stream of items available directly on the web, it doesn’t use weblog software, and the intent is entirely different. Will I ever do a weblog (or a LISNews journal, which sure looks a lot like a weblog)? Possibly. Will Cites & Insights be transformed into a weblog? Absolutely not.

Does that mean I don’t regard weblogs as valuable? No. I don’t manufacture cars, but I consider them valuable too—and I don’t (knowingly) write fiction, but I certainly read it. I regard several weblogs as essential, several others as fascinating (there’s overlap), and quite a few others as intriguing in odd ways. Many people use weblogs for many worthwhile purposes—and who’s to define “worthwhile” except those writing and reading weblogs?

I could do without some of the “neoblogisms,” but that’s my problem. I don’t believe weblogs will or should replace traditional journalism, but that’s not the point. I do, in fact, believe that more libraries and librarians could make effective use of weblogs, although they might want to consider plans for that use before starting blogs…

Now: This item would appear under “blog,” and it would be quite different. It’s been a while since anybody called C&I a blog—and, to be sure, I have a blog (or, technically, three or four of them…) (If you’re counting: Walt at random is my blog. C&I updates is a single-purpose Blogger blog, used only to announce new issues, redundant for anybody who subscribes to Walt at random. Since it has at least 296 subscribers, I’ll keep it. PLN Highlights is my work blog—and it, too, is redundant if you subscribe to W.a.r., since I mirror each post there. The fourth? Technically, there’s a blog as part of LISNews, but it rarely has anything other than the occasional W.a.r. mirror (including all C&I announcements).)

That last sentence in the “then” material? I think the final clause has been ignored by too many people.

wikis

New: The only definition that applies to all wikis:

A wiki is a set of web pages created and managed using wiki software.

Not a very useful definition, to be sure, but every other definition is either wrong or questionable, at least for some wikis.

I have an outstanding essay explaining why other definitions are apt to be wrong, noting the defining characteristics of most wikis, suggesting why libraries should care about wikis and pointing to a few library-related wikis. Rather than repeat it here, I’ll point you to pln.palinet.org/wiki/index.php/Wikis_and_libraries.

From there you can link to “Blog or wiki—which tool to use?”—a clear discussion, original to the PALINET Leadership Network, that may help you decide which lightweight publishing tool is most appropriate for your group or library. It’s a little longer, but it even includes a table for those who are visually inclined.

WTF is FTW?

New: One of my “imaginary friends”—colleagues I’ve never met—suggested I add items about contemporary language oddities, such as l33t (or leetspeak) and texting shorthand. I considered the level of snark I was interested in projecting and concluded that this little essay would have to do, at least for now.

There are several things going on here, I think.

Specialized groups and fields have their own jargon, which simplifies communication because one word or acronym can stand in for a much longer explanation.

Jargon draws a circle around a group, separating the In Crowd—those who know the jargon—from everybody else. I think leetspeak has a lot of this.

Texting and variants such as Twitter almost require shorthand for efficiency. Many of the special forms serve to save keystrokes.

Some slang serves a subversive function. I’ve been astonished to find major daily newspapers using a four-letter abbreviation beginning with “M,” relating to attractive women who are presumed to have children, given that those papers would never run the spelled-out version or endorse it as a concept.

I’d never spell out WTF in Cites & Insights—and neither would I spell out the expansion that roughly half the sources use for FTW! I believe that most people who use it in the online areas I visit where shorthand gets used a lot (LSW Meebo and FriendFeed) mean “For the win.” At least I hope so.

Z39

Then: You may have heard of Z39.50 (a machine-to-machine search-and-retrieval standard for bibliographic data), Z39.2 (the standard that underlies MARC21), Z39.21 (ISBN), or any number of others. Z39 is the ANSI prefix assigned to NISO, the National Information Standards Organization, for use in library-related standards. NISO has a substantial website (which now includes PDF versions of NISO standards). It’s a little out of date, but Walt Crawford wrote Technical Standards: An Introduction for Librarians (second edition, G.K. Hall, 1991), still a good introduction to the field. (I was also the founding editor of Information Standards Quarterly, NISO’s quarterly newsletter. I have no current involvement with NISO)…

Now: NISO’s still around, and 2009 is the quarter-century mark for that name. One truly unusual aspect of NISO (www.niso.org) as an ANSI-accredited standards organization: All NISO standards are freely available from the NISO website. That’s rare in the standards field, where you can easily pay hundreds or thousands of dollars to acquire a set of standards.

zine

Then: What I call Cites & Insights—not because it’s the ideal generic title but because I can’t think of a better one…

Now: I use ejournal. Somehow it seemed odd to use “zine” for something like this, given usage elsewhere.

Cites & Insights: Crawford at Large, Volume 9, Number 2, Whole Issue 112, ISSN 1534-0937, a journal of libraries, policy, technology and media, is written and produced by Walt Crawford, Editorial Director of the PALINET Leadership Network.

Cites & Insights is sponsored by YBP Library Services, http://www.ybp.com.

Opinions herein may not represent those of PALINET or YBP Library Services.

Comments should be sent to waltcrawford@gmail.com. Cites & Insights: Crawford at Large is copyright © 2009 by Walt Crawford: Some rights reserved.

All original material in this work is licensed under the Creative Commons Attribution-NonCommercial License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/1.0 or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.

URL: citesandinsights.info/civ9i2.pdf