Trends & Quick Takes
Purchasing Trends in ARL Libraries
The June 2004 ARL Bimonthly Report (#234) includes “Serials trends reflected in the ARL statistics 2002-03” by Martha Kyrillidou, director of ARL’s statistics & measurement program. A few points may be worth noting (the report itself isn’t that long—but be aware that key elements are in tables that don’t automatically reach a print copy). Kyrillidou makes some 17-year comparisons between 1986 and 2003:
• Serial unit costs (the average cost per serial) in ARL libraries increased 215% (from $89.77 to $283), while monograph unit costs rose 82% (from $28 to $52) and the Consumer Price Index rose 68%. ARL library expenditures rose 128% over that same period.
• In the past few years, the rate of serial unit cost increases has slowed—from 10.2% in 1995 to 7% in 2003.
• In 1986, on average, ARL libraries spent 73% as much on monographs as on serials ($1.1 million compared to $1.5 million).
• In 2003, on average, ARL libraries spent 34% as much on monographs as on serials ($1.8 million compared to $5.3 million).
• Serials acquired by ARL libraries jumped significantly in recent years, probably as a result of Big Deals and other full-text aggregations: The average was 18,142 in 2003 compared to 15,919 in 1986.
• For those who say ARL libraries have given up on books and print, the truth’s not quite so simple: The average number of books purchased in 2003 was about the same as in 1986 (32,600), after years of being lower.
• ARL libraries get a lot more “nonpurchased serial subscriptions” now than they did in 1986: an average of 8,873 compared to 3,319. Thus, total serials in ARL libraries have risen substantially.
Should ARL libraries be buying more books than they are buying? Possibly—but at least the real numbers are no longer falling. (If you’re complaining that “average ARL library” is a meaningless concept in a population as wide-ranging as ARL’s membership, you’re right; that’s why I tried to follow Kyrillidou’s care in saying “on average, ARL libraries did” rather than “the average ARL library did” whatever.)
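The report’s percentages reduce to quick arithmetic, so they’re easy to sanity-check. Here’s a minimal sketch using the figures as quoted above; note that the monograph unit costs are rounded to whole dollars in the text, so the reported 82% increase doesn’t reproduce exactly from $28 and $52:

```python
def pct_increase(old, new):
    """Percent increase from old to new, rounded to the nearest point."""
    return round((new - old) / old * 100)

# Serial unit costs, 1986 vs. 2003, as quoted from the ARL report
print(pct_increase(89.77, 283.00))  # 215, matching the reported 215%

# Monograph spending as a share of serial spending (millions of dollars)
print(round(1.1 / 1.5 * 100))  # 73, the 1986 figure
print(round(1.8 / 5.3 * 100))  # 34, the 2003 figure
```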
Negative Results
The controversy arises first in medicine and related fields but goes much further: What happens when research yields negative results? If you’ve done research, you know the usual answer: Nothing at all—the project shuts down with no findings submitted for publication and as little publicity as possible.
In the humanities and social sciences, maybe that’s OK. I’m guessing most journals in librarianship wouldn’t be interested in scholarly papers indicating that careful study of factors in a situation showed no correlation. I’ve seen a few papers that could be read that way, but the authors manage to couch the findings in language that makes the results appear positive. Actually, a negative result could be significant if the study covers factors that are commonly assumed to be correlated. And in the social sciences, one negative result never really shuts down research possibilities; times change, as do study populations.
I know that, back when I was more active in library automation, some of us lamented the lack of reports and articles on experiments and projects that failed. We knew why there were few such reports: “How we did it bad” is no fun to write! In technology as well as science, however, there is much to learn from failed experiments and projects—if the stories are honestly told.
In medicine, but also in physics, chemistry, and most hard sciences, negative results do matter. For most sciences, publication of a negative result can save time for other researchers: “We went down this blind alley, so you don’t have to.” With medicine, the stakes are higher: life, death, and health. Unfortunately, when medical research is sponsored by companies with stakes in the outcome, there’s a correspondingly high stake in ignoring or suppressing negative results.
It’s hit the news from a couple of directions. The editors of a dozen high-profile medical journals are pushing for a standard that requires that all medical trials be registered, with positive and negative results reported—and backing that up with the threat of boycotting articles on medical trials not on the registry. At the same time, I see reports that the FDA may have suppressed negative results relating to antidepressants prescribed for juveniles—and in our local paper, the situation was described in a way that layfolk can understand and that neither sensationalized nor minimized the problem.
There are, in fact, journals dedicated to negative results. I’ve seen web sites for two, both open access. The Journal of Negative Results (ISSN 1459-4625) provides “an online medium for the publication of peer-reviewed, sound scientific work in ecology and evolutionary biology that may otherwise remain unknown.” Work to be published includes studies that “1) test novel or established hypotheses/theories that yield negative or dissenting results, or 2) replicate work published previously (in either cognate or different systems). Short notes on studies in which the data are biologically interesting but lack statistical power are also welcome.” The journal is at www.jnr-eeb.org.
The second, Journal of Articles in Support of the Null Hypothesis, appears to specialize in psychology:
In the past other journals and reviewers have exhibited a bias against articles that did not reject the null hypothesis. We seek to change that by offering an outlet for experiments that do not reach the traditional significance levels (p < .05). Thus, reducing the file drawer problem, and reducing the bias in psychological literature. Without such a resource researchers could be wasting their time examining empirical questions that have already been examined.
This journal is at www.jasnh.com.
Are there others out there? JASNH has published seven issues in three years (with a total of 12 articles); JNR seems to be a newcomer. It’s an interesting if difficult subset of the literature.
The Dead-Media Bogeyman
I stopped commenting on John C. Dvorak’s PC Magazine columns some time ago, but sometimes it’s impossible to resist. His July 2004 column, “The dead-media bogeyman,” offers excruciatingly bad advice. He doesn’t think digital photographs will end up on dead media—but that’s not the main problem. He asserts that CD and DVD standards “will probably have playability for a hundred years or longer.” A few paragraphs later, he lengthens that: “The CD/DVD formats look stable on into the future, with so much gear that it is highly unlikely they will become dead media before the year 2200. And if they do die, you can be certain that all the data will be moved forward onto something better.” [Emphases added.] “Be moved” without any conscious action, apparently. And, he says, “people back up their photos mostly onto CDs and DVDs, and they do it redundantly.” I’m sure you and everybody you know do redundant backups regularly of everything on your computer, right? And, you know, nothing better’s ever going to replace CD and DVD, so they’ll be around for two more centuries. (I’m sure that at least 100 million PCs were sold with 5.25" diskette drives, so we can be certain you’ll never have trouble reading such a diskette. Right?)
Quicker Takes
• How much PC can you buy for $300? If you’re willing to buy from Wal-Mart (I will not set foot inside that store), more than you might expect—but probably a lot less than you want. The Microtel SYSWM8001 runs a 1.6GHz AMD Duron, includes 128MB SDRAM, a CD-ROM drive (no burner), a 40GB hard disk, and integrated graphics and audio with no-name speakers. You don’t get a display for that price, of course. Software? Well, it runs the Sun Java Desktop System on top of SUSE Linux; printer compatibility is “limited,” but you do get StarOffice 7 and Mozilla 1.4.
• PDAs aren’t doing well in general, and Sony’s giving up the ghost: It’s halted production of Cliés in the U.S. for the rest of the year.
• Maybe PDAs will disappear into cell phones as people learn to squint at the tiny screens and tap-tap-tap at the keyboards; several companies now offer teeny-tiny hard disks, and some analysts expect to see them in cell phones in the near future. So two-inch screens are the future?
• According to Nielsen/NetRatings, broadband internet connections outnumbered dial-up connections for American home users in July 2004, the first time that’s been the case. Supposedly, 63 million web users connected via broadband in July, compared to 61.3 million via dialup. Overall internet penetration has leveled off. Marc Ryan of Nielsen/NetRatings put an odd spin on it, or established himself as one of those short-term prophets almost certain to be right: “We expect to see this aggressive growth rate continue through next year when the majority of Internet users will be accessing the Internet via a broadband connection.” Well, yes, if a majority is already using broadband, then “aggressive growth” should assure that a majority uses broadband by the end of 2005.
• Are micropayments finally poised for the big time? That’s what BitPass believes, based on the success of iTunes and other music-download systems. According to a September 7 News.com article, TowerGroup claims there were more than $2 billion in micropayments in 2003—and projects that to grow to $11.5 billion by 2009. As Buzz Lightyear would say…
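Two of the figures above reduce to quick arithmetic. A sketch using the numbers as quoted (treating 2003 to 2009 as six compounding years is my assumption, not TowerGroup’s stated method):

```python
# Broadband vs. dial-up among American home users, July 2004 (millions)
broadband, dialup = 63.0, 61.3
share = broadband / (broadband + dialup)
print(f"{share:.1%}")  # 50.7% -- a bare majority, as Nielsen/NetRatings reported

# Implied annual growth rate for micropayments, $2B (2003) to $11.5B (2009)
cagr = (11.5 / 2.0) ** (1 / 6) - 1
print(f"{cagr:.1%}")  # about 33.8% per year
```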
All original material in this work is licensed under the Creative Commons Attribution-NonCommercial License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/1.0 or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.