Pointing with Pride Part 9
I found the “missing issue”—and it turns out to be a slightly different problem. Somehow, I managed to mark both September 2004 and October 2004 as Whole Issue 54—and have been one off ever since.
The solution shows up here and in Part 10.
In retrospect, this seems like a geeky issue, full of PC stuff—not unusual for 2001. This portion of Product Watch is revealing not only for the price and weight but also for a numbers game that has, I believe, disappeared in recent years.
The Really Big Show
Who would pay $3,499 for a 36" 4x3 TV set? Even Sony XBRs don’t cost that much, and they’re the best direct-view sets you can buy (in my opinion). But Princeton Graphics’ Ai3.6HD isn’t just a TV set. It’s also a multimedia monitor with built-in CPU, 16MB flash memory and 64MB SDRAM, Internet access, and a bunch of connections—as well as a TV tuner and internal line doubler. It’s a high-definition display (“compatible with 480p, 720p, and 1080i input”) but requires an external HD tuner—and, given its 4x3 rather than 16x9 ratio, it’s not a good choice for HDTV (or DVD viewing, for that matter).
PC Magazine gives this beast (210 pounds) a five-dot rave and calls it a “killer display” that’s “a natural for board rooms, company lobbies, training facilities, or any other location where a versatile display is desirable.” It’s certainly one of the biggest PC-compatible displays you can buy, and appears compatible with almost any input.
Unfortunately, you have to be wary of some claims. “As a computer monitor, the Ai.36HD can display at resolutions of 640-by-480 (85 Hz maximum), 800-by-600 (75 Hz), and 1,024-by-768 (60 Hz).” Yes and no. Two sentences later we learn that this display has an Invar Shadow Mask CRT (that is, it’s not a Trinitron display) with a 0.90-mm stripe pitch (which is confusing, because stripe pitches are for Trinitron/Diamondtron displays: shadow mask CRTs normally have dot pitches).
Do the math. Assuming this uses TV-set standards rather than monitor standards, 36" is the visible diagonal measure (always true for TV sets) rather than the tube size (the phony number used in monitor ads). That means the visible area is 21.6x28.8 inches. There are 25.4 millimeters to an inch. Dividing by the dot pitch or stripe pitch of 0.9mm, we get 28.2 dots per inch. Thus—barring magic—the tube can physically resolve 813x609 dots. Any resolution higher than 800x600 represents wishful thinking and approximate display—the unit can accept higher resolution but not accurately display the results.
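The arithmetic in that paragraph can be checked in a few lines. A minimal sketch, using the figures from the text (36-inch visible diagonal, 4:3 aspect, 0.9 mm pitch):

```python
# Resolvable-dot arithmetic for a 36" 4:3 CRT with a 0.9 mm pitch.
diag_in = 36.0                      # visible diagonal (TV-set convention)
width_in = diag_in * 4 / 5          # 4:3 aspect forms a 3-4-5 triangle
height_in = diag_in * 3 / 5
pitch_mm = 0.90
dots_per_inch = 25.4 / pitch_mm     # mm per inch divided by pitch, ~28.2 dpi
h_dots = width_in * dots_per_inch   # horizontal dots the tube can resolve
v_dots = height_in * dots_per_inch  # vertical dots
print(f"{dots_per_inch:.1f} dpi, about {h_dots:.0f} x {v_dots:.0f} dots")
```

Rounding lands you at roughly 813x610 dots, which is why anything beyond 800x600 is wishful thinking on this tube.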
I’m not knocking Princeton. Fun and games regarding actual resolution seem to be standard practice for very large data displays. Note that 0.90mm is a TV figure. PC monitors typically have 0.24-0.26mm stripe pitch or dot pitch, sometimes a little finer, almost never coarser except on cheapo no-name displays.
The lead item in Trends and Quick Takes (this was pre-ampersand) was “Perfect Compression!” I just love this sort of thing…
Any long-time Analog readers out there? You might remember the Dean Drive, an obsession of the great editor John W. Campbell, Jr. It had many of the elements of perpetual motion machines and true exothermic systems—that is, systems that create energy without converting matter. As I remember, once an independent party actually tested the Dean Drive, they determined that its supposed miraculous properties (demonstrated by reducing the measured weight of a platform running the drive) came about by disturbing the scale itself.
Perfect compression is like perpetual motion or faster-than-light travel (without using workarounds such as black holes). It’s mathematically impossible, for reasons that don’t require much more than common sense to demonstrate. It is mathematically impossible to create a program that will compress any file by at least one bit in total length (when combining the output file and needed tracking information) in such a way that the original file can be restored without change.
That’s lossless compression—what you get in Zip archives, for example. It’s quite different from lossy compression (e.g., Jpeg, MP3, MPEG-2 as used for DVDs), where the nature of the data is known and the intent is to restore a version that’s perceived as equivalent to the original. You can’t use lossy compression for spreadsheets, word processing, or software itself: there are no characters in this text that a person can’t read because they’re obscured by other characters or because your verbal acuity doesn’t recognize them or care about them. Notably, lossy compression requires detailed knowledge of the kind of file being processed.
Here’s a common sense demonstration that perfect lossless compression is impossible. If it were possible, then you could remove at least one bit from any file—including a file that’s already been compressed. Thus, logically, you could reduce any file to a single bit without loss of original information. (Actually, you could reduce any file to zero bits if perfect compression were possible.)
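The counting version of that common-sense argument can be made concrete in a few lines (a sketch; n = 16 is just an illustrative size):

```python
# Pigeonhole check: there are 2**n distinct n-bit files, but only
# 2**n - 1 files of every shorter length combined, so no lossless
# scheme can map every n-bit file to a strictly shorter one.
n = 16
n_bit_files = 2 ** n
shorter_files = sum(2 ** k for k in range(n))  # lengths 0 through n-1
print(n_bit_files, shorter_files)  # the second number is one fewer
```

Two different inputs would have to share a compressed output, and the original could not be restored without change.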
In practice, any lossless compression algorithm will expand some files while compressing others. That appears to be mathematically demonstrable as well, but we’ve reached the limits of my mathematical prowess. In real life, of course, it works that way: Zipped archives of previously compressed files can be considerably larger than the originals.
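That expansion is easy to demonstrate empirically. Here’s a sketch using Python’s zlib (the same deflate algorithm Zip uses) on random, effectively incompressible data:

```python
# Compressing incompressible (random) data produces output larger than
# the input: deflate falls back to "stored" blocks plus header overhead.
import os
import zlib

data = os.urandom(100_000)           # random bytes: effectively incompressible
once = zlib.compress(data, level=9)  # "compressed" copy
twice = zlib.compress(once, level=9) # compressing the compressed output
print(len(data), len(once), len(twice))
```

Each pass through the compressor adds overhead rather than removing bits, exactly as the counting argument predicts.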
But where there’s money, there’s always a will. A January 16 Wired News item discusses ZeoSync, a Florida company that announced on January 7 that it “has succeeded in reducing the expression of practically random information sequences.” The press release asserts flatly, “ZeoSync’s mathematical breakthrough overcomes limitations of data compression theory.” More specifically, Peter St. George asserts that the company’s algorithms constitute “a significant breakthrough to the historical limitations of digital communications as it was originally detailed by Dr. Claude Shannon in his treatise on Information Theory.” That seems to negate the “practically random” loophole earlier in the release.
The press release is riddled with trademarks and oddly worded claims. Supposedly, the company collaborates with top experts throughout academia. The Wired item includes a brief interview with St. George, one that includes no details at all but asserts that details would be announced in “a few days” from January 16. Naturally, ZeoSync plans to file a bunch of “proprietary patents.”
What happened “a few days” later? Nothing that’s been reported. A handful of online and press outlets ran portions of ZeoSync’s press release without much skepticism; some, including New Scientist, were more doubtful.
Claims of this sort have popped up over the years, sometimes as part of startup companies, including WEB Technologies in 1992 and Jules Gilbert in 1996 and beyond. (Gilbert didn’t claim perfect compression—but did claim that 100:1 or 1000:1 lossless compression was feasible “if the input file is sufficiently large.” Gilbert also claimed that he could compress a 3MB file to 50KB without loss of information.) Generally, such claims fade away after a few months as they are put to independent test.
Could ZeoSync be the exception? Watch for further news, but don’t be surprised if there isn’t any.
According to Wikipedia, “the technology was never demonstrated, and the company’s website disappeared a few months later.” Why am I not surprised?
Here’s the first segment of The Library Stuff—back from when I still thought Pew Internet & American Life might be an objective research operation:
Jones, Steve, et al., “The Internet goes to college,” Pew Internet & American Life Project, September 15, 2002 (www.pewinternet.org), and Surmacz, Jon, “Libraries don’t stack up,” Darwin, September 18, 2002 (184.108.40.206/learn/numbers/index.cfm)
I chose the Darwin story almost at random as one of many odd little stories about the recent Pew survey report. The report itself is interesting but also raises a few unanswered questions. Take, for example, the most talked-about finding: 73% of college students say they use the Internet more than the library, while 9% use the library more than the Internet. My question would be: What proportion of that 73% are, to some extent, using Internet materials that are available because of library subscriptions, specifically online databases and full-text aggregations? Without an answer to that question, the number is fairly meaningless.
Some of the report comments strike me as odd, such as this one: “Surprisingly, only about half (47%) of college students said they are required to use email in their classes.” (Emphasis added.) Why should students be required to use email in their classes? Back when dinosaurs roamed the earth and I was at UC Berkeley, I’d guess most students never communicated directly with their professors during most courses, and none of us was ever required to use postal mail or submit written comments as part of our courses. What makes email so special that it should be required? This only makes sense if the assumption is that all interaction must be forced into technological channels. In practice, three-quarters of students did send email to faculty in classes and 82% of the students have been contacted via email by professors, so I don’t see the problem.
Maybe Pew does have a technological imperative. On p. 19, the researchers note that students aren’t committed to distance learning. “Their current behaviors show them using the Internet as an educational tool supplementing traditional classroom education, and it may be difficult to convince them to abandon the traditional setting after they have had the kinds of attention afforded them in the college classroom.” (Emphasis added.) Again, what makes it necessary to “convince” students to abandon models that work well? There’s another point here: How is it that the Pew researchers can casually assume that student habits and practices will simply carry forward into the workplace? The shock of the real world, both staggering and refreshing, seems likely to be as relevant to today’s college students as to any other.
Finally, although the methodology for the statistical surveys is stated well and appears to involve a large enough sample for reasonable confidence, there are no numbers attached to the observational notes, although these play a significant role in the text. Were there three observations? Three hundred? Are Chicago colleges typical of the nation as a whole?
The study’s worth reading if you haven’t already encountered it—but I would probably have ignored it except for the ancillary reports. Surmacz’s story is typical, with a wildly misleading headline followed by an odd story. In the very first paragraph, Steve Jones says that “the findings shouldn’t alarm librarians,” yet the headline says “libraries don’t stack up.” Later, Jones says that students used to go to the library to study and socialize—but now they’re “much more purposeful…Many go there to study or get materials.” Surmacz turns that into “students go to the library with one purpose—to do research.” In practice, neither is quite what the study says, and that part of the study is weakened by its pure observational nature. Here’s a direct quote: “Rather, email use, instant messaging and Web-surfing dominated students’ computer activity in the library.” That’s research? I see nothing in the report saying that students don’t socialize in libraries, and I’ve been in enough academic libraries in the last few years to consider such a finding highly improbable.
Just as a little reminder that copyright hardliners can sometimes be really hardline—and, by the way, that elected officials happily do Big Media’s bidding—here’s part of a Copyright Currents:
A Little Collateral Damage
In a related earlier story, Orrin Hatch (R-Utah) surprised even some copyright hardnoses during a Senate Judiciary Committee hearing. According to an AP article, Hatch asked technology executives about ways to damage computers engaged in file trading. A spokesman for MediaDefender, a company that builds technology to download files slowly so that other users can’t get at them, said “No one is interested in destroying anyone’s computer.”
Hatch interrupted: “I’m interested.” Later: “If we can find some way to do this without destroying their machines, we’d be interested in hearing about that. If that’s the only way, then I’m all for destroying their machines.” Hatch said if a few hundred thousand people suffered damage to their computers, the online community would realize the clampdown was serious. [Emphases added.] Senator Patrick Leahy (senior Democrat on the committee) found this a bit much. “The rights of copyright holders need to be protected, but some draconian remedies that have been suggested would create more problems than they would solve.” You think?
Hatch issued a brief press release the next day “clarifying” what he’d said:
I am very concerned about Internet piracy of personal and copyrighted materials, and I want to find effective solutions to these problems.
I made my comments at yesterday’s hearing because I think that industry is not doing enough to help us find effective ways to stop people from using computers to steal copyrighted, personal or sensitive materials. I do not favor extreme remedies—unless no moderate remedies can be found. I asked the interested industries to help us find those moderate remedies. [Emphasis added]
Edward Felten notes the addition of “personal or sensitive” to the mix—and that the press, among others, should be alarmed by this addition.
The first product in Interesting & Peculiar Products, the Kaleidescape Movie Server, with an update:
Sometimes when I’m feeling affluent, it’s good to be reminded that the term has many meanings. Sound & Vision certainly isn’t aimed at plutocrats. Compared to high-end stereo magazines, it’s Everyman’s publication. Which makes John Sciacca’s highly favorable review of this device (in the February/March 2004 issue) all the more amazing.
“This device” is a “system that does for movies what hard-drive storage has already done for music.” Understand the problem that’s being solved: “Why should you be forced to enjoy your DVDs in the same old 20th-century manner?... And how do you manage that library of 100, 200, or 500 titles? How do you remember what movies you have or decide what you want to watch?” 100 DVDs: That’s enough to require a four-foot shelf! No wonder people are desperate for a solution! What if they had 200 books or CDs? How would they ever find what they wanted? What to do, what to do?
The solution consists of a DVD reader, a movie player, and a server. The server holds up to 12 hard disks. All the pieces connect via “Fast” Ethernet (100Mbps, not 1Gbps). The movie player connects to your TV. You load all your DVDs onto the hard disk, pulling information from a web-based database in the process, then play them from the server. The database service makes this into “a video godsend,” according to the review, because it makes “the act of selecting a movie entertaining in itself.” You can sort by actor! You can sort by genre! You can sort by director or MPAA rating! Heck, you can browse by the cover—let’s see you choose one out of 200 boring old physical DVDs by looking at covers!
Oh, and when you pause on a cover, the device gives you other titles that are “like” that one. “This sounds simple—Amazon.com does it all the time—but I found it to be phenomenally cool, and I spent lots of time with it.” Sciacca even made a game out of predicting what Kaleidescape would pick. (I suppose you could do that with Netflix, which has a great “more like this” capability—but that would miss the coolest aspect of this server, coming soon.)
Here’s what’s really cool. You get all this functionality for a mere $27,000 with enough disk space for 160 DVDs (presumably four 300GB drives). Since you spent as much as $3,200 for those 160 DVDs, this seems like a real bargain: You’re paying a bit less than nine times as much so you don’t have to alphabetize boxes and can do neat sorting. If you want to store 440 DVDs, the maximum capacity of one server, that will be $33,000. If you have two TVs, figure another $4,000 for another movie player—and, after all, a good DVD drive would cost $100 or so!
By the way, the lab tests of the unit were “slightly disappointing,” given that it emulates a progressive-scan DVD drive. Well, you know, for a mere $26,500 more than a first-rate DVD player would cost, or $23,000 more than 160 DVDs and a first-rate player, what do you expect? Perfection?
I guess I’m not really affluent after all. We own more than 70 DVDs, but so far keeping track of them hasn’t been an issue. If it were, I think I could bring myself to key the necessary information into Access or Excel so I could do all those fancy sorts. At least to save $26,000, I could!
OK, “four foot shelf” for 100 DVDs was wrong. That might be true for individual traditional packages, but I have a set of 60 DVDs that fits in a 6”x5.5” space, and modern slimpack TV-series sets typically offer 6 DVDs in a ¾" package. Newer Kaleidescape units can cost as “little” as $13,000—but they’re still absurdly expensive. Oh, and the DVD folks sued Kaleidescape for violating the DVD license…somehow, Big Media’s afraid that somebody who blows $13K and up on a player will borrow $20 DVDs and rip them to save money. So far, Kaleidescape’s winning.
But then there’s a somewhat ugly followup. Kaleidescape’s now pushing the notion that their hot new DVD chips make regular DVDs look just as good as Blu-ray—much as Toshiba intimates that their upconverting DVD player offers picture quality equivalent to Blu-ray. We know why Toshiba’s pushing this physical impossibility (after all, they’re still bruised from their singular support of HD DVD). Kaleidescape? Hard to say. Do note “physical impossibility”: It is literally impossible for any DVD upscaler to offer actual picture quality equal to a Blu-ray disc. You can interpolate, you can upsample—but you can’t generate information that isn’t there.
Some pundits never change—and while the internet certainly isn’t “the undoing of society and civilization,” it may (or may not) have been the undoing of PC Magazine as a print magazine:
“The Internet will prove to be the undoing of society and civilization as we know it.” Why? Because of “the Web’s natural ability to remove normal interpersonal structures that prevent society from falling into chaos.” Hmm? “Almost everyone on the Net is anonymous.” “Haughty bloggers” who “hide behind a good online template” are taken seriously and “may even become famous” if he/she stays hidden long enough. The entire political scene has become totally dichotomous, and that’s “thanks to the net.” “If it were up to me, I’d shut down the Net tomorrow and make people get out of the house and mingle.”
Who’s writing this over-the-top screed? John C. Dvorak, or some whack job posing as Dvorak successfully enough to take over Dvorak’s PC Magazine column (23:19, p. 61). And, of course, Dvorak has a special weekly column that only appears on…the Web. For which I suspect he makes very good money. Little wonder that the best letter four pages earlier in the issue offers “proof positive that John Dvorak is the complete idiot that I’ve believed him to be all these years” for claiming that the “D” in Class D audio amplification stands for “digital.” (It doesn’t, and Class D amplifiers have been around for a long time.) The last line of the letter was good enough to be the callout for the letters page: “John Dvorak’s column is a vastly entertaining piece of highly opinionated fiction.” Except it’s rarely entertaining these days.
Here’s a segment of Trends & Quick Takes that’s still relevant…
Patent Holding Companies
A December 16 (2004) news.com story by John Borland notes that Acacia Research is buying Global Patent Holdings. So what? So this: Global Patent Holdings is one of those beloved companies whose only products appear to be litigation and licenses—companies that buy patents developed elsewhere, then make the broadest possible claims and threaten to sue any company deemed in violation of the patents.
As you should know, some technology-related patents are wildly overbroad—but for many companies, paying for a license is less expensive and less hassle than going to court and attempting to invalidate the patent. The story begins, “In the streaming media business, a letter from Acacia Research usually means one thing: the threat of a patent lawsuit.” The purchase will make Acacia more of a “patent powerhouse”—the CEO explicitly says the goal is “becoming the leading technology licensing company.”
Not “the company that creates the best technology and licenses it.” Creation—“the progress of science and useful arts” as the Constitution calls it in the copyright-and-patent clause—isn’t what these companies are all about. These companies produce licenses and litigation. (Former Microsoft CTO Nathan Myhrvold has founded a similar company, Intellectual Ventures, with close to a thousand patents already.)
I’m not wild about patent holding companies. Edward Felten disagrees, in a January 12, 2005 Freedom to Tinker posting: “From a policy standpoint I don’t see a problem.” He makes some good points, if we’re dealing with legitimate patents. Patent holding companies can provide a level ground for smaller inventors: True. Inventors should be able to focus on invention, not on extracting royalties: Also true.
As Felten says, “those who support rational patent policy should focus on setting up the right patent rules (whatever they are), and applying those rules to whoever happens to own each patent.” He’s right, of course: My outrage at patent holding companies is based on the kind of patents we hear about and the overbroad claims. If smaller companies and inventors actually do rely on patent holding companies to gain justifiable rewards for their real inventions, there’s no reason to object.
The two comments I saw on the posting when I downloaded it (the day it was posted—there may be more since) both acknowledged this. Grant Gould noted what’s needed to make the patent system “economically efficient” (and just from a policy perspective): “strong prior-art investigations, a more objective obviousness criterion tied to the likelihood of reinvention during the patent term, an independent reinvention defense to infringement claims, increasing renewal fees tied to the price of a license.” “Skopo” says Felten “misses the point”—which is not that the holding companies have no other business but that some of the patents being enforced are overbroad. I don’t know that Felten misses that point, but it’s a good one. I withdraw my general outrage over companies whose only business is to enforce patents they purchase, although their aggressiveness may itself be a problem. The bigger problem is patents that are too broad and, in many IT-related cases, should never have been issued.
I didn’t coin “Life trumps blogging”—although I’d love to take credit for it. That’s the title for the lead Perspective in this issue, an issue in which the entire remainder is one very large Perspective springing from a long piece at LISNews.
I’ll recommend the latter piece, but it doesn’t excerpt neatly. I think it stands up better and better as we near 2010…
I won’t quote the “Ltb” piece either, but I will quote the last two paragraphs, for those who skimmed the earlier piece (or read only some snarky reactions) and somehow think it was actually slamming blogging:
Why do you blog? Farkas’ survey of the biblioblogosphere revealed a number of interesting reasons. I’ll argue that fame and fortune should never be motivations for library blogging. Otherwise, almost any reason will do—except, I believe, “because everybody should have a blog.”
Life trumps blogging. For that matter, life usually trumps writing. But for most of us, most of the time, life has room for secondary pursuits. All the writers noted have continued to blog or have come back to blogging, because they still have something to say.
In lieu of trying to summarize “Scan This Book?” (why bother? Kevin Kelly is another of those who are always treated as brilliant futurists no matter how often they’re wrong or how silly they are), I’ll quote a brief item from My Back Pages that may help show why I shed no tears when Business 2.0 disappeared:
Ethics are for Suckers
The article title is “Tricking out those parked domains,” in the “What’s cool” section of the May 2006 Business 2.0. It’s a story about websites that are nothing but links to advertisers. They’re con jobs: They serve no purpose other than to garner ad revenue when someone clicks on a link. Now, they’re getting fancy: Services will add a few hundred words of “content” to improve the chances that searchers land on one of these sites, by gaming web search engine algorithms.
Many of the sites are domain names that might be plausible, or domains snatched because their original owners didn’t renew them promptly, or domains that spell words slightly differently.
The article isn’t denouncing these sites. It’s offering “A few cheap and easy secrets [that] can help you capture a bigger share of the Internet ad boom.” Next to “What’s cool” at the top of the page it says “Playing the angles.” After all, you might make money.
I’ll let this one go with part of the lead item in Bibs & Blather, an item that may say more about the state of library literature than anything else.
On Being Cited
I saw it as the first item on my chatterwall on the Library 2.0 Ning, from Marcus Elmore on March 21:
Hi Walt—The new issue of C&RL arrived and I opened it only to discover that you’re one of the 28 most frequently cited LIS scholars of the past decade—congrats!
“Well, that’s interesting,” I thought—particularly given that I’m not a scholar at all. Not having C&RL at hand, I contacted editor Bill Potter, who was kind enough to send the table of “Most Cited Personal Authors, 1994-2004” from “Analysis of a decade in library literature: 1994-2004,” by Kelly Blessinger and Michele Frasier, College & Research Libraries March 2007, pp. 155-169.
When I first looked at the table I noted a couple of things (after sending a note about this recognition to select superiors and coworkers):
• I’m one of only two on the list (31 names—28 ranks but with three ties) who aren’t academic librarians. The other: Maurice Line, director of the British Library. For that matter, it appears that 25 or 26 of the 31 are library school faculty…
• As far as I can tell, only eight of the 31 are women, in a woman-dominated profession.
A couple of caveats: I’m not the 27th most widely cited author for that period—I’m the 27th most widely cited in 2,220 journal articles from ten of 28 LIS journals meeting the study’s criteria. It’s quite possible that I’d fall out of the top group if all 28 were studied…
I’ll save the actual Issue 100 for the last section.
Cites & Insights is sponsored by YBP Library Services, http://www.ybp.com.
Opinions herein may not represent those of PALINET or YBP Library Services.
Comments should be sent to email@example.com. Cites & Insights: Crawford at Large is copyright © 2009 by Walt Crawford: Some rights reserved.
All original material in this work is licensed under the Creative Commons Attribution-NonCommercial License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/1.0 or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.