Trends & Quick Takes
There’s a problem with patents—several problems, actually. One set of problems has mostly to do with software patents, and one easy solution would be to return to the days when you couldn’t patent an algorithm. That’s not likely to happen. Short of that, there are at least two overlapping problems:
▪ Too many software patents and business method patents are issued for things that were either obvious or already in play.
▪ Too many “companies” have found it profitable to buy up such patents and license them on threat of lawsuit.
I use “companies” in scare quotes deliberately: To my mind, there’s something unsavory about a corporation whose only product is “intellectual property” that the corporation didn’t create and doesn’t use except for lawsuits and licenses.
The August 2008 PC Magazine has a half-page commentary on the tech industry’s calls for Congress to reform patents. It includes a telling statement from a Cisco spokesperson: From 2005 to 2007, of 30 patent lawsuits Cisco battled in court, only one was brought by a company that makes anything. The rest were all “patent trolls,” to use one name for pure-IP companies.
Unfortunately, PC Magazine blows it in the first two sentences:
One tech gadget can contain several thousand components, all of which must have individual patents. Tech companies count on the U.S. Patent and Trademark Office (PTO) to protect their products.
But it’s not just products, and maybe not primarily products; many of the patents involved are for software or business methods. And it’s most certainly not the case that every component in a computer must have its own patents. Many don’t have patents at all—e.g., the patents on most screws ran out a long time ago.
I’m two years late getting to this one—Pew Internet & American Life’s September 24, 2006 report, The Future of the Internet II. (You can reach it from www.pewinternet.org; it’s a 104-page PDF.) Why? Several reasons:
▪ It’s huge—104 pages of relatively small type. I didn’t make time to prepare a coherent commentary.
▪ I increasingly find that futurism works best in My Back Pages—and this is 14-year-out futurism (predictions for 2020 in 2006), safely removed from real-world consequences. Not that I’ve ever seen negative consequences for being consistently wrong about short-term projections! It doesn’t seem to interfere with the big-ticket speeches and being treated as gurus. Once you’re a guru, you’re always a guru.
▪ Did I mention the sheer size of the beast?
▪ Early on, this statement appears: “The Pew Internet & American Life Project and Elon University do not advocate policy outcomes related to the internet.” I’m sorry, but given the wording of the scenarios, the groups invited to respond and Pew’s objectionable naming in other reports, I no longer accept that neutrality claim at face value. Would that it were true, but Pew comes off as an advocate.
I won’t attempt a coherent overall commentary. I will note that the survey involved leading questions—e.g., suggesting “Luddites will commit acts of violence and terror” (with that lovely word “Luddite,” presumably conveying Pew’s meaning of “not as committed to technology as we think they should be”). It was a survey of “technology thinkers and stakeholders”—550 “select internet leaders” and other members of the Internet Society, Association for Computing Machinery, World Wide Web Consortium, Working Group on Internet Governance, Institute of Electrical and Electronics Engineers, Association of Internet Researchers and Internet2. We’re informed that the original set of 550 includes “both stakeholders and skeptics”—but I’m guessing there aren’t a whole bunch of skeptics among the “internet leaders,” and it’s fair to assume the membership groups involved tend much more toward stakeholder than skeptic.
I’m only a little surprised to see 58% of the 742 respondents agree that “Some Luddites/Refuseniks will commit terror acts”—and that these “refuseniks” (another wonderful Pew value-neutral term) will “self-segregate from ‘modern society.’” Here’s a perfect leading suggestion: “Transparency builds a better world, even at the expense of privacy.” Oddly, fewer people agreed than disagreed—and nobody had the third option: “We can improve functional transparency without giving up privacy.” How’s this one for a loaded scenario—remember, a scenario as offered by Pew, not put forth by the respondents?
Virtual reality is a drain for some: By the year 2020, virtual reality on the internet will come to allow more productivity from most people in technology-savvy communities than working in the “real world.” But the attractive nature of virtual-reality worlds will also lead to serious addiction problems for many, as we lose people to alternate realities.
Let’s say you’re sure some people will spend way too much time in virtual reality—but you don’t believe most people will be “more productive” in virtual reality than in the real world? Do you say Yes or No? In this case, 56% agreed—but I’m not sure just what they were agreeing to. And 52% agreed that by 2020, current national boundaries would “completely blur” as they become replaced by “city-states, corporation-based cultural groupings and/or other geographically diverse and reconfigured human organizations tied together by global networks.” That one stuns me: More than half of these supposedly knowledgeable people believe that nations will become irrelevant by 2020? Really? We’re doing so well with the mixes of cultures and ethnic groups in Eastern Europe and Africa and elsewhere…
Maybe it’s the same level of digital utopianism that results in 56% agreeing that, 12 years from now, “mobile wireless communications will be available to anyone anywhere on the globe at an extremely low cost.” “Extremely low cost” to whom? If 56% of those surveyed believe every person in Africa, Asia and the Middle East will be able to afford usable mobile wireless communications by 2020 (which, presumptively, means they have enough to eat, clothes to wear, access to medical care and shelter from the storm—unless you believe mobile wireless is more important than food, health and shelter)—well, wouldn’t it be loverly? But even with the Gates Foundation’s best efforts, I just don’t see how it could happen.
On the flip side, I’m a little surprised that 42% agreed with a scenario that, by 2020, intelligent agents and distributed control “will cut direct human input so completely out of some key activities…that technology beyond our control will generate dangers and dependencies that will not be recognized until it is impossible to reverse them.” Really? 42%?
Those are the Big Picture scenarios. There are others, several with such odd mixes of stuff within the scenario that it’s surprising there were never more than 7% who didn’t respond yes/no.
Instead of trying to grok the whole thing, I thought I’d mention a few of the comments from within the report. Some I find bizarre, some realistic, some hopeful in a plausible manner, some…well, you judge.
▪ Hal Varian: “Privacy is a thing of the past. Technologically it is obsolete. However, there will be social norms and legal barriers that will dampen out the worst excesses.”
▪ Michael Dahan: “Before 2020, every newborn child in industrialized countries will be implanted with an RFID or similar chip. Ostensibly providing important personal and medical data, these may also be used for tracking and surveillance.”
▪ Douglas Rushkoff: “Real interoperability [that is, universal low-cost wireless access] will be contingent on replacing our bias for competition with one for collaboration. Until then, economics do not permit universal networking capability.”
▪ John Quarterman: “Internet resources will permit some languages to thrive by connecting scattered speakers and by making existing and new materials in those languages available.”
▪ Bob Metcalfe: “A lot of 2020 English will sound Mandarinish.” (Both of these notes relate to a scenario in which English becomes so indispensable for the internet that it displaces some languages. Only 42% agreed.)
▪ Seth Finkelstein on out-of-control autonomous technology: “This is the AI bogeyman. It’s always around 20 years away, whatever the year.”
▪ Amos Davidowitz: “The major problem will be from providers and mining software that have malignant intent.”
▪ Douglas Rushkoff: (Re out-of-control autonomous technology) “If you look at the way products are currently developed and marketed, you’d have to say we’re already there: human beings have been taken out of the equation.”
▪ Bob Metcalfe on privacy: “The trick is not to do anything you’re ashamed of.”
▪ Marc Rotenberg: “The cost of unlimited transparency will not simply be privacy. It will be autonomy, freedom, and individuality. The personal lives of prisoners are transparent. So, too, is the world of the Borg.”
▪ Barry Wellman: “The less one is powerful, the more transparent his or her life. The powerful will remain much less transparent.”
▪ Barry Wellman on “complete blurring” of national boundaries: “We still have bodies; we, states and organizations still have territorially-based interests (in the political sense of that word).”
▪ Fred Baker: “Gee, I’d love to see world peace, but I don’t believe that the internet alone will be able to accomplish it.”
▪ David Weinberger: “The world is flat, but it’s also lumpy. We cluster together.”
There’s lots more in the report. Just for fun, I noted the genders of those who chose to identify themselves, where gender was clear. I came up with 180 men, 48 women, and about 10 cases where it wasn’t clear. All things considered, I suppose 79% men/21% women could be worse…but it could be a lot better. (Of the handful of people recognizably from “our field”—librarianship and related areas—a considerably higher percentage are women.)
I’ve been mildly critical of Pew—mostly Internet and American Life, but also some of the other projects—from time to time. I’ve been very critical of the Naming Names study, whatever it might have been called, the one that explicitly demeaned those of us who aren’t online and multitasking enough to suit Pew. That wasn’t a fluke. I’ve read reports from conference after conference where Pew’s folks delight in using their oh-so-clever terminology.
Realistically, I’m wholly ineffective at giving Pew a bad time. Nobody from Pew has ever responded. As far as I know, nobody at Pew is aware that I exist. Certainly nobody from Pew has attempted to defend their terminology.
So why do I give Pew a bad time? Two reasons:
▪ Disappointment: Pew’s projects appear to be well funded and clearly employ intelligent people. I believe they could do much more good for our ability to understand ourselves and one another if they acted as investigators and observers rather than advocates—if they tried a lot harder to avoid leading questions and if they dropped the biased terminology in stating results.
▪ Irritation: Is my life going to be damaged by Pew calling me a Lackluster Veteran? Probably not. The people who didn’t hire me last year probably wouldn’t have anyway. I doubt that library groups are thinking of asking me to speak but shy away because they find out that I’m not as fond of shiny new things as they’d like their keynoters to be. And, of course, nobody required me to come out and label myself as a Lackluster Veteran. It is, nonetheless, irritating—but not quite as irritating as the second- and third-level results that I see in Pew studies, and the results of slanted questions, being presented as more valid than I believe them to be. There’s a lot wrapped up in this little paragraph, and it’s unlikely I’ll unwrap it any time soon. Let’s just say that it’s always refreshing, if odd, to see Consumer Reports reliability charts that effectively say “there are no significant differences here”—with that clearly spelled out. When you have a sample of 2,000, the first-level choices are probably reasonably meaningful (if there are few enough of them)—but once you start stating choices of smaller and smaller subsets of that 2,000 sample, the meaningfulness of the results goes down very rapidly.
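The sample-size point can be made concrete with a rough rule of thumb (my sketch, not anything from Pew’s or Consumer Reports’ actual methodology): the 95% margin of error for a yes/no proportion is roughly 1/√n in the worst case, so it balloons as you slice a 2,000-person sample into smaller and smaller subsets.

```python
# Rough illustration of why subset results lose meaning.
# Worst-case (p = 0.5) 95% margin of error for a proportion
# is about 1.96 * sqrt(0.25 / n), i.e. roughly 0.98 / sqrt(n).

import math

def margin_of_error(n: int) -> float:
    """Approximate worst-case 95% margin of error (as a fraction)
    for a simple random sample of size n."""
    return 0.98 / math.sqrt(n)

for n in (2000, 200, 50):
    print(n, f"±{margin_of_error(n):.1%}")
# Prints roughly ±2.2% for 2,000, ±6.9% for 200, ±13.9% for 50.
```

At the full sample, a few points of difference may mean something; carve out a 50-person subset and a 10-point gap is statistical noise.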
In short, I give Pew a bad time because the projects could be so much better than they are. Oh, and because people take the results so seriously…
Just keeping things up to date on Blu-ray and related developments. Some public libraries already buy and circulate these discs; more will in the future. Meanwhile, Sony’s turned out two new reasonably priced Blu-ray players that (finally) have Ethernet connections for internet content ($400 and $500), and another manufacturer is making low-end Blu-ray players that sell for $250 to $299 (and maybe less) under several brands. Home Theater, moving away from its HD DVD bias, now comments on the “pundits, bloggers, and drunks who provide color commentary for format wars” and their speculations on what would kill Blu-ray. Downloads are the obvious answer, but HD downloads don’t work very well, particularly given the low broadband rates in most of the U.S. At 1.5 Mbps (that’s megabits, not megabytes), how long would it take to download a 30GB movie (that’s gigabytes, not gigabits)—when other people are also trying to download other movies? I come up with a little over 44 hours, assuming the full 1.5 Mbps channel is uninterrupted and has no overhead.
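The download arithmetic above can be checked in a few lines (a sketch; the movie size and link speed come from the paragraph itself, and it assumes an uncontended channel with no protocol overhead):

```python
# Check the download-time figure: a 30 GB movie over a 1.5 Mbps link.
# 1 GB = 1,000 MB = 8,000 megabits (decimal prefixes, as disc and
# bandwidth marketing both use).

def download_hours(size_gb: float, link_mbps: float) -> float:
    """Hours to transfer size_gb gigabytes over link_mbps megabits/second."""
    size_megabits = size_gb * 1000 * 8
    seconds = size_megabits / link_mbps
    return seconds / 3600

print(round(download_hours(30, 1.5), 1))  # → 44.4
```

That 44.4 hours is the best case; share the line with anyone else and the number only grows.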
The June 2008 Home Theater has a three-page “In memoriam: HD DVD,” fitting given the magazine’s clear preference for that format. The writer thinks the format war should have continued and attributes HD DVD’s failure to “lousy marketing” and stressing low price over quality.
Another article in that issue looks at download possibilities, focusing on what’s available now. It took the reviewer 12 hours to download a high-def movie from iTunes on Apple TV (figure $6 to get a 24-hour viewing period) and 15 hours to download the same movie to an Xbox 360 (similar price). What about video quality? Well, if you’re not paying attention, the movies looked about the same—but if you’re serious about picture quality in a movie, “Blu-ray is the only choice.” And, of course, renting via Netflix (which offers Blu-ray for the same price as regular DVDs) is cheaper and doesn’t limit you to a 24-hour viewing period.
PC World looks at high-def options in its July 2008 issue. Among Blu-ray players, it gives the Best Buy to Philips’ $400 BDP7200/37, beating out the PlayStation 3—and a sidebar covers download options, loaded with “gotchas” and clearly not equal to Blu-ray quality.
The July 2008 Home Theater reports the wholly-unsurprising news that Universal was starting to release Blu-ray discs—and, true to form, the writer questioned whether Universal’s Blu-rays would be as good as their HD DVDs.
Marylaine Block’s August 17, 2007 ExLibris offers a nice counterpoint to my grumpy “when did creative work become worthless?” (Bibs & Blather, C&I 8:7, July 2008). You’ll find Block’s column at marylaine.com/exlibris/xlib304.html and a followup at marylaine.com/exlibris/xlib305.html.
A little of Block’s original column:
We are endlessly told that “Information wants to be free,” and we so take for granted getting our information for free online that that idea actually seems to make sense. We forget that “free to users” doesn’t mean it’s free, or even inexpensive, for the people who put it there.
Information doesn’t put itself online and pay the freight for doing so. Human beings, and the organizations they run, pay the real costs of making information free: the labor, server charges, connections, bulk mail services, and the salaries of tech gurus…
That’s why I always recommend training students to ask, of any web site, “Why are the site’s creators giving this away for free? What are they getting out of it that’s worth the costs of putting it online?”
She offers seven possible answers, including some that are entirely praiseworthy and some that may be less so. Then she considers her own free information, with the note that she spends $3,000 a year on her server and bulk mail. (I spend a lot less than that, by the way—by more than an order of magnitude. Did I mention LISHost lately?) Why is it worth it to her?
Passion for the cause is probably my strongest reason. I’m selling a viewpoint: I want libraries to survive, I believe they have to change and adapt in order for that to happen, and I point librarians to ideas and practices that will help them do that.
And since I am a librarian, giving away free information is simply what I do. When I come across a nifty web site I could have used in my days at the reference desk, I just have to share it. When I see libraries doing wonderful things that other libraries could imitate, I have to tell you about them.
I do have to survive, though, and the money I live on comes from my writing…and from the presentations I deliver to librarians’ organizations…
I have been invited to do both of those things because I built my reputation by giving my ideas away for free online. Writing is how I organize my ideas and understand what I think…
Her list of possible motives for “free” is interesting. Her own take, also.
I’ll miss ExLibris—which Block stopped writing in April 2008, although the archives are still available.
That’s the title of Clement Vincent’s “first person” piece in the Chronicle of Higher Education Careers section; you’ll find it at chronicle.com/jobs/news/2007/07/2007071601c.htm. It’s an odd story. Briefly:
▪ Vincent assembled a bibliography dedicated to a minor figure in early modern studies, some years back, as his first “postdissertation project.” He put the bibliography online.
▪ Over the years he updated the bibliography, with a slowdown when he and his wife took teaching assignments in their university’s study-abroad program.
▪ When he returned, he found that a foreign publisher had brought out a bibliography on the topic and volunteered to review the book. When he got the review copy, he was astonished that they’d found pretty much everything he’d put in his own bibliography.
▪ And then he realized something was amiss:
My eyes alighted on a strange entry on page 376 of the book. A few seconds later, my very pregnant wife, propped up with pillows on the couch in our living room, heard me shout excitedly from our study, “I’ve caught them! I’ve got ‘em! They took from my Web site!”
The entry in the volume that caught my attention identified a particular article as falling on pages “**-70.” I had listed that article on my online bibliography in the exact same way. Before writing the entry for my Web site, I had lost the first few pages of the article, so I had used two asterisks as placeholders until I could track down the article again. My idiosyncratic reference had been repeated verbatim in the published bibliography!
So he went looking for other examples—and found them. “Every error, omission, or idiosyncratic entry that appeared on my Web site also appeared in the volume.” As did, he realized, editorial material that he’d added to the bibliography, material that wasn’t from the sources. “My annotations were mistakenly taken to be part of the titles of the works and were presented as such in the volume.”
There was more to the book than his bibliography—but he felt that his work had been appropriated, pretty much in its entirety, without credit. There was no footnote or any other acknowledgment of his online bibliography, and he says he would have been pleased with such a footnote.
I finally wrote a letter to the authors and the publisher asserting the dependency of the book on my Web site and appended a 17-page table of evidence. I requested that the publisher republish the book, or a portion of it, with credit to me as a co-editor. I sent the letter by e-mail message as well as by overseas mail and then waited for a response, half worrying that I would be totally ignored by all parties.
Within two days, however, I received an e-mail message from the publishing house inviting discussion regarding two legal issues. The publisher questioned, first, whether a copyright could be asserted for a Web site; and second, whether a bibliography as such could be copyrighted since, presumably, all bibliographies are compilations of previous bibliographies. The message closed with the promise to contact the authors to hear their responses to my letter…
Two weeks later, I received a second e-mail message from the publisher. One of the authors of the volume had “confirmed his regret for what has happened” and noted that a rush in correcting the proofs had “caused the omission” of any reference to my work. I found the author’s explanation to be diplomatic at best, but I was gratified at the admission. The publisher followed with an offer to reprint the last portion of the book with my name stated as co-editor…
What have I learned from the experience? My thoughts on l’affaire bibliographique are varied, and I am left with two unresolved worries.
First, I am not sure I can call my experiment in open-access publishing a success. I have been thinking about starting a bibliography on another topic. Should I also put it online? I don’t know.
Second, there is the issue of the status of the reprint with my name on it. Will it ever be seen by anyone other than my wife, my son, and me? The reprint is not cataloged in any library, and, as far as I can tell, the publishing house isn’t selling it… Sure, I’ll be going up for tenure in about a year and a half, so I can include it in my tenure packet, but in the larger scholarly community, my name will probably never be associated with the print version of the bibliography.
Vincent (a pseudonym) wasn’t asking for money. He was asking for attribution, the most fundamental form of respect for someone else’s work. Use without attribution is, in essence, plagiarism. Copyright really shouldn’t be the issue here; scholarly integrity should be. And yet, when it came time to write the review, Vincent chose to ignore the whole affair, “listed its strengths and weaknesses, and predicted that the volume would become the standard reference work in the field.”
Sad, in a way, but predictable: “The Sony Trinitron is no more.” The patents expired some time back—after all, the first Trinitron appeared in 1968—and Sony stopped producing Trinitrons in 2006 for Japan, in 2007 for the U.S., and now it’s dropped them entirely. Not surprising—these days, what few CRT-based TVs are built are low-end or rear-projection—but still, it’s the end of an era. The Trinitron was by far the most accurate and best-looking TV for decades. My first TV was one of those little 13” Trinitrons, and our current TV is an 11-year-old 32” Trinitron. It’s still got such a great picture that we’re late adopters for HDTV…
▪ I’m usually none too fond of the endless “tips & tricks” articles in PC magazines. They seem like a cheap way to fill space, you’re probably not going to clip them for later reference and most of the tips are things you won’t use often enough to remember. That said, “501 tips for better computing” in the June 2008 PC Magazine may be the best of breed. I was surprised at the range of suggestions, including things I really don’t think you’d stumble upon yourself (e.g., Vista will generate a well-organized diagnostic report—and you can improve the chances of making legacy software work with some right-click settings).
▪ In May 2007, John Miedema considered new technology and how it might change our physical relationship with information. I’d point you to the post, but it seems to have gone missing. He considers the extent to which digital is better for finding information and books (etc.) are better for sustained reading—and wonders whether new technologies could bridge that gap. But he also notes the desire for fixity: For capturing information statically. Does displaying information in a static manner play a role here as well? Maybe so: Maybe, even for those who adopt and love ebook readers, there will still be a place for those books on the shelves. Just as TV and the endless mixable images on the internet haven’t destroyed the markets for fine art and printed photographs.
▪ Also in May 2007, Nicholas Carr considered a move by YouTube to split contributors into two groups: superstars who get paid for their videos…and everybody else. The “select group of content creators” get promoted and get help “monetizing” their content. As Carr says, “so much for the myth of the social collective.” He recounts a bet with Yochai Benkler, who argues that social media will bring “a quite basic transformation in the world around us” away from paid, professional labor—where Carr believes that social media has avoided pricing only because there wasn’t yet a market. “We weren’t yet able to assign a value—in monetary terms—to what these workers were doing; we weren’t even able to draw distinctions between what they were contributing. We couldn’t see the talent for the crowd. Now, though, the amateurs are being sorted according to their individual skills, calculations as to the monetary value of those skills are starting to be made, and a market appears to be taking shape.” Is this happening? Certainly in the blogging field, although it’s not clear that talent has much to do with success… And it’s certainly the case that YouTube does now have two castes, the paid YouTube Partners and everybody else. The basic question: Is Wikipedia—where, essentially, all effort is unpaid—an exception or a case study? As a writer, my inclination is to believe it’s an exception. As a blogger…
▪ Laura Crossett at lis.dom (www.newrambler.net/lisdom/) raised a question in an August 15, 2007 post about attempts to label information, to identify sources as authoritative. She notes traditional labels—“this is fair trade coffee,” “this won the National Book Critics Circle Award”—and that there’s really no equivalent label for information:
An algorithm might help you trace an IP address and learn the probable identity of a contributor to a wiki, but you’ll still need to know something about who that person or entity is and what their biases are before you can know whether their statements are trustworthy. I won’t even get into the profound political implications of slapping an “authoritative” label on information, as I trust you’ve all read Orwell and school history textbooks and so on. But there are days when I think that’s what Google is trying to do–not organize the world’s information and make it universally accessible and useful, but organize and filter and, in doing so, suggest an authority to those first ten search results that they may or may not possess. It’s almost as if the purpose of organizing all that information is to prohibit critical thinking, not to promote it.
It’s an interesting and somewhat scary notion, particularly given the quality of many Google (and other search engine) results for anything other than proper names. Add that to recent studies suggesting that availability of more sources online may be narrowing actual reading and citation, and you get some uncomfortable thoughts…
Cites & Insights is sponsored by YBP Library Services, http://www.ybp.com.
Opinions herein may not represent those of PALINET or YBP Library Services.
Comments should be sent to email@example.com. Cites & Insights: Crawford at Large is copyright © 2008 by Walt Crawford: Some rights reserved.
All original material in this work is licensed under the Creative Commons Attribution-NonCommercial License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/1.0 or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.