Cites & Insights: Crawford at Large
ISSN 1534-0937
Libraries · Policy · Technology · Media


Selection from Cites & Insights 12, Number 2: March 2012


The Middle

Not Quite Dead Yet

The headline in Steve Fox’s “TechLog” in the October 2011 PC World is actually “Desktop Software: Not Dead Yet” and the callout is “Cloud-based applications may be receiving all the attention, but we still can’t live without locally installed software.”

Setting aside “we” as an exaggeration—well, maybe not for PC World readers—and “can’t live without” as perhaps slightly exaggerated, it’s still a good editorial. Not that PC World doesn’t care about cloud apps; the same issue has an article on cloud-based office suites. But I’m inclined to agree that “if convenience, utility, and performance—as opposed to glitz—are your criteria, local software is hard to beat.”

His six reasons are access (some of us aren’t always online), self-determination (it’s easier to personalize desktop software—and it’s typically not possible to postpone updates for online software), trust, versatility, suitability to the task and speed. The paragraph on each is worth reading; I’ll just quote one that bothers me a lot when it comes to overreliance on the cloud:

Trust: Web apps know a lot about you. They might even hang on to your data. You may trust them, but what happens if they go out of business or are acquired by someone less scrupulous?

If you’ve never had an online application or service disappear from under you, you’re lucky. In any case, I agree: “let’s hold off on the obituaries for client-side software. There’s still plenty of life left in those old bones.” As can be said for most things for which deathwatches run rampant.

But Sometimes…

That same editorial column in the December 2011 PC World points to the annual “100 best products” feature—and discusses “tech losers.” As Steve Fox notes, some things that technically ended in 2011 were effectively gone years earlier (yes, AltaVista is officially dead now) and “it’s hard to get too choked up over their official expiration.” But there are cases where genuine regret may be appropriate: “good products lost in the ferocious market of 2011; initiatives that became too expensive to continue funding; even well-engineered gear that never fully caught on with the public.” Here’s his list of seven “tech goners that we at PC World are truly going to miss” with my own comments.

·         Flip camcorders. Pure Digital pioneered the field. He blames the shutdown on the sale to Cisco and “the rise of video-capable smartphones”; that might be true, or it might be that Cisco just wasn’t very good at retail mass marketing.

·         Verizon’s unlimited-data plan. There are still no-limit plans, but yes, at $10 per gigabyte of overage…well, you’d better really love that movie you’re watching on the small screen.

·         The white MacBook. I don’t get this one at all; Fox mostly seems to hate the possibility that Apple will phase out white products.

·         HP WebOS. Did it ever really gain acceptance?

·         Symbian. Remember Symbian? Nokia used it (and probably still does on some phones), so Nokia’s move to Windows Phone killed it.

·         Zune HD. Well…I appreciate Fox’s “two reactions,” with a “(few) faithful who deemed the Zune superior to the iPod” upset and most people being surprised that Microsoft was still producing it. But, y’know, Steve, there were and are other excellent portable players, especially for those of us not in the iCamp. SanDisk still produces excellent, well-priced players under the Sansa name, including the Clip+ and the Fuze+, although it’s discontinued the neat little Express, the overgrown flash drive I used to use. (I’m delighted with my 8GB Fuze—and since it’s from SanDisk, I can always expand it with a microSDHC card from them or some other actual flash-memory manufacturer.)

Or is the Desktop Dead After All?

The editorial that began this little essay appeared in the October 2011 PC World—but Melissa J. Perenson’s “Let the Tablet Revolution Begin” in the April 2011 issue is one of those single-minded, deathwatch-oriented pieces that admit no doubt: Steve Fox may be the magazine’s editor, but since we’re entering the “post-PC era,” desktop software must be dead.

Perenson says “the tablet is fast becoming today’s PC”—not tomorrow’s, not “for some,” but it’s pretty much a done deal. The first sentence: “The tablet computer will undoubtedly revolutionize computing, and 2011 may be year one of this uprising.” (Emphasis added.) She uses that pat phrase “post-PC era” three times in a short story. Why are we on a “march to a post-PC era”? Because combined notebook and desktop sales are falling rapidly? Nope; not true. Because tablets now outsell, say, netbooks, much less notebooks and desktops? Nope; not true. It’s true because it’s true. The facts have nothing to do with it.

Smarter, Dumber or Both?

Remember the controversy in 2010 over whether the Internet was making us dumber (Nick Carr’s route to bestsellerdom) or smarter (Clay Shirky’s ongoing notion and, ahem, route to bestsellerdom)? In a June 6, 2010 piece at GigaOm, Mathew Ingram discusses the controversy—although his conclusion is foreshadowed by the title of the piece: “Is the Internet Making Us Smarter or Dumber? Yes.”

Ingram says the reader might find something worthwhile in both viewpoints and both of them are right. Since I’ve always found both perspectives singularly dumb and since my favorite answer to multiple-choice questions is “yes,” I’m predisposed to appreciate Ingram’s perspective here. His brief summary of each argument is interesting. Specifically, Shirky’s apparent overall thesis (undermined by his bad statistics—e.g., maybe watching less TV would provide a huge “cognitive surplus,” but while Americans may be watching less network TV, we—and the rest of the world—are actually watching more TV overall, most of it even stupider than network TV) isn’t that the Internet makes you or me or Carr or Ingram smarter, but that it makes society smarter, whatever that might mean. I think we can look at the current electoral debates for some indication of the quality of that particular claim. If this is smarter, I shudder to think what dumber might be.

I have been reading Carr’s blog, and one of his underlying claims is that the Internet makes us “less interesting” whether or not it makes us dumber. I believe that’s true for Carr: His blog has become steadily less interesting over time. He thinks it’s because we don’t contemplate as often. I think it’s pretty clear that Nick Carr is becoming more distracted by ephemeral things; I’m less convinced that Carr is the Universe. But I’ll quote a key paragraph in Ingram’s essay:

Anyone who has spent much time on the Internet—especially using tools such as Twitter or any other social media outlet—can probably sympathize with Carr’s comments about how he has felt himself becoming more distracted by ephemeral things, more stressed, less deep. And the idea that multitasking is inherently impossible is also an attractive one. But are these things making us dumber, or are they simply challenging us to become smarter in new ways? I would argue they are doing both. To the extent that we want to use them to become more intelligent, they are doing so; but the very same tools can just as easily be used to become dumber and less informed, just as television can, or the telephone or any other technology, including books.

Damned if I can find much to disagree with in that paragraph.

Ingram asks readers what they think—resulting in 23 comments. On one hand, they’re a much politer group of comments than you see on too many sites (the relatively small number may have something to do with that). On the other, there’s not a lot of new insight. Two or three of them are just brief snark, or in one case the apparent inability to actually read a commentary (the most recent one, responding to “Smarter or dumber?—I think the answer is yes” with “That doesn’t make much sense to me”). Several make essentially the same point, that the internet is not homogeneous and can make some people smarter and other people dumber, much like the local library (an analogy explicitly used once). A couple point out that it was ever thus: That rock’n’roll was making us dumber after TV started making us dumber after radio started making us dumber after cheap books started making us dumber. One person takes a cheap shot at Carr—“Nick carr is full of shit. What he says is basically 21th century luddism.” and another takes a cheap shot at Shirky—“Shirky is a self-serving virtual boulivardier..” Both Carr and Shirky deserve better.

Closing the Digital Frontier

According to Michael Hirschorn’s article of that name in the July/August 2010 Atlantic Magazine, the “era of the Web browser’s dominance is coming to a close.” Why? Because “things are changing all over again.”

The shift of the digital frontier from the Web, where the browser ruled supreme, to the smart phone, where the app and the pricing plan now hold sway, signals a radical shift from openness to a degree of closed-ness that would have been remarkable even before 1995. In the U.S., there are only three major cell-phone networks, a handful of smart-phone makers, and just one Apple, a company that has spent the entire Internet era fighting the idea of open (as anyone who has tried to move legally purchased digital downloads among devices can attest). As far back as the ’80s, when Apple launched the desktop-publishing revolution, the company has always made the case that the bourgeois comforts of an artfully constructed end-to-end solution, despite its limits, were superior to the freedom and danger of the digital badlands.

So we have one of those “shifty” articles—where we all move from one paradigm to another, with no room for people who use smartphones, apps and iPads but also notebooks and browsers.

But as I read it, this doesn’t seem to be as much about the web in general as it is about traditional media and their relationship to the web. Even there, I think the thesis is overstated—and with an odd countergenerational overtone: “for under-30s whelped on free content, the prospect of paying hundreds or thousands of dollars yearly for print, audio, and video (on expensive new devices that require paying AT&T $30 a month) is not going to be an easy sell.” But, Hirschorn says, that won’t stop “the rush to apps” because, especially with Apple as semi-benevolent overlord, “there’s too much potential upside.”

I find the article bemusing. We learn that Twitter barely cares about, well, Twitter’s own website—that the smartphone version is more fully featured. It’s clearly an “or” situation: Apps can only rise at the expense of the browser. The grand finale? Harking back to the American frontier, Hirschorn concludes:

Now, instead of farmers versus ranchers, we have Apple versus Google. In retrospect, for all the talk of an unencumbered sphere, of a unified planetary soul, the colonization and exploitation of the Web was a foregone conclusion. The only question now is who will own it.

As Sue Kamm has said in another context, “In the words of the immortal Nero Wolfe, ‘Pfui.’” It doesn’t help to read the byline: Hirschorn runs a TV production company. I suspect, particularly based on rereading the article, that he views the world in media terms: There are producers and consumers, and that’s just the way it is.

There are relatively few comments from the past year, the first of which rushes to Apple’s defense—followed by one that posits that, you know, people can and probably will use both “walled gardens” and the open web. A few items down, we get a reasonably sound comment that begins with this subtle paragraph: “This is absolute rubbish.”

I’ll quote Dale Dietrich’s comment in full (typos and all—and since Dietrich was probably typing on a virtual keyboard, an occasional typo’s forgivable), as I think it speaks to the truth if you’re dealing with something more than corporate media:

The app does NOT diminish the importance of the browser. The app merely extends the web to more devices that it was hitherto inaccessible to. The App, as first popularized on the iPhone, wrested contol of what can be done on mobile devices from big telco to the individual. Like the browser-based web did before it, the app gave control to the end user. The author would do well to consider that all modern smart phones include browsers that are heavily used both independenty by users and by mobile apps that frequently embed the browser within the app. Case in point, I am viewing and responding to this silly article within the Safari browser that is embedded within my iPad's Twitterific app. Hell, Twitter-based apps INCREASE my viewing of browser-based content by curating the web for me by the trusted folks I follow.

And, a bit later, this from David McGavock:

All of this assumes that the people who are participating in the read-write-create web will walk away and let apps dominate all their interactions. This dichotomy of apps vs. browser seems false to me in light of the fact that both have their strengths and weaknesses. This entire article assumes that the billions of people that are creating their own digital footprints will give it up for paid service. There is an explosion of legal sharing going on here. Are we all going to pack it up and go home because of the apps we use. I think not.

Then there’s a strange comment from “John_LeB” who apparently is aware of something I’m not:

It is true that some information remains free on the Web, but much research-based scholarship definitely does not. With on-line fee-based jobbers such as Taylor & Francis, Elsevier, Blackwell, Springer, etc., research that used to be freely distributed on the Web now carries a subscription fee. All well and good, perhaps; academic researchers are entitled to compensation for their scholarly production—but wait! Access fees rarely trickle down to their producing authors. Their reward lies in the "points" they can embed in their CVs for tenure or promotion. The jobbers are running free with the pecuniary revenue. One unfortunate spin-off is that access to research is foreclosed where it's needed the most, in the developing world where the contemporary price of a journal article can represent a week's worth of food. (Food for the stomach, that is.)

Ah, the good old days when research articles were always freely distributed on the web, back before those young upstarts like Elsevier grabbed it all… That’s the complete comment. The writer’s probably as ignorant of open access as he is of the history of web access to research articles.

Mike Masnick does a pretty fair fisking of Hirschorn’s article in “Another Journalist Seduced By App Madness Predicts The End of the Web,” posted July 1, 2010 at techdirt. I won’t bother to excerpt his commentary: It’s free and you can go read it yourself, unless you’re reading this on a smartphone that lacks any form of browser (a combination that seems somewhere between unlikely and impossible). Of course, if your only access to e-stuff is through such a smartphone or some truly locked down tablet, then you’re not reading this anyway, are you? (Oddly, in comments on Masnick’s piece, Hirschorn objects that his piece is “largely an attack on Apple’s efforts to curtail that freedom…”—which, if true, means that Hirschorn is an inarticulate writer, since I certainly didn’t read it that way. Even in this response, Hirschorn’s an Only One Future man: “Also clearly and obviously, the rise of mobile computing will result in less non-mobile-computing and the center of power will move from the browser to the smartphone/ipad experience.” Right. And neither smartphones nor tablets have browsers. Now, if Apple had a browser—oh, let’s give it some fanciful name like Safari—that would really change the inevitable future. But that’s as silly as it would be for Amazon to add a browser, say one with an even sillier name like Silk, to its walled-garden Kindle Fire.)

If you do read Masnick’s piece, scroll through at least some of the comments. Hirschorn starts doing a complex “that’s not what I was intending/that’s not what I really wrote” dance that leads me more and more to believe that he really is inarticulate or incoherent. As you probably already know, I’m definitely not one of those who regard traditional journalism and media as irrelevant (as some commenters do)—but neither do I regard them as the whole of the landscape.

Why mention this now, almost two years later? Because we haven’t gone All Apps, All The Time. Because traditional real-world media continues to do better than a lot of digital junkies realize (for example, did’ja know that there are more than 300 million print magazine subscriptions in the US, and that 100 million people in the US still read print newspapers? hmm?). Because the world continues to evolve mostly in “and not or” ways, with more choices complementing one another rather than One Triumphant Paradigm…and because this sort of “journalism” continues to be prevalent.

“Good Implementation of a Bad Idea”

Here’s one that would have been in “Interesting & Peculiar Products” if that section still existed: the Acer Iconia 6120, reviewed in the July 2011 PC World. A $1,199 14" laptop with one big difference: Instead of a physical keyboard on the bottom half, it has a second 14" multitouch capacitive screen.

As a laptop without that hot feature, it’s simply overpriced for a first-generation Core i5-480M CPU, integrated graphics, 4GB of RAM and a 640GB 5,400RPM (notebook speed) hard disk. Those specifications for a 14" notebook sound like a $500 unit to me—or at least Gateway (which is Acer with a different label) would sell such a notebook for around $450-$550. (I’m writing this in February 2012; maybe in July 2011 it would have been more like $650-$700.) So figure you’re paying at least $400 to $600 for that second screen and multitouch capability. That’s not quite right: The unit also lacks an optical drive, an omission that’s forgivable only in a thin-and-light notebook—and this one is neither thin nor light.

The reviewer found that typing on the screen was slower and less accurate than on a physical keyboard. Otherwise, well, the touch-sensitive control hub and applications worked as advertised, but “using them on a lower touchscreen doesn’t save much time or effort.” Additionally, the second touchscreen makes the unit bulky (1.4" thick, for a unit with no optical drive) and heavy (6lb., again with no optical drive)—and makes for short battery life. “It’s good to see Acer trying designs as aesthetically pleasing as the Iconia’s, but as a practical matter it simply doesn’t make sense to replace the lower deck of a laptop with a touchscreen.”

An Interesting “Great Gifts” List

The December 2011 Sound and Vision devoted 10.5 editorial pages to “Expert’s Guide to Great Gifts 2011”—which is interesting partly because the 84-page issue only has 39 editorial pages (some of which are full-page pictures), partly because none of these gifts appear to have been tested or formally reviewed. There are no ratings as such, just informal, subjective commentary. (If you think 10.5 pages out of 39 is a lot, that’s followed by another 3.5 pages of DVD/Blu-ray/CD box sets; it’s effectively 14 pages, or more than a third of the issue.)

So what’s sure-fire? There’s a $499 iPod dock from Monitor Audio; some headphones ($300 professional ‘phones, and $120 over-the-ear noise-canceling phones from Audio-Technica rather than Bose); and the strange $60 SRS Labs iWow 3D, a plug-in for iStuff that claims to give expansive sound quality.

One of the most interesting, in a slightly strange way, is the $2,490 Magnepan Mini Maggie Desktop Speaker System—for people who want really good sound on really big desks, since the two desktop speakers are each 14" tall and 9.5" wide (and 1.5" thick). (The subwoofer’s 22.5" x 19.25", and since it’s only 1.5" thick, I trust it has one heck of a deep and stable base if there’s a cat in the house.) You’d need a pretty good receiver as well: These aren’t self-powered speakers and they’re relatively insensitive. (If you’re wondering: You can’t hang those big speakers on the wall; Maggies require a fair amount of space behind the speaker in order to function properly.)

I’m certainly not making fun of all these. The Audio-Technica headphones look like winners, for example, as does the $249 Blue Microphones Yeti Pro USB Microphone for people wanting high-quality stereo recording in a simple home environment. The point of the $349 Nuforce Icon-2 Integrated Desktop Amplifier seems to be compactness, and the $300 NAD DAC 1 Wireless USB DAC—well, if you understand that product name, you may have an idea why you might want it.

I find a touch of silly season in the $149 Gunnar Premium 3D Glasses, which are intended for use in movie theaters or with passive 3DTV, not the active 3D sets you’ve probably heard about (with the expensive glasses). They look like regular glasses; I suspect they’ll work over existing regular glasses even worse than others, but hey… I’m also a little uncertain about the $300 iHome iW1 AirPlay wireless speaker system—I mean, just how much sound are you going to get out of two 3" speakers powered by a lithium-ion battery? The writeup says 13 watts per side, and if that’s anywhere near right, you can plan on recharging that battery a lot.

There are a couple more, culminating in the $500 Vivitek QUMI LED pocket projector, a teeny-tiny “HD” projector (it doesn’t project full 1080P HD). It’s not a true pocket projector—it’s AC powered and a little too big—and it’s pretty dim if you actually want a large picture, and the speakers provide “audible, but just barely” sound. It sure does look like a neat toy, though.

The whole effort strikes me as odd, but I’m not in charge of putting out a mass-circulation magazine with as little editorial effort as possible. If somebody wanted to buy me anything from this list, I’d probably take the Audio-Technica ‘phones. (My computer desk is enormous, but there is no way I could situate that speaker system so it would work properly. Nor, for that matter, would I want to.)

In Praise of Libraries

Once upon a time (in October 2007), futurist Richard Watson—the only futurist whose blog (What’s Next: Top Trends) I follow—did an extinction time line. I thought it was massively silly for the reasons most deathwatches are silly, but I don’t remember commenting on it. (Finding it now, I see in the post leading to the PDF timeline itself that he calls it “in part a bit of fun” and clarifies that “extinction” doesn’t mean extinction; it means relative rarity. Thus, by Watson’s standards, Macs have been extinct for a very long time and LPs, despite increasing sales, continue to be extinct. The timeline actually says “existence insignificant beyond this date.”)

One item struck me as particularly outrageous: He included libraries with an extinction date of 2019, a couple of years after retirement and a year before copyright. As to overall veracity, he has landline telephones extinct by 2011 and newspaper delivery extinct by 2012; there are at least 100 million Americans who would disagree on both counts. Worse, libraries didn’t even get boldface: the entry was one of the minor notes, apparently not worth much thought.

In late 2011—apparently earlier in the year but repeated on December 28—he posted an essay with the title above, taking back the prediction for public libraries and librarians. Portions:

Some time ago I created an extinction timeline, because I believe that the future is as much about things we’re familiar with disappearing as it is about new things being invented. And, of course, I put libraries on the extinction timeline because, in an age of e-books and Google who needs them.

Big mistake. Especially when one day you make a presentation to a room full of librarians and show them the extinction timeline. I got roughly the same reaction as I got from a Belgian after he noticed that I’d put his country down as expired by 2025.

Fortunately most librarians have a sense of humour, as well as keen eyesight, so I ended up developing some scenarios for the future of public libraries and I now repent. I got it totally wrong. Probably. [Emphasis added.]

I emphasized that sentence—even with the qualifier—because it’s so astonishing for any futurist, even a semi-skeptical one. He got it wrong, and he’s admitting it. Sort of.

Whether or not we will want libraries in the future I cannot say, but I can categorically state we will need them, because libraries aren’t just about the books they contain. Moreover, it is a big mistake, in my view, to confuse the future of books or publishing with the future of public libraries. They are not the same thing.

I would interject here that Watson still seems to think that books, or at least print books, are largely irrelevant for the future. Given that he seems to take most of his futurism lightly, maybe that’s OK. Revisiting (and seemingly accepting) the notion that we don’t need libraries when you “can download any book in 60-seconds…or instantly search for any fact, image or utterance on Google,” he answers his own question as to “why bother with a dusty local library?” [What makes local libraries “dusty”? Well, he’s still a futurist…]

I’d say the answer to this is that public libraries are important because of a word that’s been largely ignored or forgotten and that word is Public. Public libraries are about more than mere facts, information or ‘content’. Public libraries are places where local people and ideas come together. They are spaces, local gathering places, where people exchange knowledge, wisdom, insight and, most importantly of all, human dignity.

A good local library is not just about borrowing books or storing physical artefacts. It is where individuals become card-carrying members of a local community. They are places where people give as well as receive. Public libraries are keystones delivering the building blocks of social cohesion, especially for the very young and the very old. They are where individuals come to sit quietly and think, free from the distractions of our digital age. They are where people come to ask for help in finding things, especially themselves. And the fact that they largely do this for nothing is nothing short of a miracle.

There’s quite a bit more—this is a fairly long post—and it’s not a bad discussion. More of the good stuff before Watson starts going all “inevitable digitization” on us.

In a world cluttered with too much instant opinion we need good librarians more than ever. Not just to find a popular book, but to recommend an obscure or original one. Not only to find events but to invent them. The internet can do this too, of course, but it can’t look you in the eye and smile gently whilst it does it. And in a world that’s becoming faster, noisier, more virtual and more connected, I think we need the slowness, quietness, physical presence and disconnection that libraries provide, even if all we end up doing in one is using a free computer.

Public libraries are about access and equality. They are open to all and do not judge a book by its cover any more than they judge a readers worth by the clothes they wear. They are one of the few free public spaces that we have left and they are among the most valuable, sometimes because of the things they contain, but more usually because of what they don’t.

What libraries do contain, and should continue to contain in my view, includes mother and toddler reading groups, computer classes for seniors, language lessons for recently arrived immigrants, family history workshops and shelter for the homeless and the abused. Equally, libraries should continue to work alongside local schools, local prisons and local hospitals and provide access to a wide range of e-services, especially for people with mental or physical disabilities.

In short, if libraries cease to exist, we will have to re-invent them.

I could push at some other items in the essay, but I’m mostly astonished by “I was wrong” and by a futurist recognizing that public libraries matter—for far more than books, although I continue to say that the books will continue to matter.

For some reason, Brian Kelly of UKOLN seems intent on the doom of books in his comment:

Reading your post it strikes me that you’re not really saying your prediction was incorrect – you are simply redefining a public library as a community space. You seem to still believe that the public library as a place for borrowing books is doomed. Is this not the case? And whilst I agree that public libraries will need to change in order to respond to that new challenges of the digital age, I know that others will argue that public libraries are fundamentally about physical books, and your suggestion that libraries will be reinvented is simply saying that public libraries, in their current form, are doomed. Yes?

There’s no response. I certainly read that comment as coming from one of those “others”—that is, that Kelly is himself arguing that public libraries as such are properly doomed. There’s not a word in the post (that I could find) implying that public libraries have no future as book-lending places, only that they’re much more than that.

Caring for Your Introvert

The nice thing about The Middle as a section name is that, even more so than Trends & Quick Takes, it can be about almost anything—basically, anything that’s not about C&I or my books [The Front] or mostly snark [The Back]. I could see the possibility of C&I issues consisting of nothing but those three sections…and they might be some of the most interesting or best-read issues.

Take Jonathan Rauch’s lovely piece, “Caring for Your Introvert,” and the followup that accompanied its online posting—an article that apparently appeared originally in the March 2003 print issue of The Atlantic. I tagged the article in June 2010, not realizing it was seven years old. Nor does that much matter.

Do you know someone who needs hours alone every day? Who loves quiet conversations about feelings or ideas, and can give a dynamite presentation to a big audience, but seems awkward in groups and maladroit at small talk? Who has to be dragged to parties and then needs the rest of the day to recuperate? Who growls or scowls or grunts or winces when accosted with pleasantries by people who are just trying to be nice?

If so, do you tell this person he is "too serious," or ask if he is okay? Regard him as aloof, arrogant, rude? Redouble your efforts to draw him out?

If you answered yes to these questions, chances are that you have an introvert on your hands—and that you aren't caring for him properly.

I go on hikes on most Wednesday mornings with a great group of people, most of them even older than I am. After the hikes, a few of them go to a local brewpub for lunch and a beer. I’ve never joined them. Instead, I go home, change clothes, and go out to eat. By myself. With a science fiction magazine to read. (In fact, so far I’ve never tried the First Street Alehouse, even though it’s supposed to have the best burger in town.)

Why don’t I join them, other than preferring fresh clothes after a sweaty hike? Simple: After two to four hours, I’m pretty much socialed out. I need some time to recuperate. And, sure enough, for years I had trouble with my manager at work because I wasn’t going around chatting with other people enough, I wasn’t at enough of the casual events, I was…too serious.

My name is Walt and I’m an introvert. I’ve given some pretty good presentations. I can and will talk about most anything. I was even president of an ALA division. But I’m still an introvert.

Science has learned a good deal in recent years about the habits and requirements of introverts. It has even learned, by means of brain scans, that introverts process information differently from other people (I am not making this up). If you are behind the curve on this important matter, be reassured that you are not alone. Introverts may be common, but they are also among the most misunderstood and aggrieved groups in America, possibly the world.

[Yes, of course Rauch follows that by using the same xA “My name is Y and I’m an x” cliché I just used. Some things just seem natural.]

It’s a charming article (although portions are overstated, presumably for humor), one that I think could only have been written by an introvert. I call myself shy, but that’s only partly true (true back in dating days, true enough at most big parties): I’m not “anxious or frightened or self-excoriating in social settings”; I’m just not a hale fellow well met.

I won’t quote the paragraph starting “Are introverts misunderstood” because I don’t want to exceed fair use, but boy, do I agree with it. “Extroverts have little or no grasp of introversion. They assume that company, especially their own, is always welcome. They cannot imagine why someone would need to be alone; indeed, they often take umbrage at the suggestion.” On the other hand, I’m not willing to claim oppression. Do I believe I would have made more money, been more successful and probably dated a lot more if I’d been an extrovert? Absolutely. Do I regret being an introvert? No—and in any case, I doubt that it’s any more of a conscious choice than, say, sexual orientation. In both cases, you can fight against your nature, you can probably appear to be what you’re not—but you’ll damage yourself in the process.

The online version of the article has 626 comments as of this writing. I did not attempt to read all of them. (The discussion continues: Since the website uses Disqus, I could go to newest-first, and the most recent comment is only three days old. That’s remarkable.) Some comments from extroverts are remarkably hateful (and some have been removed from the thread), but most of what I read was reasonably coherent.

The followup is a deliberate attempt at “introversy”—controversy among introverts. Specifically, it raises the question “In looking for a mate, are introverts better off pairing up with extroverts or with fellow introverts?” As you can probably guess, my own answer is Yes. The piece is segments of email responses to the question. An interesting lot. I find it telling that one response (from an extroverted woman married to an introverted man) includes this sentence: “On the other hand, my poor husband is a classic, closet introvert.” Your poor husband? Hmmm… And, come to think of it, the next one—from “an extrovert with lots of introvert friends”—refers to introverts, or at least some of us, as “petulant.”

The internet: Everything you ever need to know

That’s a startlingly arrogant title, and I’m willing to believe that John Naughton didn’t actually choose it for this June 19, 2010 essay at The Guardian. Not that Naughton isn’t ambitious: He claims to offer the “nine key steps to understanding the most powerful tool of our age—and where it’s taking us.”

I was a little put off by the introduction, but then remembered that I live in Northern California and Naughton is writing for a British newspaper. For example, he seems to think that most “mainstream media” coverage of the internet is negative:

It may be essential for our kids' education, they concede, but it's riddled with online predators, seeking children to "groom" for abuse. Google is supposedly "making us stupid" and shattering our concentration into the bargain. It's also allegedly leading to an epidemic of plagiarism. File sharing is destroying music, online news is killing newspapers, and Amazon is killing bookshops. The network is making a mockery of legal injunctions and the web is full of lies, distortions and half-truths. Social networking fuels the growth of vindictive "flash mobs" which ambush innocent columnists such as Jan Moir. And so on.

Around here, at least, most of the “mainstream” media coverage I see related to the internet is positive and far more nuanced. But then, most folks around here treat the internet as infrastructure: by itself, the internet is neither good nor evil, nor really much of anything. (Naughton seems to find this appalling: “The internet has quietly infiltrated our lives, and yet we seem to be remarkably unreflective about it. That's not because we're short of information about the network; on the contrary, we're awash with the stuff. It's just that we don't know what it all means.”) Naughton’s arguing for a “more balanced view of the net”—which, after reading the essay, I’ll translate to “a far more net-centric and worshipful view.”

So Naughton concludes that we need “a smallish number of big ideas” to properly understand (make that appreciate; no, worship) the internet. He comes up with nine because it’s the outer limit of “seven plus or minus two” and thus a magic number.

What are the nine big ideas that will tell us “everything we ever need to know” about the internet? Without the lengthy discussions, they are: Take the long view; the web isn’t the net; disruption is a feature, not a bug; think ecology, not economics; complexity is the new reality (a discussion that would be more convincing if Naughton accepted the complexity that existing analog media and systems are likely to complement digital systems—but that’s not the complexity of which he writes); the network is now the computer; the web is changing; Huxley and Orwell are the bookends of our future; our intellectual property regime is no longer fit for purpose.

Do these “big ideas” tell you all you need to know about the internet? Not to me, not even after reading the complete discussions. I find one of them positively startling in its oversimplification, bad history and handwaves. Here’s probably the shortest discussion of the nine, the entirety of “the web is changing”:

Once upon a time, the web was merely a publication medium, in which publishers (professional or amateur) uploaded passive web pages to servers. For many people in the media business, that's still their mental model of the web. But in fact, the web has gone through at least three phases of evolution – from the original web 1.0, to the web 2.0 of "small pieces, loosely joined" (social networking, mashups, webmail, and so on) and is now heading towards some kind of web 3.0 – a global platform based on Tim Berners-Lee's idea of the 'semantic web' in which web pages will contain enough metadata about their content to enable software to make informed judgements about their relevance and function. If we are to understand the web as it is, rather than as it once was, we need more realistic mental models of it. Above all, we need to remember that it's no longer just a publication medium.

There’s so much wrong with that “in fact”—about the simplicity of the early days, about the reality of today, and about the likelihood that the semantic web will conquer all—that I don’t know where to begin. Here’s what we need to remember: the web was never one medium and it never will be.

I’ll give Naughton credit: After overpromising in the introduction, he does add a postscript:

It would be ridiculous to pretend that these nine ideas encapsulate everything that there is to be known about the net. But they do provide a framework for seeing the phenomenon "in the round", as it were, and might even serve as an antidote to the fevered extrapolation that often passes for commentary on developments in cyberspace. The sad fact is that if there is a "truth" about the internet, it's rather prosaic: to almost every big question about the network's long-term implications the only rational answer is the one famously given by Mao Zedong's foreign minister, Zhou Enlai, when asked about the significance of the French Revolution: "It's too early to say." It is.

It’s hard to argue with the last part of that paragraph. At the time, Naughton was working on a book about “the internet phenomenon”—now there’s a shocker. I would assume that book is From Gutenberg to Zuckerberg: What You Really Need to Know About the Internet. For all I know, it may be a very good book.

Falsehoods Programmers Believe About Names

This one’s just plain neat, speaking as a former systems analyst/designer/programmer. It’s by Patrick McKenzie, posted June 17, 2010 at Kalzumeus. It begins (emphasis in original):

John Graham-Cumming wrote an article today complaining about how a computer system he was working with described his last name as having invalid characters. It of course does not, because anything someone tells you is their name is—by definition—an appropriate identifier for them. John was understandably vexed about this situation, and he has every right to be, because names are central to our identities, virtually by definition.

McKenzie worked as a programmer for several years in Japan and has worked with “Big Freaking Enterprises,” and says he’s “never seen a computer system which handles names properly and doubt one exists, anywhere.” He offers 40 false assumptions about names (some of them variations of others). I’d happily quote the entire list, but, well, copyright… A few of them:

6. People’s names fit within a certain defined amount of space.

10. People’s names are written in any single character set.

11. People’s names are all mapped in Unicode code points.

14. People’s names sometimes have prefixes or suffixes, but you can safely ignore those.

15. People’s names do not contain numbers.

18. People’s names have an order to them. Picking any ordering scheme will automatically result in consistent ordering among all systems, as long as both use the same ordering scheme for the same name.

19. People’s first names and last names are, by necessity, different.

31. I can safely assume that this dictionary of bad words contains no people’s names in it.

37. Two different systems containing data about the same person will use the same name for that person.

39. People whose names break my system are weird outliers.  They should have had solid, acceptable names, like 田中太郎.

40. People have names.

Go read the list. Especially if you’re a programmer who designs data entry forms. Especially if those forms “validate” names. (My systems, of course, never had problems along those lines. Never ever. And I am the King of Livermore.)
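Purely as an illustration (this sketch is mine, not from McKenzie’s article, and every name in it is hypothetical): here’s the sort of naive name “validation” that commits several of the listed falsehoods at once, rejecting apostrophes, hyphens, spaces, anything non-ASCII and anything longer than 20 characters.

import re

# A naive "validator" of the sort McKenzie warns against: it assumes a name is
# one short, ASCII-only, capitalized word -- falsehoods 6, 10 and 15 from the
# list, among others, rolled into a single regular expression.
NAIVE_NAME = re.compile(r"^[A-Z][a-z]{1,19}$")

def naive_is_valid(name: str) -> bool:
    return bool(NAIVE_NAME.match(name))

# Perfectly legitimate names the check rejects:
for name in ["O'Brien", "Graham-Cumming", "van der Berg", "田中太郎"]:
    print(name, naive_is_valid(name))   # prints False for every one

# About the only defensible "validation": take the string as given, trimming
# whitespace and capping length purely for storage reasons.
def store_name(name: str, max_len: int = 1000) -> str:
    return name.strip()[:max_len]

That second function is the whole moral in two lines: the more a form “knows” about names, the more real people it locks out.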

Bits & Pieces

·         The August 2011 PC World includes a full review of a retail Chromebook—that is, one that’s for sale, not handed out free by Google. It’s a Series 5 from Samsung and costs $499 with Wi-Fi and 3G support ($429 for Wi-Fi only). It comes with a 12.1" screen, runs an Intel Atom CPU, has a 16GB solid-state drive and 2GB RAM. There is a Webcam. It’s fast to boot—but it’s as slow as a netbook and considerably heavier (3.3lb.) and more expensive. You’re limited to a Chrome browser for all your applications (there’s a media player and file browser, but the review describes them as “so badly designed and feature-poor that they are practically unusable”). Oh, and there’s just the one window—after all, you’re always in Chrome. Period. The review is negative enough that the 2.5-star rating seems generous.

·         Do cleanup utilities actually speed up your PC? That’s the question asked in an August 2011 PC World article, using four Windows optimizers (CCleaner, System Mechanic, System Speedup and WinOptimizer 7) on “cluttered old PCs”—ones that had been in use for years without any cleanup. The overall answer: No. None of the utilities made much difference—and some utilities resulted in slower response after being run. On an old Dell notebook running Windows XP Professional with 1GB RAM, every optimizer seemed to do more harm than good. They did, by and large, speed up boot time—but not by much. Conclusion? “You might feel better after running a utility—but judging from our testing, your PC’s overall performance is unlikely to change much.”

·         The April 2011 PC World looks at three “wireless chargers” in a comparison that is much less fervent than previous stuff I’ve seen about electricity through the air, although even this one wholly fails to deal with actual efficiency issues. The writer calls this—charging mobile devices wirelessly using power mats—“cool and convenient” but says “the technology still has some maturing to do.” I suppose it’s convenient to add a sleeve or something to each of your mobile devices, plug in a new flat charging mat, and set the mobile devices directly on the mat (since that’s the only way reasonably efficient inductive charging can work), as opposed to, you know, plugging the devices directly into chargers. I’m not sure just how that’s true, but I don’t have dozens of mobile devices constantly requiring charging. Maybe the sleeves and other add-ons don’t get lost as readily as chargers do? (One of the systems doesn’t even use inductive charging: It’s a sheet with metal strips that contact other metal strips on adapters, “but the mat is engineered so that it’s safe to touch.”) Indeed, despite the callout saying this is “cool and convenient,” the article concludes “it’s not more convenient”—and it certainly adds a new set of costs.

·         The June 13, 2010 Chronicle of Higher Education has a Jeffrey R. Young essay, “The Souls of the Machine,” that’s mostly extolling Clay Shirky and his supposed Internet revolution—you know, how we’re all going to use huge quantities of excess creative energy because we don’t watch TV as much, and that creative energy combined with chaos will work wonders and disrupt industries. I marked it for some fact-based rebuttal (we’re not watching less TV as a society, just less broadcast-network TV; many of us really use TV for relaxation and wouldn’t transfer that energy to creative pursuits; if social networks are the prime example of “creativity,” I don’t find the results all that convincing…and so on). But after seeing Shirky’s way of responding to critics (he dismisses them as being wrong, asserting that his facts are the real facts, as any proper Guru would, I guess) and the tone of the discussion, I guess this falls into the “life’s too short, and oversimplifying gurus who select their ‘facts’ carefully will always win, once they have a platform” category. Heaven knows, Shirky’s still spouting his stuff, still getting huge book sales and probably fat speaking fees…and TV viewing time continues to increase. If someone wants to tell me that billions of hours watching YouTube in addition to professional video entertainment somehow count toward societally positive creativity, well…I tend to disagree.

·         I’ve written before about being wrong, and how astonishing (and refreshing) it is when a public figure, especially a guru, admits that they’ve been wrong. I flagged “Hoodoos, Hedge Funds, and Alibis: Victor Niederhoffer on Being Wrong” because of that. It’s by Kathryn Schulz and appeared June 21, 2010 at Slate. But what I think I was really tagging was “The Wrong Stuff: What it Means to Make Mistakes,” a series of discussions with various people who will admit to having been wrong. This particular example is a hedge-fund manager who was spectacularly wrong twice. Others include James Bagian (an astronaut turned patient safety expert). Once you’re at any specific discussion, you can page to previous or next discussions. You might enjoy the “exit interview” noting some dead people Schulz would have loved to interview; you may find a number of the interviews worth reading. You may find some—Chuck Colson?—incredibly self-serving. It appears that the series ends in December 2010. Worth checking out.

Cites & Insights: Crawford at Large, Volume 12, Number 2, Whole # 146, ISSN 1534-0937, a journal of libraries, policy, technology and media, is written and produced irregularly by Walt Crawford.

Comments should be sent to waltcrawford@gmail.com. Cites & Insights: Crawford at Large is copyright © 2012 by Walt Crawford: Some rights reserved.

All original material in this work is licensed under the Creative Commons Attribution-NonCommercial License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/1.0 or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.

URL: citesandinsights.info/civ12i2.pdf