04 May 2007

Outsourcing Mundane Thinking

I just caught the cover of the 28 April edition of The Economist. The headline is “When everything connects” and the cover sports various things giving off information with the assistance of the chips they have embedded in them. As the headline suggests, everything has a chip – the dog, the car, a can of soup (or beans?), the ground, everything.

I haven’t read the article. I will, but something came to mind. The important question, I think, is not what will things look like “when everything connects,” but rather do we want everything to connect? The following are all examples taken from the cover mentioned. Do we really want a can of soup in the grocery store to announce to us when it is half-price? Do we really want a computer chip in the dog to tell us when to take it outside? Do we really want a chip inside our bodies letting us know when our blood pressure is too high?

There seems to be a willingness to use technology to do anything that a human can do. I call this the gratuitous use of technology. As the examples above indicate, data-passing technology and wireless connectivity can theoretically be taken as far as the mind wants to go. Why learn how to take someone’s blood pressure if there’s a chip that can be implanted which provides constant blood pressure monitoring? Why pay attention to the dog if I can get an IM, an SMS or an email letting me know that the dog wants to go outside?

One plausible result of all this gratuitous technology might be the dampening of our collective ability to pay attention to the world around us without prompting. It’s imaginable that in a world of “complete connectivity” people would become so accustomed to being prompted with actionable information that they lose, at least to some extent, the ability to discern when to act on something without being prompted. Taken further, the lack of practice anticipating needs (like the dog wanting to go out) or reacting without prompts would necessarily lessen thinking capabilities.

I really like technology, which must be obvious as I’m writing in a format developed from and for our wired world. But there should be at least some thought given to the value added and the ethics involved in technology and its applications. And these go beyond the obvious debates about implanting chips in people, especially with regard to privacy. I think the more important questions lie in how we do our thinking, or what facility we task to do our thinking. Should we task ourselves to think for ourselves, even about the most mundane subjects, or should we become increasingly dependent on machines to do our mundane thinking for us?

I’d prefer to use my own nugget for as much thinking as I can handle. Studies of Alzheimer’s patients have shown, after all, that exercising the brain is important. The brain is like a muscle, and I’d rather keep mine off of “performance enhancing technologies,” for lack of a better term. And anyway, there’s always the obvious point that machines, even the most reliable of machines, will break down at some point. If I rely on a machine to do my thinking for me and that machine breaks down, I’m lost. If I rely on myself to do my own thinking and I break down, well, it won’t really make much difference to me then, will it?


Anonymous said...

In the final sentence, "If I reply on myself to do my own thinking and I break down, well, it won’t really make much difference to me then, will it?" you wrote the word "reply", but I think you mean rely. Just thought you might like to know.

Bob M. said...

Thanks for that...that'll teach me to only proofread once!