So, we geocache. Not as much as we used to, but over 15 years ago my family and I began the sport and really got into it for a while. Geocaching involves using a GPS to find a hidden object (“cache”) so that you can log the find. It’s all in good fun. The organization that hosts the website also offers the chance to log “trackables”—these are objects with a unique identifier that you sometimes find in caches and you get credit for logging your find. There are no prizes involved. We started several of these “travel bugs” ourselves, years ago. If you started one you got an email when someone logged it, and you could see how far around the world your little bug had gone. For many years we’ve not heard much about any of ours and assumed them to be MIA.
Recently I started getting several email notices about a resurrected travel bug. It was as if someone had finally found a cache somewhere deep in the Sahara where it’d been hidden for a decade. Then I had an email from a fellow cacher, in German. I figured it must be serious. The message was that a Facebook page was publishing trackable numbers so that anyone could claim to have found them. One of ours was on that list. I went to the page to look. It said, “Let’s face it, it’s all about the numbers.” And they proceeded to list hundreds of numbers so that you could claim to have “found” the pieces with your posterior solidly sunk in your favorite chair. This is annoying not only because we had to pay for the trackable dogtags, but also because it’s cheating. I said as much on the page only to have my comment blocked.
How sad is it when people cheat at a game when there’s no gain? All they do is claim to have done something they haven’t, for no prize or recognition. A fun family pastime falls victim to the internet. Ironically, geocaching was really only possible because of the internet. It required a place where players could log their finds in a common database. Facebook, continuing to demonstrate its potential for misuse, allows someone to spoil it. I, along with my unknown German counterpart, reported the page to the powers that be. But since we live in a world where the powers that be don’t recognize any rules beyond inflating their own numbers, I shouldn’t be too optimistic about any results. I guess this is how Republicans play games.
I was searching for someone on the internet (surprisingly, not myself). Since this individual didn’t have much of a platform, I looked at MyLife.com. Such sites draw in the curious and you soon end up paying (I suspect) for any salacious information such as arrest or court records. In any case, what stood out is that we all presumably have a meter on the site that shows whether we’re good or bad. It’s like a Leonard Cohen song. Call me old-fashioned, but that’s what religion used to do. Some forms of Christianity (Calvinism comes to mind) tell you that you can never be good enough. Others are more lax (Episcopalians come to mind): as long as you go to mass enough and feel some guilt for misdeeds, you’ll get in. All the various groups, however, have metrics by which you’re measured, largely based on what you believe.
The odd thing—or one of the odd things—about religion is that it is now categorized as what you believe. Historically religions began as a kind of bellwether of what you do rather than what you believe. The two are related, of course. The motivation behind an action might well be good while the end result is less so. Secular justice regularly seeks to answer the question of why someone did something. Was there malice involved? Aforethought? Was it an unfortunate accident? Religion drives over this ground too. Without getting into the many shades of gray that are morality, value judgments as to the goodness or badness of an action (or a person) were traditionally the purview of religion.
The internet itself has become a kind of god. We turn to it for all kinds of answers. It’s both a Bible and encyclopedia rolled into one. When we want to know something about someone we google them. Some of us have tried to control the narrative about ourselves by making websites. (This, of course, presumes others will be interested in us.) Social media also injects us into larger arteries of traffic. People judge us by what we post or tweet. Often without ever meeting us or getting to know who we really are behind our physical walls. So this person I searched had left little to find. Scraps here and there. I didn’t believe everything I saw on MyLife. After all, not everyone wants to subject her or himself to the constant scrutiny of the connected world. Maybe it’s a religious thing.
You know what I’m talking about.
Disorienting, isn’t it?
One of my greatest bêtes noires is the email that only gives enough information to frustrate or irritate. I get them all the time, mainly from businesspeople. Look, I know you’re busy. We’re all busy. A single-sentence email that doesn’t explain anything is rude and exasperating. One of the reasons, if I might speculate, that I always received very good teaching evaluations boils down to a simple trick: good explaining assumes little on the part of the listener/reader. When I write an email, for work or for whatever life outside work is called, I explain why I’m emailing and I use common courtesies such as “Dear X,” and “Best wishes.” They take me all of seconds to type, and they make the receiver, I believe, feel human.
The other day I sent such an email and received a one-sentence response that assumed I knew a lot more about the topic than I did. It frustrated me so much that I had to write this blog post before going back to it and asking, yet again, that the sender explain himself. What was he trying to say? Who was he, even? I’d been asked to contact him by someone else. I had no idea who he was (I briefly explained who I was in my initial email). Electronic communication, IMHO, even if brief, need not be rude. If we’re all that busy maybe it’s time to step back and consider that life’s too short for generating hurt feelings and negativity. Emails without niceties are rude.
Of course, there are people you know well and that you contact frequently. I still try always to give them the courtesy of opening, body, and closing. I grew up in the generation of letter writing. One thing even businesses knew in those days was that rude behavior lost you customers and/or clients. Now in Generation Text rudely apocopated emails are standard and I have to wonder if anyone’s done a study on how much business money is wasted on the time it takes to recover from receiving a rude email. The writer may not be intending to be rude. Many of us were taught growing up that a “please,” “thank you,” or “I’m sorry,” went a long, long way in avoiding hurt feelings. Go ahead and call me a snowflake. But remember, it’s December. So I’ve just had to spend a quarter-hour of my busy day writing this rant before responding to an email that made me mad by its brevity. I’m not a texter, and I think I’m discovering why.
I’m a little suspicious of technology, as many of you no doubt know. I don’t dislike it, and I certainly use it (case in point), but I am suspicious. Upgrades provide more and more information to our unknown voyeurs and when the system shows off its new knowledge it can be scary. For example, the other day a message flashed in my upper right corner that I had a new memory. At first I was so startled by the presumption that I couldn’t click on it in time to learn what my new memory might be. The notification had my Photos logo on it, so I went there to see. Indeed, there was a new section—or at least one I hadn’t previously noticed—in my Photos app. It contained a picture with today’s date from years past.
Now I don’t mind being reminded of pleasant things, but I don’t trust the algorithms of others to generate them for me. This computer on my lap may be smart, but it’s not that smart. I know that social media, such as Facebook, have been “making memories” for years now. I doubt, however, that the faux brains we tend to think computers are have any way of knowing what we actually feel or believe. In conversations with colleagues over cognition and neurology it becomes clear that emotion is an essential element in our thinking. Algorithms may indeed be logical, but can they ever be authentically emotional? Can a machine be programmed to understand how it feels to see a sun rise, or to be embraced by a loved one, or to smell baking bread? Those who would reduce human brains to mere logic are creating monsters, not minds.
So memories are now being made by machine. In actuality they are simply generating reminders based on dates. This may have happened four or five years ago, but do I want to remember it today? Maybe yes, maybe no. It depends on how I feel. We really don’t have a firm grasp on what life is, although we recognize it when we see it. We’re even further from knowing what consciousness may be. One thing we know for sure, however, is that it involves more than what we reason out. We have hunches and intuition. There’s that fudge factor we call “instinct,” which is, after all, another way of claiming that animals and newborns can’t think. But think they can. And if my computer wants to help with memories, maybe it can tell me where I left my car keys before I throw the pants containing them into the wash again, which is a memory I don’t particularly want to relive.
Memory from a decade ago, today.
Recently I was left alone for the entirety of a Saturday. On rare days when I feel affluent, I’ll go and purchase supplies to take on the many tasks that need doing around the house—most of the books in my office are still stacked on the floor for lack of shelves. I can build them, but that takes money. Often when I have an unclaimed day I plan it out weeks in advance. Things have been busy enough of late that I didn’t even have the time to do that. All of a sudden I woke up on a November Saturday with tabula rasa in front of me. Then I realized one of the constant pressures I face: TMI. One of my nieces—the one who started this blog, actually—first introduced me to Too Much Information (TMI). I don’t get out much, you see.
Like most people who flirt with tech, I snap photos with my phone. When we go somewhere that I suspect we’ll never be able to afford to go again, I take an actual camera and let fly like I work for National Geographic or something. Since my laptop’s on a data diet, all of these end up on a terabyte drive, hurriedly downloaded as IMG or DSCN files, waiting to be sorted later. Do this since the inception of digital photographs and you’ll get a sense of the magnitude of the problem. My laptop says it’s full and I have to delete images with that dire warning they’ll go away forever. I back them up. When was the last time I did this? I wrote it down, but I forgot where. What did I even name the file? Did I back it up or is it on my hard disc? Why are there eight copies of the same photograph? I spent the day sorting, virtually.
Before I knew it, the sun was beginning to set. I’d awoken at 4:00 (being a weekend I slept in), and after a day of organizing electronic photos into electronic folders, I’d barely made a dent. Deduping alone takes so much time. Some of the pictures, while nice, I couldn’t remember at all. I shudder, though, thinking about grandparents who burned old photos because nobody remembered who they were any more. Then I realized that our lives are the most documented of any in history (so far) but nobody really cares. You could learn an awful lot about some stranger just by going through their photos—where they’ve been, what they thought important, and just how obsessive they could be. As I wound up the day, I realized why I don’t get out much any more.
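For fellow sufferers of the eight-copies problem: the deduping, at least, can be delegated to a few lines of Python. This is a minimal sketch (read-only—it finds duplicates but deletes nothing), assuming duplicates are byte-identical copies of the same IMG or DSCN file; the folder path and file extensions are placeholders to adjust for your own terabyte drive.

```python
import hashlib
from pathlib import Path

# File extensions to consider; adjust for your camera's output.
IMAGE_SUFFIXES = {".jpg", ".jpeg", ".png", ".nef", ".tif"}

def find_duplicates(root):
    """Group image files under `root` by SHA-256 content hash.

    Files with identical bytes hash identically, so each returned
    group is a set of exact duplicates. Returns only groups with
    more than one file.
    """
    by_hash = {}
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix.lower() in IMAGE_SUFFIXES:
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash.setdefault(digest, []).append(path)
    return [group for group in by_hash.values() if len(group) > 1]
```

It won’t catch the same photo saved at two different resolutions (that needs perceptual hashing, a bigger project), but it will flag the hurried copy-of-a-copy downloads, leaving the human the part only a human can do: deciding which memories to keep.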
’Tis the season for returning from the dead. Goodreads is one of the few websites that I allow to send me notices. I try to check them daily, and I even read their monthly updates of new books by authors I’ve read. I was a bit surprised when November’s newsletter began with The Andromeda Evolution by Michael Crichton. I really enjoyed The Andromeda Strain when I was in high school. The fact that I was in high school four decades ago made me wonder about the robustness of Dr. Crichton, especially since I knew that he had died over a decade ago himself. I don’t know about you, but the writing industry feels crowded enough without dead people staying in the competition. It’s like those professors who refuse to retire, but also refuse to teach or do research. Some people, apparently, can never get enough.
We live in an era of extreme longevity. In the scope of human history, people haven’t lived so long since before the flood. Some of us—not a few, mind you—work in fields with limited job openings. We are the sort who don’t really get the tech craze, intelligent Luddites who’d rather curl up in the corner with an actual book. There are very few professorships available. Even fewer editorships. And anyone who’s tried to get an agent without being one of the former knows that there are far too many writers out there. Now the dead keep cranking ’em out. I’ve got half a dozen unpublished novels sitting right here on my lap. Crichton’s gone the way of all flesh, but with an active bank account.
The end result of this Novemberish turn of events is that I want to read The Andromeda Strain again. I never posted it to Goodreads, since when I read it the internet itself wasn’t even a pipe dream, except perhaps in the teenage fantasies of some sci-fi fans. Since you can’t rate a book twice on Goodreads, and because paper books don’t disappear when you upgrade your device, I can do it. I can actually walk to the shelf and pull a vintage mass-market paperback off it. Even if the Earth passes through the tail of some comet and all networks are down. And I seem to recall that the original strain came from outer space. As did the strange radiation that brought the ghouls back to life in Night of the Living Dead. Now if only some of the rest of us might get in on the action.
Whose computer is this? I’m the one who paid for it, but it is clearly the one in control in this relationship. You see, if the computer fails to cooperate there is nothing you can do. It’s not human and despite what the proponents of AI say, a brain is not just a computer. Now I’m not affluent enough to replace old hardware when it starts slowing down. Silicon Valley—and capitalism in general—hate that. I suppose I’m not actually paid well enough to own a computer. I started buying laptops for work when Nashotah House wouldn’t provide faculty with computers. Then as an itinerant adjunct it was “have laptop, will travel (and pay bills).” I even bought my own projector. At least I thought I was buying it.
I try to keep my software up to date. The other day a red dot warned me that I had to clear out some space on my disc so Catalina could take over. It took three days (between work and serving the laptop) to back up and delete enough files to give it room. I started the upgrade while I was working, when my personal laptop can rest. When I checked in it hadn’t installed. Throwing a string of technical reasons at me in a dialogue box, my OS told me that I should try again. Problem was, it told me this at 3:30 in the morning, when I do my own personal work. I had no choice. One can’t reason with AI. When I should’ve been writing I was rebooting and installing, a process that takes an hour from a guy who doesn’t have an hour to give.
As all of this was going on I was wondering who owned whom. In college professors warned against “keyboard compositions.” These were literal keyboards and they meant you shouldn’t type up your papers the night before they were due, writing them on your typewriter. They should’ve been researched and “written” before being typed up. That’s no longer an option. This blog has well over a million words on it. Who has time to handwrite a million words, then type them all up in time to post before starting work for the day? And that’s in addition to the books and articles I write for actual publication. And the novels and short stories. For all of this I need my laptop, the Silver to my Lone Ranger, to be ready when I whistle. Instead it’s dreaming its digital dreams and I’m up at 3:30 twiddling my thumbs.