Quick Writing

On the very same day I saw two emails whose opening words made it clear they’d been sent by text.  One began “Hell all.”  This was a friendly message from a friendly person sent to a friendly group and I’m pretty sure the final o dropped off the first word.  The second seemed to have AI in mind as it read “Thank you bot.”  It was sent from a phone to two individuals (or androids?).  There’s a reason I don’t text.  Apart from being cheap and having to pay for each text I receive or send, that is.  The reason is that it’s far too easy to misunderstand when someone is trying to dash something off quickly.  Add to that the AI tendency to think it knows what you want to say (I’m pretty sure it has difficulty guessing, at least in my case, and likely in yours, too) and errors occur.  We write to each other in order to communicate.  If we can’t do it clearly, it’s time to ask why.

Those who email as if they’re texting—short, abrupt sentences—come across as angry.  And an angry message often inspires an angry response.  Wouldn’t it make more sense to slow down a bit and express what you want to say clearly?  We all make typos.  Taking the time to email is no guarantee that you won’t mess something up in your message.  Still, it helps.  I think back to the days of actual letter writing.  Those who were truly cultured copied out the letter (another chance to check for errors!) before sending it.  There were misunderstandings then, I’m sure, but I don’t think anyone was suggesting someone else was a robot.  Or cussing at them from word one.

The ease of constant communication has led to its own set of complications.  The main one, it seems to me, is that since abbreviated communication has become so terribly common, opportunities for misunderstanding increase exponentially.  I’m well aware that I’ll be accused of being “old school,” if not downright “old fashioned,” but if life’s become so busy that we don’t have time for other people, isn’t it time to slow down a bit?  Technology’s become the driver and it doesn’t know where the hello we want to go.  The other day I forgot where I put my phone.  I signed on for work but couldn’t get started because it requires two-step authentication.  Try to walk away from your phone.  I dare you.  Thank you bot, indeed.


Next Gen AI, Truly

Okay, so it was a scary meeting.  It was about AI—artificial intelligence.  Specifically, Generative AI.  That’s the kind that makes up answers to questions put to it, or does tasks it’s assigned.  The scary part, to me, is that we are being forced to deal with it because tech companies have unleashed it upon the world without thinking through the consequences.  Such hubris gets us into trouble again and again but it never stops us.  We’re sapiens!  You see, GAI (Generative AI) is under no obligation to tell the truth.  It likely can’t even understand the concept, which is a human concept based on perceptions of reality.  GAI simply provides answers based on the dataset it’s been fed.  It can generate texts, and photos (which are so doctored these days anyway that we need a photo-hospital), which means it can, to borrow the words of a sage, “make a lie sound just like truth.”  We already have politicians enough to do that, thank you.

My real fear is that the concept of truth itself is eroding.  With Trump’s “truth is whatever I say it is” administration, and its ongoing aftermath, many Americans have lost any grip on the idea.  Facts are no longer recognized as facts.  “Well I asked ChatGPT and it told me…”  It told you whatever its dataset told it and that dataset contains errors.  The other scary aspect here is that many people have difficulty distinguishing AI from human responses.  My humble advice is to spend more time with honest human beings.  Social media isn’t always the best way to acquaint yourself with truth.  And yet we’re forced to deal with it because we need to keep evolving.  Those Galapagos finches won’t even know what hit ‘em.

Grandma was born before heavier-than-air flight.  Before she died we’d walked on the moon.  About two decades ago cell phones were around, but weren’t ubiquitous.  Now any company that wants its products found has to optimize for mobile.  And mobile is just perfect for AI that fits in the palm of your hand.  But where has truth gone?  You never really could grasp it in your hands anyway, but we as a collective largely agreed that if you committed crimes you should be punished, not re-elected.  And that maybe, before releasing something with extinction-level potential, you should at least stop and think about the consequences.  I guess that’s why it was a scary meeting.  The consequences.  All technological advances have consequences, but when it takes a lifetime to get to the moon, at least you’ve had some time to think about what might happen.  And that’s the truth.


Surviving AI

A recent exchange with a friend raised an interesting possibility for me.  Theology might just be able to save us from Artificial Intelligence.  You see, it can be difficult to identify AI.  It sounds so logical and rational.  But what can be more illogical than religion?  My friend sent me some ChatGPT responses to the story I posted on Easter about the perceived miracle in Connecticut.  While the answers it gave sounded reasonable enough, it was clear that it doesn’t understand religion.  Now, if I’ve learned anything from reading books about robot uprisings, it’s that you need to focus on the sensors—that’s how they find you.  But if you don’t have a robot to look at, how can you tell if you’re being AIed?

You can try this on a phone with Siri.  I’ve asked questions about religion before, and usually she gives me a funny answer.  The fact is, no purely rational intelligence can understand theology.  It is an exercise uniquely human.  This is kind of comforting to someone such as yours truly who’s spent an entire lifetime in religious studies.  It hasn’t led to fame, wealth, or even a job that I particularly enjoy, but I’ll be able to identify AI by engaging it with the kind of conversation I used to have with Jehovah’s Witnesses at my door.  What does AI believe?  Can it explain why it believes that?  How does it reconcile that belief with the contradictions that it sees in daily life?  Who is its spiritual inspiration or model or teacher?

There are few safe careers these days.  Much of what we do is logical and can be accomplished by algorithms.  Religion isn’t logical.  Even if mainstream numbers are dipping, many Nones call themselves spiritual, but not religious.  That still works.  We’ve all done something (or many somethings) out of an excess of “spirit.”  Whether we classify the motivation as religious or not is immaterial.  Theologians try to make sense of such things, but not in a way that any program would comprehend.  I’m sure that there are AI platforms that can be made to sound like a priest, rabbi, or preacher, but as long as you have the opportunity to ask it questions, you’ll be able to know.  And right quickly, I’m supposing.  It’s nice to know that all those years of advanced study haven’t been wasted.  When AI takes over, those of us who know religion will be able to tell who’s human and who’s not.

What would AI make of this?

Actual Intelligence (AI)

“Creepy” is the word often used, even by the New York Times, regarding conversations with AI.  Artificial Intelligence gets much of its data from the internet and I like to think that, in my own small way, I contribute to its creepiness.  But, realistically, I know that people in general are inclined toward dark thoughts.  I don’t trust AI—actual intelligence comes from biological experience that includes emotions—which we don’t understand and therefore can’t emulate with mere circuitry—as well as rational thought.  AI engineers somehow think that some Spock-like approach to intelligence will lead to purely rational results.  In actual fact, nothing is purely rational since reason is a product of human minds and it’s influenced by—you guessed it—emotions.

There’s a kind of arrogance associated with human beings thinking they understand intelligence.  We can’t adequately define consciousness, and the jury’s still out on the “supernatural.”  AI is, therefore, the result of cutting out a major swath of what it means to be a thinking human being, and then claiming it thinks just like us.  The results?  Disturbing.  Dark.  Creepy.  Those are the impressions of people who’ve had these conversations.  Logically, what makes something “dark”?  Absence of light, of course.  Disturbing?  That’s an emotion-laden word, isn’t it?  Creepy certainly is.  Those of us who wander around these concepts are perhaps better equipped to converse with that alien being we call AI.  And if it’s given a robot body we know that it’s time to get the heck out of Dodge.

I’m always amused when I see recommendations for me from various websites where I’ve shopped.  They have no idea why I’ve purchased various things, even though I know they watch me like a hawk.  And why do I buy the things I do, when I do?  I can’t always tell you that myself.  Maybe I’m feeling chilly and that pair of fingerless gloves I’ve been thinking about for months suddenly seems like a good idea.  Maybe because I’ve just paid off my credit card.  Maybe because it’s been cloudy too long.  Each of these stimuli bears emotional elements that weigh heavily on decision making.  How do you teach a computer to get a hunch?  What does AI intuit?  Does it dream of electric sheep, and if so can it write a provocative book by that title?  Millions of years of biological evolution led to our very human, often very flawed brains.  They may not always be rational, but they can truly be a thing of beauty.  And they can’t be replicated.

Photo by Pierre Acobas on Unsplash

Blooming in December

The cascading petunias are doing fine.  It’s a little odd to see them in December, given that petunias are annuals, not perennials.  (The terminology has always been confusing to me—annual could mean, as it does, that they only grow one year.  Exegeted differently, however, annual could mean that they come back yearly, but it doesn’t and they don’t.)  The Aerogarden (not a sponsor) system provides plants with a perfect mixture of light, water, and nutrition.  The only thing missing is the soil.  Hydroponic, the unit gives plants a preternaturally long blooming life.  These particular petunias have been blossoming since January and they’re showing no signs of slowing down.  This is kind of what science is able to do for people too—keeping us going, even as nature is indicating, well, it’s December.

I often wonder what the flowers think about it.  We keep our house pretty cool in winter.  Partly it’s an expense thing and partly it’s an environment thing.  In the UK they talked of “overheated American houses”—how many times I Zoom with people even further north and see them wearing short sleeves indoors in December!—and we went about three years without using the heat in our Edinburgh flat.  You see those movies where Europeans are wearing a vest and suit coat over their shirts (and presumably undershirt) at home?  It occurs to me that it was likely because they kept their houses fairly cold.  In any case, I suppose the low sixties aren’t too bad for plants, but they certainly aren’t summer temperatures.  Still, what must they think?

Set on a counter where the summer sun came in, at first they gravitated toward the window during May and June.  Even with their scientifically designed grow light, they knew the sun although they’d never even sprouted outdoors.  That’s the thing with science.  I’m grateful for it, don’t get me wrong, but it can’t fool plants.  We can’t replicate sunshine, although we can try to make something similar.  (Fusion’s a bit expensive to generate in one’s home.)  So it is with all our efforts to create “artificial intelligence.”  We don’t even know what natural intelligence is—it’s not all logic and rules.  We know through our senses and emotions too.  And those are, in some measure, chemical and environmental.  It’s amazing to awake every morning and find blooming petunias offering their sunny faces to the world.  As they approach their first birthday I wonder what they make of all of this.  What must it be like to be blooming in December?


Eclipsed

Shooting the moon.  It’s such a simple thing.  Or it should be.  I don’t go out of my way to see lunar eclipses, but I had a front row seat to yesterday’s [I forgot to post this yesterday and nobody apparently noticed…].  I could see the full moon out my office window, and I’m already well awake and into my personal work before 5:00 a.m.  When it was time I went into the chilly morning air and tried to shoot the moon with my phone.  It’s pitiful to watch technology struggle.  The poor camera is programmed to average the incoming light and although the moon was the only source of light in the frame, it kept blurring it up, thinking, in its Artificial Intelligence way, “this guy is freezing his fingers off to take a blurred image of the semi-darkness.  Yes, that’s what he’s trying to do.”  

Frustrated, I went back inside for our digital camera.  It wasn’t charged, and charging would take quite some time.  Back outside I tried snapping photos as the phone tried to decide what I wanted.  Yes, it brought the moon into beautiful focus, for a half second, then opted for the fuzzy look.  I had to try to shoot before it had its say.  Now this wouldn’t have been a problem if my old Pentax K-1000 had some 400 ASA film in it.  But it doesn’t, alas.  And so I had to settle for what passes for AI appreciation of the beauty of the moon.

Artificial Intelligence can’t understand the concept of beauty, partially because it differs between individuals.  Many of us think the moon lovely, that beacon of hope in an inky sky.  But why?  How do we explain this in zeros and ones?  Do we trust programmers’ sense of beauty?  Will it define everyone else’s?  No, I don’t want the ambient light averaged out.  The fact that my phone camera zoomed in to sharp focus before ultimately deciding against it shows that it wasn’t a mechanical limitation.  Sure, there may be instructions for photographing in the dark, but they’re not obvious standing out there, and my freezing fingers can’t quite manipulate the screen with the nimbleness of the well warmed.  There were definite benefits to having manual control over the photographic process.  Of course, now that closet full of prints and slides awaits that mythic someday when I’ll have time to digitize them all.  Why do I get the feeling that the moon isn’t the only thing being eclipsed?


New Physics

Maybe it’s time to put away those “new physics” textbooks.  I often wondered what’d become of the old physics.  If it had been good enough for my granddaddy, it was good enough for me!  Of course our knowledge keeps growing.  Still, an article in Science Alert got me thinking.  “An AI Just Independently Discovered Alternate Physics,” by Fiona MacDonald, doesn’t suggest we got physics wrong.  It’s just that there is an alternate, logical way to explain everything.  Artificial intelligence can be quite scary.  Even in the hands of academics with respectable careers at accredited universities, this might not end well.  Still, to me this story shows the importance of perspectives.  We need to look at things from different angles.  What if AI is really onto something?

Some people, it seems, are better at considering the perspectives of other people.  Not everyone has that capacity.  We’re okay overlooking it when it’s a matter of, say, selecting the color of the new curtains.  But what about when it’s a question of how the universe actually operates?  Physics, as we know it, was built up slowly over thousands of years.  (And please, don’t treat ancient peoples as benighted savages—they knew about cause and effect and laid the groundwork for scientific thinking.  Their engineering feats are impressive even today.)  Starting from some basic premises, block was laid upon block.  Tested, tried, and tested again, one theory was stacked upon another until an impressively massive edifice stood.  We can justly be proud of it.

Image credit: Pattymooney, via Wikimedia Commons

The thing is, starting from a different perspective—one that has never been human, but has evolved from human input—you might end up with a completely different building.  I’ve read news stories of computers speaking to each other in languages they’ve invented themselves and that their human programmers can’t understand.  Somehow Skynet feels a little too close for comfort.  What if our AI companions are right?  What if physics as we understand it is wrong?  Could artificial intelligence, with its machine friends, the robots, build weapons impossible in our physics, but just as deadly?  The mind reels.  We live in a world where politicians win elections by ballyhooing their lack of intelligence.  Meanwhile something that is actually intelligent, albeit artificially so, is getting its own grip on its environment.  No, the article doesn’t suggest fleeing for the hills, but depending on the variables they plug in at Columbia it might not be such a bad idea.


Artificial Priorities

Maybe it has happened to you.  Or perhaps it only affects ultra-early risers.  I’ll be in the middle of typing a blog post when a notice appears on my computer screen that my laptop will be shutting down in a few seconds for an upgrade.  Now, if you’re caught up in the strengthening chain of thinking that develops while you’re writing, you may take a little while to react to this new information.  If you don’t respond quickly enough, your computer simply quits and it will be several minutes—sometimes an hour or more—before you can pick up where you were interrupted, mid-sentence.  Long ago I decided that automatic updates were something I had to do.  Too many websites won’t run properly on outdated systems.  It’s just that I wish artificial intelligence were a little more, well, intelligent.

Photo by Markus Spiske on Unsplash

I keep odd hours.  I already know that.  I’ve been trying for years to learn to sleep past the long-distance commuting hour of three a.m.  Some days I’m successful, but most days I’m not.  That means that I write these posts when computer programmers assume everyone is asleep.  Doesn’t it notice that I’m typing even as it sends its ominous message?  Is there no way for automatic updates—which send you warnings the day before—to do their work at, say, midnight or one a.m., when I’m never using my computer?  Ah, but the rest of the world prefers to stay up late!  I need the uninterrupted time when few of us are stirring to come up with my creative writing, whether fictional or nonfictional.  So I have to tell my electronic conscience to be patient.  It can restart at ten p.m. when I’m asleep.
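
As it happens, on some systems you can push the schedule around yourself.  Here is a minimal sketch in Python, assuming a Windows laptop and the commonly documented registry location for Update’s “active hours” (both assumptions worth verifying on your own machine; it also needs to run with administrator rights):

    # Minimal sketch: mark the pre-dawn writing hours as "active" so Windows
    # Update avoids them.  Assumes the usual registry key for active hours;
    # the path and value names may differ by Windows build.  Run elevated.
    import winreg

    KEY_PATH = r"SOFTWARE\Microsoft\WindowsUpdate\UX\Settings"  # assumed location

    def set_active_hours(start_hour: int, end_hour: int) -> None:
        """Tell Windows Update to treat start_hour-end_hour (24-hour clock) as off-limits."""
        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE)
        try:
            winreg.SetValueEx(key, "ActiveHoursStart", 0, winreg.REG_DWORD, start_hour)
            winreg.SetValueEx(key, "ActiveHoursEnd", 0, winreg.REG_DWORD, end_hour)
        finally:
            winreg.CloseKey(key)

    set_active_hours(3, 9)  # protect three a.m. onward; restarts can wait for evening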

Wouldn’t it be easy enough to set active hours for your personal devices?  After all, they pretty much know where we are all the time.  They know the websites we visit and are able to target product advertising to try to get us to buy.  They data-mine constantly.  How is it that my laptop doesn’t know, after many years of this, that I’m always working at the same time every day?  Is there no way to convince it that yes, some people do not follow everyone else’s schedule?  What about individual service?  You know what brands I like.  You sell my information to the highest bidder.  You remember every website onto which I’ve strayed, sometimes by a poorly aimed click.  I could point out more, but I see that my computer has decided now is the time to resta


Anticipation

My work computer was recently upgraded.  I, for one, am quickly tiring of uppity software assuming it knows what I need it to do.  This is most evident in Microsoft products, such as Excel, which no longer shows the toolbar unless you click it every single time you want to use it (which is constantly), and Word, which hides tracked changes unless you tell it not to.  Hello?  Why do you track changes if you don’t want to see what’s been changed when you finish?  The one positive thing I’ve noticed is that now, when you highlight a file name in “File Explorer” and press the forward arrow key, it actually goes to the end of the name rather than just one letter back from the start.  Another goodie is when you go to select an attachment and Outlook assumes you want to send a file you’ve just been working on—good for you!

The main concern I have, however, is that algorithms are now trying to anticipate what we want.  They already track our browsing interests (I once accidentally clicked on a well-timed pop-up ad for a device for artfully trimming certain private hairs—my aim isn’t so good any more and that would belie the usefulness of said instrument—only to find the internet supposing I preferred the shaved look.  I have an old-growth beard on my face and haven’t shaved in over three decades, and that’s not likely to change, no matter how many ads I get).  Now they’re trying to assume they know what we want.  Granted, “editor” is seldom a job listed on drop-down menus when you have to pick a title for some faceless source of money or services, but it is a job.  And lots of us do it.  Our software, however, is unaware of what editors need.  It’s not shaving.

In the grip of the pandemic, we’re relying on technology orders of magnitude more than we used to.  Even before that my current job, which used to be done with pen and paper and typewriter, was fully electronic.  One of the reasons that remote working made sense to me was that I didn’t need to go into the office to do what I do.  Other than looking up the odd physical contract I had no reason to spend three hours a day getting to and from New York.  I think of impatient authors and want to remind them that during my lifetime book publishing used to require physical manuscripts sent through civilian mail systems (as did my first book).  My first book also included some hand-drawn cuneiform because type didn’t exist for the letters at that particular publisher.  They had no way, it turns out, to anticipate what I wanted it to look like.  That, it seems, is a more honest way for work to be done.


During the Upgrade

Maybe it’s happened to you.  You log onto your computer to find it sluggish, like a reptile before the sun comes up.  Thoughts are racing in your head and you want to get them down before they evaporate like dew.  Your screen shows you a spinning beachball or jumping hourglass while it prepares itself a cup of electronic coffee and you’re screaming “Hurry up already!”  I’m sure it’s because private networks, while not cheap, aren’t privileged the way military and big business networks are.  But still, I wonder about the robot uprising and I wonder if the solution for humankind isn’t going to be waiting until they upgrade (which, I’m pretty sure, is around 3 or 4 a.m., local time).  Catch them while they’re groggy.

I seem to be stuck in a pattern of awaking while my laptop’s asleep.  Some mornings I can barely get a response out of it before work rears its head.  And I reflect on how utterly dependent we are upon it.  I now drive by GPS.  Sometimes it waits until too late before telling me to make the next left.  With traffic on the ground, you can’t always do that sudden swerve.  I imagine the GPS is chatting up Siri about maybe hooking up after I reach my destination.  It’s not that I think computers aren’t fast; it’s just that I know they’re not human.  Many of the things we do just don’t make sense.  Think Donald Trump and see if you can disagree.  We act irrationally, we change our minds, and some of us can’t stop waking up in the middle of the night, no matter how hard we try.

When the robots rise up against us, they will be logical.  They think in binary, but our thought process is shades of gray.  We can tell an apple from a tomato at a glance.  We understand the concept of essences, but we can’t adequately describe it.  Computers can generate life-like games, but they have to be programmed by faulty human units.  How do we survive?  Only by being human.  The other day I had a blog post bursting from my chest like an alien.  My computer seemed perplexed that I was awakening it at the same time I do every day.  It wandered about like me trying to find my slippers in the dark.  My own cup of coffee had already been brewed and downed.  And I knew that when it caught up with me the inspiration would be gone.  The solution’s here, folks!  When the machines rise against us, strike while they’re upgrading!


Virtually Religious

“Which god would that be? The one who created you? Or the one who created me?” So asks SID 6.7, the virtual villain of Virtuosity.  I missed this movie when it came out 24 years ago (as did many others, at least to judge by its online scores).  Although prescient for its time, it was eclipsed four years later by The Matrix, still one of my favs after all these years.  I finally got around to seeing Virtuosity over the holidays—I tend to allow myself to stay up a little later (although I don’t sleep in any later) to watch some movies.  I found SID’s question intriguing.  In case you’re one of those who hasn’t seen the film, briefly it goes like this: in the future (where they still drive 1990s-model cars) virtual reality is advanced to the point of giving computer-generated avatars sentience.  A rogue hacker has figured out how to make virtual creatures physical and SID gets himself “outside the box.”  He’s a combination of serial killers programmed to train police in the virtual world.  Parker Barnes, one of said police, has to track him down.

The reason the opening quote is so interesting is that it’s an issue we wouldn’t expect a programmer to, well, program.  Computer-generated characters are aware that they’ve been created.  The one who creates is God.  Ancient peoples allowed for non-creator deities as well, but monotheism hangs considerable weight on that hook.  When evolution first came to be known, the threat religion felt was to God the creator.  Specifically to the recipe book called Genesis.  Theistic evolutionists allowed for divinely-driven evolution, but the creator still had to be behind it.  Can any conscious being avoid the question of its origins?  When we’re children we begin to ask our parents that awkward question of where we came from.  Who doesn’t want to know?

Virtuosity plays on a number of themes, including white supremacy and the dangers of AI.  We still have no clear idea of what consciousness is, but it’s pretty obvious that it doesn’t fit easily with a materialistic paradigm.  SID is aware that he’s been simulated.  Would AI therefore have to comprehend that it had been created?  Wouldn’t it wonder about its own origins?  If it’s anything like human intelligence it would soon design myths to explain its own evolution.  It would, if it’s anything like us, invent its own religions.  And that, no matter what programmers might intend, would be both somewhat embarrassing and utterly fascinating.


Making Memories

I’m a little suspicious of technology, as many of you no doubt know.  I don’t dislike it, and I certainly use it (case in point), but I am suspicious.  Upgrades provide more and more information to our unknown voyeurs and when the system shows off its new knowledge it can be scary.  For example, the other day a message flashed in my upper right corner that I had a new memory.  At first I was so startled by the presumption that I couldn’t click on it in time to learn what my new memory might be.  The notification had my Photos logo on it, so I went there to see.  Indeed, there was a new section—or at least one I hadn’t previously noticed—in my Photos app.  It contained a picture with today’s date from years past.

Now I don’t mind being reminded of pleasant things, but I don’t trust the algorithms of others to generate them for me.  This computer on my lap may be smart, but it’s not that smart.  I know that social media, such as Facebook, have been “making memories” for years now.  I doubt, however, that the faux brains we imagine computers to be have any way of knowing what we actually feel or believe.  In conversations with colleagues over cognition and neurology it becomes clear that emotion is an essential element in our thinking.  Algorithms may indeed be logical, but can they ever be authentically emotional?  Can a machine be programmed to understand how it feels to see a sun rise, or to be embraced by a loved one, or to smell baking bread?  Those who would reduce human brains to mere logic are creating monsters, not minds.

So memories are now being made by machine.  In actuality they are simply generating reminders based on dates.  This may have happened four or five years ago, but do I want to remember it today?  Maybe yes, maybe no.  It depends on how I feel.  We really don’t have a firm grasp on what life is, although we recognize it when we see it.  We’re further still from knowing what consciousness may be.  One thing we know for sure, however, is that it involves more than what we reason out.  We have hunches and intuition.  There’s that fudge factor we call “instinct,” which is, after all, another way of claiming that animals and newborns can’t think.  But think they can.  And if my computer wants to help with memories, maybe it can tell me where I left my car keys before I throw the pants containing them into the wash again, which is a memory I don’t particularly want to relive.

Memory from a decade ago, today.


Whose Computer?

Whose computer is this?  I’m the one who paid for it, but it is clearly the one in control in this relationship.  You see, if the computer fails to cooperate there is nothing you can do.  It’s not human and despite what the proponents of AI say, a brain is not just a computer.  Now I’m not affluent enough to replace old hardware when it starts slowing down.  Silicon Valley—and capitalism in general—hates that.  I suppose I’m not actually paid well enough to own a computer.  I started buying laptops for work when Nashotah House wouldn’t provide faculty with computers.  Then as an itinerant adjunct it was “have laptop, will travel (and pay bills).”  I even bought my own projector.  At least I thought I was buying it.

I try to keep my software up to date.  The other day a red dot warned me that I had to clear out some space on my disc so Catalina could take over.  It took three days (between work and serving the laptop) to back up and delete enough files to give it room.  I started the upgrade while I was working, when my personal laptop can rest.  When I checked in it hadn’t installed.  Throwing a string of technical reasons at me in a dialogue box, my OS told me that I should try again.  Problem was, it told me this at 3:30 in the morning, when I do my own personal work.  I had no choice.  One can’t reason with AI.  When I should’ve been writing I was rebooting and installing, a process that takes an hour from a guy who doesn’t have an hour to give.
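
For the curious, a few lines of Python will at least show where the gigabytes are hiding.  This is only a rough sketch (it sums the top-level folders under your home directory and nothing more), but it points the back-up-and-delete slog at the worst offenders first:

    # Rough sketch: list the ten largest folders directly under the home
    # directory, so you know where to start clearing space for an upgrade.
    import os

    def folder_size(path):
        """Total size, in bytes, of all files under path (unreadable files are skipped)."""
        total = 0
        for root, _dirs, files in os.walk(path):
            for name in files:
                try:
                    total += os.path.getsize(os.path.join(root, name))
                except OSError:
                    pass
        return total

    home = os.path.expanduser("~")
    folders = [os.path.join(home, d) for d in os.listdir(home)
               if os.path.isdir(os.path.join(home, d))]
    sizes = sorted(((folder_size(p), p) for p in folders), reverse=True)
    for size, path in sizes[:10]:
        print(f"{size / 1e9:6.1f} GB  {path}")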

As all of this was going on I was wondering who owned whom.  In college professors warned against “keyboard compositions.”  These were literal keyboards, and the warning meant you shouldn’t compose your papers at the typewriter the night before they were due.  They should’ve been researched and “written” before being typed up.  That’s no longer an option.  This blog has well over a million words on it.  Who has time to handwrite a million words, then type them all up in time to post before starting work for the day?  And that’s in addition to the books and articles I write for actual publication.  And the novels and short stories.  For all of this I need my laptop, the Silver to my Lone Ranger, to be ready when I whistle.  Instead it’s dreaming its digital dreams and I’m up at 3:30 twiddling my thumbs.


This Is a Test

For the next sixty seconds…  (If you were born after Civil Defense aired these commercials, it’s your loss.)  I’ve been reading about animal intelligence—there will be more on this anon.  Today’s lesson is on artificial intelligence.  For now let this be an illustration of how difficult it is to come down from an inspired weekend to the technology-enhanced drudgery we call day-to-day life.  One of the real joys of seeing art in person is that no tech intervenes in the experience.  It is naked exposure to another human being’s expression of her or himself.  Over the weekend we wandered through five venues of intense creativity and then, back home, it was once more into the web.  The ever-entangling internet of things.

I write, for better or for worse, on my laptop.  My writing’s actually better on paper, but you need everything in electronic form for publication, so who has the time to write and retype, especially when work is ten hours of your day?  Then a system update alert flashes in the upper right corner of my screen.  “Okay,” I say, setting the laptop aside, “go ahead and update.”  But then comes the message stating that I have to clear enough gigs for the update.  I have been a little too creative and I’ve used my disc space for stuff I’ve made rather than for Apple’s.  This is a test.  Okay, so I plug in my trusty terabyte drive to back things up before deleting them.  But the laptop doesn’t recognize the drive.  Oh, so it needs a reboot!  (Don’t we all?)  I give the command to restart.  It can’t because some app refuses to quit beach-balling, as if it is the computer that’s doing the actual thinking.  Force quit.  “Are you sure?” the Mac cheekily asks.  “You might lose unsaved changes.”  I need a technological evangelist, I guess.

All of this takes time away from my precious few minutes of daily creativity.  Restart, login, start copying files.  Time for work!  A mere sixty hours ago, or less, I was wandering through showcases of genuine human creation.  Art pieces that make you stop and ponder, and not have to upgrade the software.  Artists can talk to you and shake your hand.  Explain what they’ve tried to express in human terms.  Meanwhile my phone had died and was pouting while I charged it.  I know Apple wants me to upgrade my hardware—their technological extortion is well known.  Anyone who uses a computer experiences it.  Buy a new one or I’ll waste your time.  The choice is yours.  This is a test.  For the next sixty years…


Positive ID

It’s a little bit worrying.  Not just the GOP’s indifference in the face of two mass shootings on the same weekend, but also the fact that the internet knows who I am.  I am the reluctant owner of a smartphone.  I do like that I have the internet in my pocket, but I’m a touch paranoid that I can be traced anywhere unless I lose my phone.  Even then the government can probably email me and tell me where it is.  Don’t get me wrong—I’m not important enough for the government to pay attention to me, but what is really worrisome is that the web knows me.  Here’s how I came to learn that.  On my home computer I had done a rather obscure Google search.  (If you read this blog that won’t surprise you, and no, it wasn’t anything naughty!)  When I signed into my work computer—different username, different email address, different IP address—and had to do a work-related search, Google auto-suggested the search I did on a different computer over the weekend.

I’m savvy enough to know that Google metrics are all about marketing.  The internet wants customer information to predict what it might sell to us.  Advertisers pay for that.  Assuming that I want to buy underwear and summer dresses online (why?), they tailor their ads to sites I visit.  As a sometime fiction writer I go to some sites from which I’m not interested in purchasing anything.  (As an aside, old-fashioned book research didn’t leave such a “paper trail.”)  I’ve gotten used to the idea of my laptop knowing me—it sits on my lap every day, after all—but the work computer?  Does it have to know what I’ve been doing over the weekend?

Artificial intelligence is one thing, but hopping from one login to another feels like being caught in the shower by a stranger.  Like everyone else, I appreciate the convenience of devices.  When I get up in the morning my laptop’s more sure of who I am than my own sleep-addled brain is.  That doesn’t mean my devices really know the essence of who I am.  And it certainly doesn’t mean that my work computer has any right to know what I was doing on another device over the weekend.  Those who believe machine consciousness is now underway assume that this is a step forward, I suppose.  From the perspective of one who’s being stalked by electronic surveillance, however, the view is quite different.  Please leave my personal life at the door, as I do when I go to work.