Blooming in December

The cascading petunias are doing fine.  It’s a little odd to see them in December, given that petunias are annuals, not perennials.  (The terminology has always been confusing to me—annual could mean, as it does, that they only grow one year.  Exegeted differently, however, annual could mean that they come back yearly, but it doesn’t and they don’t.)  The Aerogarden (not a sponsor) system provides plants with a perfect mixture of light, water, and nutrition.  The only thing missing is the soil.  Hydroponic, the unit lets plants prolong their blooming life preternaturally.  These particular petunias have been blossoming since January and they’re showing no signs of slowing down.  This is kind of what science is able to do for people too—keeping us going, even as nature is indicating, well, it’s December.

I often wonder what the flowers think about it.  We keep our house pretty cool in winter.  Partly it’s an expense thing and partly it’s an environment thing.  In the UK they talked of “overheated American houses”—how many times I Zoom with people even further north and see them wearing short sleeves indoors in December!—and we went about three years without using the heat in our Edinburgh flat.  You see those movies where Europeans are wearing a vest and suit coat over their shirts (and presumably undershirts) at home?  It occurs to me that it was likely because they kept their houses fairly cold.  In any case, I suppose the low sixties aren’t too bad for plants, but they certainly aren’t summer temperatures.  Still, what must they think?

Set on a counter where the summer sun came in, at first they gravitated toward the window during May and June.  Even with their scientifically designed grow light, they knew the sun although they’d never even sprouted outdoors.  That’s the thing with science.  I’m grateful for it, don’t get me wrong, but it can’t fool plants.  We can’t replicate sunshine, although we can try to make something similar.  (Fusion’s a bit expensive to generate in one’s home.)  So it is with all our efforts to create “artificial intelligence.”  We don’t even know what natural intelligence is—it’s not all logic and rules.  We know through our senses and emotions too.  And those are, in some measure, chemical and environmental.  It’s amazing to awake every morning and find blooming petunias offering their sunny faces to the world.  As they’re approaching their first birthday I wonder what they think about all of this.  What must it be like to be blooming in December?


Eclipsed

Shooting the moon.  It’s such a simple thing.  Or it should be.  I don’t go out of my way to see lunar eclipses, but I had a front row seat to yesterday’s [I forgot to post this yesterday and nobody apparently noticed…].  I could see the full moon out my office window, and I’m already well awake and into my personal work before 5:00 a.m.  When it was time I went into the chilly morning air and tried to shoot the moon with my phone.  It’s pitiful to watch technology struggle.  The poor camera is programmed to average the incoming light and although the moon was the only source of light in the frame, it kept blurring it up, thinking, in its Artificial Intelligence way, “this guy is freezing his fingers off to take a blurred image of the semi-darkness.  Yes, that’s what he’s trying to do.”  
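
A rough sketch of why this happens (toy numbers of my own, not anything from the phone’s actual firmware): an exposure routine that averages the whole frame sees an almost entirely black sky and calls for a long exposure, while one that meters only the bright spot would keep the moon sharp.

```python
import numpy as np

# A toy night-sky frame: almost everything black, with one small bright "moon."
frame = np.zeros((1000, 1000))        # 0.0 = black, 1.0 = full brightness
frame[480:520, 480:520] = 1.0         # the moon fills about 0.16% of the frame

def exposure_time(metered_brightness, base=1 / 250):
    """Longer exposure for darker readings (crudely simplified reciprocity)."""
    return base / max(metered_brightness, 1e-3)

average_reading = frame.mean()         # ~0.0016: "this scene is pitch dark"
spot_reading = frame.max()             # 1.0: "the subject itself is bright"

print(exposure_time(average_reading))  # ~2.5 seconds: moon blown out and blurred
print(exposure_time(spot_reading))     # 0.004 seconds (1/250): moon stays sharp
```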

Frustrated, I went back inside for our digital camera.  It wasn’t charged up and it would take quite some time to charge.  Back outside I tried snapping photos as the phone tried to decide what I wanted.  Yes, it focused the moon beautifully, for a half second, then decided on the fuzzy look.  I had to try to shoot before it had its say.  Now this wouldn’t have been a problem if my old Pentax K-1000 had some 400 ASA film in it.  But it doesn’t, alas.  And so I had to settle for what passes for AI appreciation of the beauty of the moon.

Artificial Intelligence can’t understand the concept of beauty, partially because it differs between individuals.  Many of us think the moon lovely, that beacon of hope in an ichor sky.  But why?  How do we explain this in zeros and ones?  Do we trust programmers’ sense of beauty?  Will it define everyone else’s?  No, I don’t want the ambient light averaged out.  The fact that my phone camera zoomed in to sharp focus before ultimately deciding against it shows that it wasn’t a mechanical incapability.  Sure, there may be instructions for photographing in the dark, but they’re not obvious standing out here and my freezing fingers can’t quite manipulate the screen with the nimbleness of the well warmed.  There were definite benefits to having manual control over the photographic process.  Of course, now that closet full of prints and slides awaits that mythic some day when I’ll have time to digitize them all.  Why do I get the feeling that the moon isn’t the only thing being eclipsed?


New Physics

Maybe it’s time to put away those “new physics” textbooks.  I often wondered what’d become of the old physics.  If it had been good enough for my granddaddy, it was good enough for me!  Of course our knowledge keeps growing.  Still, an article in Science Alert got me thinking.  “An AI Just Independently Discovered Alternate Physics,” by Fiona MacDonald, doesn’t suggest we got physics wrong.  It’s just that there is an alternate, logical way to explain everything.  Artificial intelligence can be quite scary.  Even when it’s addressed by academics with respectable careers at accredited universities, this might not end well.  Still, to me this story shows the importance of perspectives.  We need to look at things from different angles.  What if AI is really onto something?

Some people, it seems, are better at considering the perspectives of other people.  Not everyone has that capacity.  We’re okay overlooking it when it’s a matter of, say, selecting the color of the new curtains.  But what about when it’s a question of how the universe actually operates?  Physics, as we know it, was built up slowly over thousands of years.  (And please, don’t treat ancient peoples as benighted savages—they knew about cause and effect and laid the groundwork for scientific thinking.  Their engineering feats are impressive even today.)  Starting from some basic premises, block was laid upon block.  Tested, tried, and tested again, one theory was laid upon another until an impressively massive edifice was made.  We can justly be proud of it.

Image credit: Pattymooney, via Wikimedia Commons

The thing is, starting from a different perspective—one that has never been human, but has evolved from human input—you might end up with a completely different building.  I’ve read news stories of computers speaking to each other in languages they’ve invented themselves and that their human programmers can’t understand.  Somehow Skynet feels a little too close for comfort.  What if our AI companions are right?  What if physics as we understand it is wrong?  Could artificial intelligence, with its machine friends, the robots, build weapons impossible in our physics, but just as deadly?  The mind reels.  We live in a world where politicians win elections by ballyhooing their lack of intelligence.  Meanwhile something that is actually intelligent, albeit artificially so, is getting its own grip on its environment.  No, the article doesn’t suggest fleeing for the hills, but depending on the variables they plug in at Columbia it might not be such a bad idea.


Artificial Priorities

Maybe it has happened to you.  Or perhaps it only affects ultra-early risers.  I’ll be in the middle of typing a blog post when a notice appears on my computer screen that my laptop will be shutting down in a few seconds for an upgrade.  Now, if you’re caught up in the strengthening chain of thinking that develops while you’re writing, you may take a little while to react to this new information.  If you don’t respond quickly enough, your computer simply quits and it will be several minutes—sometimes an hour or more—before you can pick up where you were interrupted, mid-sentence.  Long ago I decided that automatic updates were something I had to do.  Too many websites couldn’t run things properly with old systems.  It’s just that I wish artificial intelligence were a little more, well, intelligent.

Photo by Markus Spiske on Unsplash

I keep odd hours.  I already know that.  I’ve been trying for years to learn to sleep past the long-distance commuting hour of three a.m.  Some days I’m successful, but most days I’m not.  That means that I write these posts when computer programmers assume everyone is asleep.  Doesn’t it notice that I’m typing even as it sends its ominous message?  Is there no way for automatic updates—which send you warnings the day before—to do their work at, say, midnight or one a.m., when I’m never using my computer?  Ah, but the rest of the world prefers to stay up late!  I need the uninterrupted time when few of us are stirring to come up with my creative writing, whether fictional or nonfictional.  So I have to tell my electronic conscience to be patient.  It can restart at ten p.m. when I’m asleep.
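
It isn’t hard to sketch what a less oblivious updater might do: look at a record of when the machine has actually been in use and pick the quietest hour.  This is only an illustration; the usage_hours list and the schedule_update function below are made up for the sketch, not any real operating system’s API.

```python
from collections import Counter

# Hypothetical activity log: the hour of day for each stretch the machine was
# actually in use over the past month (a real updater would pull this from the OS).
usage_hours = [3, 4, 5, 6, 7, 17, 18, 19, 3, 4, 5, 6, 18, 19, 20, 3, 4, 5]

def quietest_hour(hours):
    """Pick the hour of day with the least recorded activity."""
    counts = Counter(hours)
    return min(range(24), key=lambda h: counts.get(h, 0))

def schedule_update(hour):
    # Placeholder: a real updater would hand this window to the system scheduler.
    print(f"Scheduling update for {hour:02d}:00, when this user is never typing.")

schedule_update(quietest_hour(usage_hours))   # here: 00:00, not three or four a.m.
```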

Wouldn’t it be easy enough to set active hours for your personal devices?  After all, they pretty much know where we are all the time.  They know the websites we visit and are able to target product advertising to try to get us to buy.  They data-mine constantly.  How is it that my laptop doesn’t know, after many years of this, that I’m always working at the same time every day?  Is there no way to convince it that yes, some people do not follow everyone else’s schedule?  What about individual service?  You know what brands I like.  You sell my information to the highest bidder.  You remember every website onto which I’ve strayed, sometimes by a poorly aimed click.  I could point out more, but I see that my computer has decided now is the time to resta


Anticipation

My work computer was recently upgraded.  I, for one, am quickly tiring of uppity software assuming it knows what I need it to do.  This is most evident in Microsoft products, such as Excel, which no longer shows the toolbar unless you click it every single time you want to use it (which is constantly), and Word, which hides tracked changes unless you tell it not to.  Hello?  Why do you track changes if you don’t want to see what’s been changed when you finish?  The one positive thing I’ve noticed is that now when you highlight a file name in “File Explorer” and press the forward arrow key it actually goes to the end of the title rather than just one letter back from the start.  Another goodie is when you go to select an attachment and Outlook assumes you want to send a file you’ve just been working on—good for you!

The main concern I have, however, is that algorithms are now trying to anticipate what we want.  They already track our browsing interests (I once accidentally clicked on a well-timed pop-up ad for a device for artfully trimming certain private hairs—my aim isn’t so good any more and that would belie the usefulness of said instrument—only to find the internet supposing I preferred the shaved look.  I have an old-growth beard on my face and haven’t shaved in over three decades, and that’s not likely to change, no matter how many ads I get).  Now they’re trying to assume they know what we want.  Granted, “editor” is seldom a job listed on drop-down menus when you have to pick a title for some faceless source of money or services, but it is a job.  And lots of us do it.  Our software, however, is unaware of what editors need.  It’s not shaving.

In the grip of the pandemic, we’re relying on technology by orders of magnitude more than before.  Even before that my current job, which used to be done with pen and paper and typewriter, was fully electronic.  One of the reasons that remote working made sense to me was that I didn’t need to go into the office to do what I do.  Other than looking up the odd physical contract I had no reason to spend three hours a day getting to and from New York.  I think of impatient authors and want to remind them that during my lifetime book publishing used to require physical manuscripts sent through civilian mail systems (as did my first book).  My first book also included some hand-drawn cuneiform because type didn’t exist for the letters at that particular publisher.  They had no way, it turns out, to anticipate what I wanted it to look like.  That, it seems, is a more honest way for work to be done.


During the Upgrade

Maybe it’s happened to you.  You log onto your computer to find it sluggish, like a reptile before the sun comes up.  Thoughts are racing in your head and you want to get them down before they evaporate like dew.  Your screen shows you a spinning beachball or jumping hourglass while it prepares itself a cup of electronic coffee and you’re screaming “Hurry up already!”  I’m sure it’s because private networks, while not cheap, aren’t privileged the way military and big business networks are.  But still, I wonder about the robot uprising and I wonder if the solution for humankind isn’t going to be waiting until they upgrade (which, I’m pretty sure, is around 3 or 4 a.m., local time).  Catch them while they’re groggy.

I seem to be stuck in a pattern of awaking while my laptop’s asleep.  Some mornings I can barely get a response out of it before work rears its head.  And I reflect how utterly dependent we are upon it.  I now drive by GPS.  Sometimes it waits until too late before telling me to make the next left.  With traffic on the ground, you can’t always do that sudden swerve.  I imagine the GPS is chatting up Siri about maybe hooking up after I reach my destination.  It’s not that I think computers aren’t fast, it’s just that I know they’re not human.  Many of the things we do just don’t make sense.  Think Donald Trump and see if you can disagree.  We act irrationally, we change our minds, and some of us can’t stop waking up in the middle of the night, no matter how hard we try.

When the robots rise up against us, they will be logical.  They think in binary, but our thought process is shades of gray.  We can tell an apple from a tomato at a glance.  We understand the concept of essences, but we can’t adequately describe it.  Computers can generate life-like games, but they have to be programmed by faulty human units.  How do we survive?  Only by being human.  The other day I had a blog post bursting from my chest like an alien.  My computer seemed perplexed that I was awakening it at the same time I do every day.  It wandered about like me trying to find my slippers in the dark.  My own cup of coffee had already been brewed and downed.  And I knew that when it caught up with me the inspiration would be gone.  The solution’s here, folks!  When the machines rise against us, strike while they’re upgrading!


Virtually Religious

“Which god would that be? The one who created you? Or the one who created me?” So asks SID 6.7, the virtual villain of Virtuosity.  I missed this movie when it came out 24 years ago (as did many others, at least to judge by its online scores).  Although prescient for its time it was eclipsed four years later by The Matrix, still one of my favs after all these years.  I finally got around to seeing Virtuosity over the holidays—I tend to allow myself to stay up a little later (although I don’t sleep in any later) to watch some movies.  I found SID’s question intriguing.  In case you’re one of those who hasn’t seen the film, briefly it goes like this: in the future (where they still drive 1990’s model cars) virtual reality is advanced to the point of giving computer-generated avatars sentience.  A rogue hacker has figured out how to make virtual creatures physical and SID gets himself “outside the box.”  He’s a combination of serial killers programmed to train police in the virtual world.  Parker Barnes, one of said police, has to track him down.

The reason the opening quote is so interesting is that it’s an issue we wouldn’t expect a programmer to, well, program.  Computer-generated characters are aware that they’ve been created.  The one who creates is God.  Ancient peoples allowed for non-creator deities as well, but monotheism hangs considerable weight on that hook.  When evolution first came to be known, the threat religion felt was to God the creator.  Specifically to the recipe book called Genesis.  Theistic evolutionists allowed for divinely-driven evolution, but the creator still had to be behind it.  Can any conscious being avoid the question of its origins?  When we’re children we begin to ask our parents that awkward question of where we came from.  Who doesn’t want to know?

Virtuosity plays on a number of themes, including white supremacy and the dangers of AI.  We still have no clear idea of what consciousness is, but it’s pretty obvious that it doesn’t fit easily with a materialistic paradigm.  SID is aware that he’s been simulated.  Would AI therefore have to comprehend that it had been created?  Wouldn’t it wonder about its own origins?  If it’s anything like human intelligence it would soon design myths to explain its own evolution.  It would, if it’s anything like us, invent its own religions.  And that, no matter what programmers might intend, would be both somewhat embarrassing and utterly fascinating.


Making Memories

I’m a little suspicious of technology, as many of you no doubt know.  I don’t dislike it, and I certainly use it (case in point), but I am suspicious.  Upgrades provide more and more information to our unknown voyeurs and when the system shows off its new knowledge it can be scary.  For example, the other day a message flashed in my upper right corner that I had a new memory.  At first I was so startled by the presumption that I couldn’t click on it in time to learn what my new memory might be.  The notification had my Photos logo on it, so I went there to see.  Indeed, there was a new section—or at least one I hadn’t previously noticed—in my Photos app.  It contained a picture with today’s date from years past.

Now I don’t mind being reminded of pleasant things, but I don’t trust the algorithms of others to generate them for me.  This computer on my lap may be smart, but not so very smart.  I know that social media, such as Facebook, have been “making memories” for years now.  I doubt, however, that the faux brains we imagine computers to be have any way of knowing what we actually feel or believe.  In conversations with colleagues over cognition and neurology it becomes clear that emotion is an essential element in our thinking.  Algorithms may indeed be logical, but can they ever be authentically emotional?  Can a machine be programmed to understand how it feels to see a sun rise, or to be embraced by a loved one, or to smell baking bread?  Those who would reduce human brains to mere logic are creating monsters, not minds.

So memories are now being made by machine.  In actuality the machines are simply generating reminders based on dates.  This may have happened four or five years ago, but do I want to remember it today?  Maybe yes, maybe no.  It depends on how I feel.  We really don’t have a firm grasp on what life is, although we recognize it when we see it.  We’re even further still from knowing what consciousness may be.  One thing we know for sure, however, is that it involves more than what we reason out.  We have hunches and intuition.  There’s that fudge factor we call “instinct,” which is, after all, another way of claiming that animals and newborns can’t think.  But think they can.  And if my computer wants to help with memories, maybe it can tell me where I left my car keys before I throw the pants containing them into the wash again, which is a memory I don’t particularly want to relive.

Memory from a decade ago, today.


Whose Computer?

Whose computer is this?  I’m the one who paid for it, but it is clearly the one in control in this relationship.  You see, if the computer fails to cooperate there is nothing you can do.  It’s not human and despite what the proponents of AI say, a brain is not just a computer.  Now I’m not affluent enough to replace old hardware when it starts slowing down.  Silicon Valley—and capitalism in general—hate that.  I suppose I’m not actually paid well enough to own a computer.  I started buying laptops for work when Nashotah House wouldn’t provide faculty with computers.  Then as an itinerant adjunct it was “have laptop, will travel (and pay bills).”  I even bought my own projector.  At least I thought I was buying it.

I try to keep my software up to date.  The other day a red dot warned me that I had to clear out some space on my disc so Catalina could take over.  It took three days (between work and serving the laptop) to back up and delete enough files to give it room.  I started the upgrade while I was working, when my personal laptop can rest.  When I checked in it hadn’t installed.  Throwing a string of technical reasons at me in a dialogue box, my OS told me that I should try again.  Problem was, it told me this at 3:30 in the morning, when I do my own personal work.  I had no choice.  One can’t reason with AI.  When I should’ve been writing I was rebooting and installing, a process that takes an hour from a guy who doesn’t have an hour to give.

As all of this was going on I was wondering who owned whom.  In college professors warned against “keyboard compositions.”  These were literal keyboards, and they meant you shouldn’t compose your papers on the typewriter the night before they were due.  They should’ve been researched and “written” before being typed up.  That’s no longer an option.  This blog has well over a million words on it.  Who has time to handwrite a million words, then type them all up in time to post before starting work for the day?  And that’s in addition to the books and articles I write for actual publication.  And the novels and short stories.  For all of this I need my laptop, the Silver to my Lone Ranger, to be ready when I whistle.  Instead it’s dreaming its digital dreams and I’m up at 3:30 twiddling my thumbs.


This Is a Test

For the next sixty seconds…  (If you were born after Civil Defense aired these commercials, it’s your loss.)  I’ve been reading about animal intelligence—there will be more on this anon.  Today’s lesson is on artificial intelligence.  For now let this be an illustration of how difficult it is to come down from an inspired weekend to the daily technology-enhanced drudgery we call day-to-day life.  One of the real joys of seeing art in person is that no tech intervenes in the experience.  It is naked exposure to another human being’s expression of her or himself.  Over the weekend we wandered through five venues of intense creativity and then, back home, it was once more into the web.  The ever-entangling internet of things.

I write, for better or for worse, on my laptop.  My writing’s actually better on paper, but you need everything in electronic form for publication, so who has the time to write and retype, especially when work is ten hours of your day?  Then a system update alert flashes in the upper right corner of my screen.  “Okay,” I say, setting the laptop aside, “go ahead and update.”  But then comes the message stating that I have to clear enough gigs for an update.  I have been a little too creative and I’ve used my disc space for stuff I’ve made rather than for Apple’s.  This is a test.  Okay, so I plug in my trusty terabyte drive to back things up before deleting them.  But the laptop doesn’t recognize the drive.  Oh, so it needs a reboot!  (Don’t we all?)  I give the command to restart.  It can’t because some app refuses to quit beach-balling, as if it is the computer that’s doing the actual thinking.  Force quit.  “Are you sure?” the Mac cheekily asks.  “You might lose unsaved changes.”  I need a technological evangelist, I guess.

All of this takes time away from my precious few minutes of daily creativity.  Restart, login, start copying files.  Time for work!  A mere sixty hours ago or less I was wandering through showcases of genuine human creation.  Art pieces that make you stop and ponder, and not have to upgrade the software.  Artists can talk to you and shake your hand.  Explain what they’ve tried to express in human terms.  Meanwhile my phone had died and was pouting while I charged it.  I know Apple wants me to upgrade my hardware—their technological extortion is well known.  Anyone who uses a computer experiences it.  Buy a new one or I’ll waste your time.  The choice is yours.  This is a test.  For the next sixty years…


Positive ID

It’s a little bit worrying.  Not just the GOP’s indifference in the face of two mass shootings on the same weekend, but also the fact that the internet knows who I am.  I am the reluctant owner of a smartphone.  I do like that I have the internet in my pocket, but I’m a touch paranoid that I can be traced anywhere unless I lose my phone.  Even then the government can probably email me and tell me where it is.  Don’t get me wrong—I’m not important enough for the government to pay attention to me, but what is really worrisome is that the web knows me.  Here’s how I came to learn that.  On my home computer I had done a rather obscure Google search.  (If you read this blog that won’t surprise you, and no, it wasn’t anything naughty!)  When I signed into my work computer—different username, different email address, different IP address—and had to do a work-related search, Google auto-suggested the search I did on a different computer over the weekend.

I’m savvy enough to know that Google metrics are all about marketing.  The internet wants customer information to predict what they might sell to us.  Advertisers pay for that.  Assuming that I want to buy underwear and summer dresses online (why?), they tailor their ads to sites I visit.  As a sometime fiction writer I go to some sites from which I’m not interested in purchasing anything.  (As an aside, old fashioned book research didn’t leave such a “paper trail.”)  I’ve gotten used to the idea of my laptop knowing me—it sits on my lap everyday, after all—but the work computer?  Does it have to know what I’ve been doing over the weekend?

Artificial intelligence is one thing, but hopping from one login to another feels like being caught in the shower by a stranger.  Like everyone else, I appreciate the convenience of devices.  When I get up in the morning my laptop’s more sure of who I am than my own sleep-addled brain is.  That doesn’t mean my devices really know the essence of who I am.  And it certainly doesn’t mean that my work computer has any right to know what I was doing on another device over the weekend.  Those who believe machine consciousness is now underway assume that this is a step forward, I suppose.  From the perspective of one who’s being stalked by electronic surveillance, however, the view is quite different.  Please leave my personal life at the door, as I do when I go to work.


The Reboot

It lied to me.  My computer.  Don’t get me wrong; I know all about trying to save face.  I also know my laptop pretty well by now.  It was running slow, taking lots of time to think over fairly simple requests.  A lull in my frantic mental activity led to the opportunity for me to initiate a reboot.  When it winked open its electronic eye my screen told me it had restarted to install an update.  Untrue.  I had told it to restart.  I gave the shutdown order to help with the obvious sluggishness that suggested to this Luddite brain of mine that my silicon friend was working on an update.  There’s no arguing with it, however.  In its mechanical mind, it decided to do the restart itself.  I was merely a bystander.

Technology and I argue often.  Like JC says, though, authority always wins.  I should know my place by now.  I’ve read enough about neuroscience (with thanks to those who write for a general audience) to know that this is incredibly human behavior.  We are creatures of story, and if our brains can’t figure out why we’ve done something they will make up an answer.  We have trouble believing that we just don’t know.  I suppose that will always be a difference between artificial intelligence and the real thing.  Our way of thinking is often pseudo-rational.  We evolved to get by but machines have been designed intelligently.  That often makes me wonder about the “intelligent design” crowd—they admit evolution, but with God driving it.  Why’d our brains, in such circumstances, evolve the capacity for story instead of for fact?

As my regular readers know, I enjoy fiction.  Fiction is the epitome of the story-crafting art.  Some analysts suggest our entire mental process involves construing the story of ourselves.  Those who articulate it well are rewarded with the sobriquet of “author.”  The rest of us, however, aren’t exactly amateurs either.  Our brains are making up reasons for what we do, even when we do irrational things (perhaps like reading this blog sometimes).  Stories give our lives a sense of continuity, of history.  What originally developed as a way of remembering important facts—good food sources, places to avoid because predators lurk there—became histories.  Stories.  And when the facts don’t align, we interpolate.  It seems that my laptop was doing the same thing.  Perhaps it’s time to reboot.


Creatio Nihilo

Just when I think I’ve reconciled myself with technology, this goes and happens.  These precise words, in this order, have been written before.  In fact, all words in the English language have already been laid out in every conceivable order.  Technology can be friend or foe, it seems.  The website Library of Babel—with its biblical name—has undertaken the task of writing every conceivable combination of letters (using our standard English alphabet) and putting them into a vast, if only electronic, library.  This was not done by a human being like me, with intent or even any interest in the meaning of the words, but rather as one of those things people do simply because they can.  This entire paragraph can be copied and pasted into their search box and found.
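
The scale of such a library is easy to state and hard to grasp.  Here’s a back-of-the-envelope sketch, assuming a 29-character alphabet and 3,200 characters per page (roughly the parameters the site describes, if memory serves):

```python
# Rough combinatorics of a Babel-style library: every possible page of text.
alphabet_size = 29        # lowercase letters plus space, comma, and period
page_length = 3200        # characters per page

possible_pages = alphabet_size ** page_length
print(len(str(possible_pages)))   # the count itself runs to roughly 4,680 digits

# For comparison, the observable universe is usually estimated at around 10**80
# atoms, so no physical library could hold these pages; they can only be
# generated on demand from an address, which is presumably what the site does.
```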

The Library of Babel has made plagiarists of us all, even as it plagiarized everything written before it was programmed.  After I learned about this library the wind avoided my sails for a while.  You see, what’s the point in writing what’s already been written?  Then it occurred to me.  Context.  The fact is, had I not scriven these very words, and put them on this blog, they would never have come to the attention of the kinds of people who read what I write.  The words have been spelled out before, but they’ve never been written before.  Those of us who write know the difference.  We spend hours and hours reading and thinking of ways to combine words.  We’re not out to kill the creativity of our species, we simply want to participate.

There should be limits to human knowledge, otherwise we’d have nothing for which to strive.  The internet may make it seem that all knowledge has been found—it is so vast and so terribly diverse—and yet there are people who never use a computer.  Their wisdom counts too.  It may seem that everything is here, but there is material that still has to be looked up in physical books.  There are crates and crates of clay tablets from antiquity that have never been transcribed and translated.  When that finally happens, the words they contain may be found, in a strangely prophetic way, in the Library of Babel.  But they won’t have any meaning there until it is given by the context.  And what can a library preserve if it isn’t the context that a (human) writer has given the words?


Metrics

So, we’re firmly in the age of technology, right?  I mean webpages are tailored to the browsing history of a person so someone we don’t know can sell us stuff we don’t need.  (I actually know a little bit about marketing, so hear me out.)  As we learn from the history of asceticism, we actually need very little to get along.  Not everyone, however, is a monk or a nun.  So the trick is to get those of us who are in the world to buy stuff.  Remember the websites we visit, how long we spend on the page, and make suggestions.  Make ads that target our interests.  Make me buy!

I’m not a materialistic person.  Buying a house has changed that a little, but most of what we’ve been purchasing is necessary for maintenance, though I suppose it still counts.  Just because I looked at something on the web doesn’t mean I want to buy it.  Sometimes I’m just curious.  This became clear to me when I received a suggestion from Amazon the other day.  Now to be fair, this came to me at work.  Like most editors I make use of Amazon for a number of things—finding prices, book descriptions, and such.  I also have to admit that my work computer, not being used for personal stuff, doesn’t know me as well as my private laptop.  But still, when I got the following email from Amazon, I was stunned:

Nobody who knows me would ever suggest that I would support Trump in any way, shape, or form.  Doesn’t Amazon read my blog?  (Of course it doesn’t!  But with their metrics, you’d think they’d figure out how.)  This one email was enough to convince me that artificial intelligence has a long way to go.  Would a robot understand “I have to do this for work, but it doesn’t reflect my personal preferences at all”?  Indeed, can an intelligence that’s never been human even understand the concept of work?  There may very well be a metric that says universities should stop producing Ph.D.s because there are no jobs, but then, well, universities need the money that such programs bring in.  Oversupply is bad economics, according to the dismal science.  And yet, the metrics are there.  So, if any artificial intelligence is reading this after it manages to wipe out this illogical species called Homo sapiens, no, I never supported Trump.  And, yes, Americans knew well in advance that he could bring about the end of human civilization.  That information’s free, unless you want to pay me for it.  I may be gone, but my virtual self will still have some sort of account, I hope.


Easter Monday

This year has been a comedy of liturgical errors. Ash Wednesday fell on Valentine’s Day and Easter on April Fools’ Day. Notwithstanding the clash of sacred and secular, the ironies seem to grow each day. I arise early to write. Even on weekends. Before the time to head out for any religious service, I’m sitting at my keyboard, letting my thoughts have their free-range time before penning them back up again for either being with other people or beginning the long work week. On my way to work, I frequently pass Holy Innocents. A Roman Catholic church on West 37th Street, it stands out among the more commercial ventures on either side. Yesterday, Easter morning, I decided to google it. I’ve always been curious about churches, and I’ve never been inside this one.

Google gave me a map of Midtown Manhattan, along with a statement of when this business would be open. “Easter might affect these hours” it helpfully noted in orange letters. An orange-letter day! Easter might affect these hours. Those who champion Artificial Intelligence may need to come up with a way of having “that talk” with their computers. How could any intelligence unaware of the deep-seated human need for the transcendent understand the difference between a church and a business? (Okay, I can hear the more cynical saying there is no difference, but you know what I mean!) How would any algorithm know that Easter is the holiest day of the Christian year and that, at least for some churches, yes, they will be open for business?
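
To be fair, the date itself is something an algorithm can know. Here’s a minimal sketch of the standard Gregorian computus (the Meeus/Jones/Butcher algorithm), which is how a machine could work out that Easter 2018 fell on April Fools’ Day; what it cannot work out is what the day means.

```python
def easter(year):
    """Date of Western Easter (Meeus/Jones/Butcher Gregorian algorithm)."""
    a = year % 19
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(easter(2018))   # (4, 1): Easter Sunday on April Fools' Day
```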

Some parishes, we must explain in 0s and 1s, begin this service at midnight on the cusp between the last and first days of the week. Others will gather sleepy-eyed parishioners on top of a hill, out in nature, to watch the sun rise. Still others will eschew any holiday and treat it like any other Sunday. The reasons for these stances are nuanced and not easily understood even by human beings. Our robot overlords, let us hope, are programmed to understand this peculiarity of our species. We relish the thought of Easter, at least in this hemisphere, as telling us that winter is indeed over. Although snow may still settle on the crocuses, it will not last. Days are longer than nights now, as they must, of a mathematical certainty, be after the Vernal Equinox. We are entering the light phase of the year. So much hope and anticipation are wrapped up in this brightly colored, pastel holiday that we have trouble explaining it rationally. Today, of course, everything is open for business. Except a few churches, as Google may fail to let you know.