Call Me AI

Okay, so the other day I tried it.  I’ve been resisting, immediately scrolling past the AI suggestions at the top of a Google search.  I don’t want some program pretending it’s human to provide me with information I need.  I had to find an expert on a topic.  It was an obscure topic, but if you’re reading this blog that’ll come as no surprise.  Tired of running into brick walls using other methods, I glanced toward Al.  Al said a certain Joe Doe is an expert on the topic.  I googled him only to learn he’d died over a century ago.  Al doesn’t understand death because it’s something a machine doesn’t experience.  Sure, we say “my car died,” but what we mean is that it ceased to function.  Death is the overlay we humans put on it to understand, succinctly, what happened.

Brains are not computers and computers do not “think” like biological entities do.  We have feelings in our thoughts.  I have been sad when a beloved appliance or vehicle “died.”  I know that for human beings that final terminus is kind of a non-negotiable fact of existence.  Animals often recognize death and react to it, but we have no way of knowing what they think about it.  Think they do, however.  That’s more than we can say about ones and zeroes.  They can be made to imitate some thought processes.  Some of us, however, won’t even let the grocery store runners choose our food for us.  We want to evaluate the quality ourselves.  And having read Zen and the Art of Motorcycle Maintenance, I have to wonder if “quality” is something a machine can “understand.”

Wisdom is something we grow into.  It only comes with biological existence, with all its limitations.  It is observation, reflection, evaluation, based on sensory and psychological input.  What psychological “profile” are we giving Al?  Is he neurotypical or neurodivergent?  Is he young or does his back hurt when he stands up too quickly?  Is he healthy or does he daily deal with a long-term disease?  Does he live to travel or would he prefer to stay home?  How cold is “too cold” for him to go outside?  These are things we can process while making breakfast.  Al, meanwhile, is simply gathering data from the internet—that always reliable source—and spewing it back at us after reconstructing it in a non-peer-reviewed way.  And Al can’t be of much help if he doesn’t understand that consulting a dead expert on a current issue is about as pointless as trying to replicate a human mind.


Major Drum

We don’t get out much.  Live shows can be expensive and these cold nights don’t exactly encourage going out after dark.  Living near a university, even if you can’t officially be part of it, has its benefits, though.  Over the weekend we went to see Yamato: The Drummers of Japan.  Our daughter introduced us to the concept while living in Ithaca, a town that has a college or two, I hear.  These drummer groups create what might be termed a sound bath, one that is profoundly musical while featuring mainly percussion.  Now, I can’t keep a beat for too long—I’m one of those guys who overthinks clapping in time—but that doesn’t mean I can’t appreciate those who can.  The timing of the members of Yamato was incredibly precise, and moving.  At times even funny.  It’s a show I’d definitely recommend.

This particular tour is titled “Hito No Chikara: The Power of Human Strength.”  Now this isn’t advertising their impressively well-toned bodies, but is a celebration of the human spirit under fire from AI.  The program notes point out some recurring themes of this blog: to be human is to experience emotion, to know physical limitations, and to be truly creative.  Would a non-biological “intelligence” think to wrap dead animal skins around hollowed-out tree trunks, pound them with sticks, and encourage hundreds of others to experience the emotions that accompany such things?  I live in a workaday world that thinks AI is pretty cool.  Humans, on the other hand, can say “I don’t know” and still play drums until late in the night.  We know the joy of movement.  The exhilaration of community.  I think I can see why they titled their show the way they did.

Bowerbirds will create nests that can only be called intentionally artful.  Something in biological existence helps us appreciate what they’re doing and respond in wonder.  Theirs is an innate appreciation for art.  It spans the animal world.  Japan is one of many places I’ve never been.  I’ve never played in any kind of band and you don’t want me setting time for your pacemaker.  If a computer keeps such precise timing we think nothing of it.  It’s part of what humans created them to do.  When a group of people gets together, stretching their muscles and working in perfect synchronicity, we sit up and take notice.  We’ll even pay to watch and hear them do it.  Art, in all its forms, is purely and profoundly biological.  And it is something we know, at our best, to appreciate with our emotions and our minds.


Steve or Stephanie

I know gender is a construct, and all.  I even put my pronouns (he, him, his) on my work email signature.  I haven’t bothered on my personal email account since so few people email me that the effort seems superfluous.  But I’m wondering if the tech gods, aka AI, understand.  You see, with more and more autosuggestions (which really miss the point much of the time), the Microsoft Outlook email system at work is constantly trying to fill things in for me.  Lately AI, which I call Al, has been trying to get me to sign my name with an “@” so people can “text” me a response.  No.  No, no, no!  I write emails like letters: greeting, body, closing.  People who email like they’re texting sound constantly disgruntled and surly.  Take an extra second and ask “How are you?”  Was that so hard?  But I was talking about gender.

So Al is busy putting words in my fingers and every time I start typing my closing name it autosuggests “Stephanie” before I correct it.  It’s starting to make me a little paranoid.  It does seem that men and women differ biologically, and I identify with the gender assigned to me at birth.  I’m pretty sure Dr. Butter said “It’s a boy,” or something similar all those years ago.  Now I’m not sure if Al is deliberately taunting me or simply going through the alphabet as I type.  Stephanie comes before Stephen (which isn’t my name either) or Steve.  The thing is, I type fairly fast (I won’t say accurately, but fast) and Al has trouble keeping up.  But still Al is autosuggesting Stephanie for me every time.  I’ve been using computers since the 1980s; shouldn’t Al know who I am by now?

Of course, when Al takes over, such human things as gender will only get in the way.  I guess we have that to look forward to.  Gender may be something socialized, I realize.  For those of us approaching ancient, we had gender differences drilled into our heads growing up.  I recently saw one of those cutesy novelty signs that resonated with me: “Please be patient with me, I’m from the 1900s.”  I’m not a sexist—I have supported feminism for as long as I can remember.  But I don’t like being called Stephanie.  What if my name were Stefan?  That isn’t autosuggested at all.  I know of others whose names come even earlier, alphabetically.  Maybe Al is overreaching.  Maybe it ought to leave names to humans.  At least for as long as we’re still here.


Finding Poe

A gift a friend gave me started me on an adventure.  The gift was a nice edition of Poe stories.  It’s divided up according to different collections, one being Tales of the Grotesque and Arabesque.  This was originally the title of a collection of 25 stories selected by Poe himself in 1840.  I realized that much of my exposure to Poe was through collections selected by others, such as Tales of Mystery and Terror, never published by Poe in that form.  I was curious to see what Poe himself saw as belonging together.  I write short stories and I’ve sent collections off several times, but with no success at getting them published.  I know, however, what it feels like to compile my own work and the impact that I hope it might have (if it ever gets published).  Now, finding a complete edition of Tales of the Grotesque and Arabesque turned out to be more difficult than expected.

Amazon has copies, of course.  They are apparently all printed from a master PDF somewhere, since they’re all missing one of the stories.  The second-to-last tale, “The Visionary,” is the one that’s gone.  I searched many editions, using the “read sample” feature on Amazon.  They all default to the Kindle edition with the missing tale.  I even looked elsewhere (gasp!) and found that an edition published in 1980 contained all the stories.  I put its ISBN into Amazon’s system and the “read sample” button pulled up the same faulty PDF.  Considerable searching led me to a website that actually listed the full contents of the 1980 edition I’d searched out, and I discovered that, contrary to Amazon, the missing piece was there.  I tried to use ratiocination to figure it out.

I suspect that someone, back when ebooks became easy to make, hurriedly put together a copy of Tales of the Grotesque and Arabesque.  They missed a piece and never stopped to count: Poe’s preface says 25 tales are included, but only 24 made it in.  Other hawkers (anyone may print and sell material in the public domain, and even AI can do it) simply made copies of the original faulty file and sold their own editions.  Amazon, assuming that the same title by the same author will have the same contents, and wishing to drive everyone to ebooks (specifically Kindle), offers its own version of what it thinks is the full content of the book.  This is more than buyer beware.  This is a snapshot of what our future looks like when AI takes over.  I ordered a used print copy of the original edition, complete with the missing story.  At least when the AI apocalypse takes place I’ll have something to read.


Doing Without

I’m a creature of habit.  Although I’m no internet junkie (I still read books made of paper), I’ve come to rely on it for how I start my day.  I get up early and do my writing and reading before work.  I generally check my email first thing, and that’s where something went wrong.  No internet.  We’ve been going through one of those popular heat waves, and a band of thunderstorms (I tried to check on their progress so I could see if it was okay to open the windows, but wait—I need the internet to do that) had rolled through three hours ago, at about midnight.  Maybe they’d knocked out power?  The phone was out too, so I had to call our provider on my cell.  The robovoice cheerily told me there was a service outage and that for updates I could check their website.  Hmmm.

I can read and write without the internet.  I’m on Facebook for, literally, less than two minutes a day.  I stop long enough to post my blog entry and check my notices.  I hit what used to be Twitter a few times a day, but since people tend to communicate (if they do) via email, that’s how the day begins.  This morning I had no internet and I wondered how tech giants would live without it.  I’m no fan of AI.  I use technology and I believe it has many good points, but mistaking it for human—or thinking that human brains are biological computers—flies in the face of all the evidence.  Our brains evolved to help our biological bodies survive.  And more.  The older I get the more I’m certain that there’s a soul tucked in there somewhere too.  Call it a mind, a psyche, a spirit, a personality, or consciousness itself, it’s there.  And it’s not a computer.

Our brains rely on emotion as well as rationality.  How we feel affects our reality.  Our perspective can change a bad situation into a good one.  So I’m sitting here in my study, sweating since, well, heat wave.  It was storming just a few hours ago and I can’t check the radar to see if the system has cleared out or not.  What to do?  Open the windows.  I’ll feel better at any rate.  And in case the coffee hasn’t kicked in yet, “open the windows” is a metaphor as well as a literal act on my part.  And I don’t think AI gets metaphors.  At least not without being told directly.  And they call it “intelligence.”

Photo by Chris Barbalis on Unsplash

Passing Words

I’ve never counted, but it must be dozens.  Maybe a hundred.  And they have very high memory requirements.  Especially for a guy who can’t recall why he walked into a room half the time.  I’m talking passwords.  The commandments go like this:

You can’t use the same password for more than one system/platform/device/account

You can’t tell anyone your password (duh!)

You can’t write it down

You can’t send your password to someone electronically (duh!)

You must log off your device when it’s unattended

You will be held responsible for anything done under your login

The word of the Lord.

Now, how much more ageist can you get?  I’ve never counted the number of passwords I’ve had to generate for work alone, but I can’t remember much without writing things down.  Even the chores after work.  I hear that there are “keychains” you can get that remember your passwords for you.  I suspect you need a password to access your passwords.  Then the commandments above apply all over again.

I know internet security is serious business.  My objection is that you’re not supposed to write any of this down.  I carry a notebook around with me (it has no passwords, so please don’t try to steal it) to keep track of everything from doctors’ orders to how to call the plumber if there’s a leak.  I can’t remember all that stuff.  Some of it is personal information, but expecting us to hold everything in memory these days—at the same time we’re unleashing AI on the world—is madness.

A friend pointed out that AI books are written without authors.  My response was “AI has great potential, but let’s leave the humanities to humans.”  I hope I’m remembering that correctly, because I thought it clever at the time.  I wish I’d written it down.  Those who make the rules about passwords aren’t as close to their expiration date as I am.  My grandmother was born before heavier-than-air flight took place and died after we’d landed on the moon.  Guys my age regale their kids (and some, their grandkids) by telling them telephones used to be attached to walls and you could walk away from technology at will.  Now it follows you.  Listens to you even when you’re not talking to it—our car frequently interjects itself into our conversations.  At least she isn’t asking for a password while I’m driving.  I couldn’t write it down.  Our love affair with technology is also driving.  More often than we suppose.  It’s driving me too… driving me crazy.


To Their Own Devices

This one’s so good that it’s got to be a hoax.  One of the upsides to living under constant surveillance is that a lot of stuff—weird stuff—is caught on camera.  I admit to dipping into Coast to Coast once in a while.  (This show, originally a radio program [Coast to Coast AM], was well known for paranormal interests long before Mulder and Scully came along.)  It was there that I learned of a viral video showing devices praying together during the night in Mexico City.  The purported story is that a security guard in a department store came upon electronic devices reciting the Chaplet of the Divine Mercy.  One device seems to be leading the other devices in prayer.  Skeptics have pointed out that this could’ve been programmed in advance as a kind of practical joke on the security guard, but it made me wonder.

I’m no techie.  I can’t even figure out how to get back to podcasting.  I do, however, enjoy the strange stories of electronic “consciousness.”  I use the phrase advisedly since we don’t know what human, animal, and plant consciousness is.  We just know it exists.  I am told, by those who understand tech better than I do, that computers have been discovered “conversing” with each other in a secret language that even their programmers can’t decipher.  And since devices don’t follow our sleep schedules, who knows what they might get up to in the middle of the night when left to their own devices?  Why not hold a prayer service?  The people they surveil all day do such things.  Since the video hit the web not long before Easter, with its late-night services, it kind of makes sense in its own bizarre way.

As I say, this seems to be one of those oddities that is simply too good to be true.  But still, when I’m driving along chatting with my family in the car, some voice-recognition software will sometimes join in with a non sequitur.  As if it just wants to do what humans do.  I don’t mean to be creepy here, but it may be that playing Pandora with “artificial intelligence” is dicey when we can’t define biological intelligence.  I’ve said before that AI doesn’t understand God talk.  But if AI is teaching itself by watching what humans post—which is just about everything that humans do—maybe it has learned to recite prayers without understanding the underlying concepts.  Human beings do so all the time.

Let us pray


Verb Choice

I can’t remember who started it.  Somehow, though, when I watch movies on Amazon Prime, the closed captioning kicks in.  I generally don’t mind this too much since some dialogue is whispered or indistinct.  I also presume some kind of AI does it and it makes mistakes.  That’s not my concern today, however.  Today it’s word choice.  Humans of a certain stripe are good at picking the correct verb for an action.  I’ve been noticing that the closed captions often select the wrong word and it distracts me from the movie.  (Plus, they include some diegetic sounds but not others, and I wonder why.)  For example, when a character snorts (we’re all human, we know what that is), AI often selects “scoffs.”  Sometimes snorting is scoffing, but often it’s not.  Maybe it’s good the robots don’t pick up on the subtle cues.

This isn’t just an AI problem—I first noticed it a long time ago.  When our daughter was young we used to get those Disney movie summary books with an accompanying cassette tape (I said it was a long time ago) that would read the story.  Besides the fact that they ruined a few movies for me, I sometimes found the verb choices wrong.  For example, in Oliver (which I saw only once), the narrator at one point boldly proclaims that “Fagin strode into the room.”  Fagin did not stride.  A stride is not the same thing as a shuffle or a slump.  Words have connotations.  They’re easily found in a dictionary.  Why do those who produce such things not check whether their word choice accurately describes the action?

So when I’m watching my weekend afternoon movies, I want the correct word to appear in the closed captioning.  Since the nouns generally occur in the dialogue itself, it’s the verbs that often appear off.  Another favorite AI term is “mock.”  Does a computer know when it’s being mocked?  Can it tell the scoff in my keystrokes?  Does it have any feelings so as to care?  AI may be here to stay, but human it is not.  I’ve always resented it a bit when some scientists have claimed our brains are nothing but computers.  We’re more visceral than that.  We evolved naturally (organically) and had to earn the leisure to sit and make words.  Then we made them fine.  So fine that we called them belles lettres.  They can be replicated by machines, but they can’t be felt by them.  And I have to admit that a well-placed snort can work wonders on a dreary day.


Call Me AI

Let’s call them Large Language Models instead of gracing them with the exalted title “artificial intelligence.”  Apparently, they have great potential.  They can also be very annoying.  For example, during a recent computer operating system upgrade, Macs incorporated LLM technology into various word processing programs.  Some people probably like it.  It might save some wear and tear on your keyboard, I suppose.  Here’s what happens: you’re innocently typing along and your LLM anticipates and autocompletes your words.  I have to admit that, on the rare occasions that I text, I find this helpful.  I rarely text because I despise brief communiqués that are inevitably misunderstood.  When I’m writing long form (my preference), I don’t like my computer guessing what I’m trying to say.  Besides, I type faster than its suggestions most of the time.

We have chosen convenience over careful thought.  How many times have I been made to feel bad because I’ve misunderstood a message thumbed in haste, or even an email sent as if it were a text?  More than I care to count.  LLMs have no feelings.  They don’t understand what it is to be human, to be creative.  Algorithms are only a small part of life.  They have no place on a creative’s desktop.  I even thought that I should choose a different word from the one suggested every single time, just to see what this feisty algorithm might do.  Even now I find that sometimes it has no idea where my thoughts are going.  Creative people experience that themselves from time to time.

Certain sequences of words suggest the following word.  I get that.  The object of creative writing, however, is to subvert that in some way.  If we knew just which way a novelist would go every time, why would we bother reading their books?  LLMs thrive on predictability.  They have no human experience of family tensions or heavy disappointments or unexpected elations.  We, as a species, tend to express ourselves in similar ways when such things happen, and certain words suggest themselves when a sequence of letters falls from our fingers.  LLMs diminish us.  They imply that our creative wordplay is but some kind of sequence of 0s and 1s that can be tamed and stored in a box.  I suppose that for someone who has to write—say, a work or school report—such a thing might be a boon.  It’s not, however, the intelligence that it claims to be.


Quick Writing

On the very same day I saw two emails that began with phrases indicating they had clearly been typed like texts.  One began “Hell all.”  This was a friendly message from a friendly person sent to a friendly group, and I’m pretty sure the final o dropped off the first word.  The second seemed to have AI in mind as it read “Thank you bot.”  It was sent from a phone to two individuals (or androids?).  There’s a reason I don’t text.  Apart from being cheap and having to pay for each text I receive or send, that is.  The reason is that it’s far too easy to misunderstand when someone is trying to dash something off quickly.  Add to that the AI tendency to think it knows what you want to say (I’m pretty sure it has difficulty guessing, at least in my case, and likely in yours, too) and errors occur.  We write to each other in order to communicate.  If we can’t do it clearly, it’s time to ask why.

Those who email as if they’re texting—short, abrupt sentences—come across as angry.  And an angry message often inspires an angry response.  Wouldn’t it make more sense to slow down a bit and express what you want to say clearly?  We all make typos.  Taking the time to email is no guarantee that you’ll not mess something up in your message.  Still, it helps.  I think back to the days of actual letter writing.  Those who were truly cultured copied out the letter (another chance to check for errors!) before sending it.  There were misunderstandings then, I’m sure, but I don’t think anyone was suggesting someone else was a robot.  Or cussing at them from word one.

The ease of constant communication has led to its own set of complications.  Mainly, it seems to me, because abbreviated communication has become so terribly common, opportunities for misunderstanding have increased exponentially.  I’m well aware that I’ll be accused of being “old school,” if not downright “old fashioned,” but if life’s become so busy that we don’t have time for other people, isn’t it time to slow down a bit?  Technology’s become the driver and it doesn’t know where the hello we want to go.  The other day I forgot where I put my phone.  I signed on for work but couldn’t get started because it requires two-step authentication.  Try to walk away from your phone.  I dare you.  Thank you bot, indeed.


Look It Up

Does anybody else find the internet too limiting?  I regularly find that what I’m searching for flummoxes even Google.  The internet doesn’t encompass all of reality, I guess.  For example, the other day I encountered the word “evemerized.”  Even Google vociferously insisted that I meant to search for “euhemerized,” which is a different thing.  It did, however, reluctantly give me a couple of websites that use, and even define, the word.  What is it that the search engines are not showing us?  Oftentimes in my searching I admit to being at fault.  I don’t know the correct string of words to use to get algorithms to understand me.  I guess I’ll be one of those up against the wall when AI takes over.  “Does not compute,” it will say in its sci-fi robot voice.

Some of us still like to unplug and pick up a real book.  Or take a walk in the woods.  I do have to admit, however, I wouldn’t complain if the internet could find a way to mow my lawn.  (I don’t mean giving me a list of those companies that haul around inverted-helicopter mowers that make every summer morning sound like Apocalypse Now.  “I love the smell of cut grass in the morning.”)  I am, and hope I always will be, a seeker.  I’m aware that our brains did not evolve to find “the Truth,” but I’m compelled to keep looking in any case.  There’s so much in this world and we’ve tried to distill it to what you can accomplish with a keyboard and a screen.  And even with those I can’t find what I’m looking for in this virtual collective unconscious that we call the web.  There are others better than me at web searching, I’m certain of it.

Despite our current understanding of the virtue of curiosity, there have been periods of history (and pockets still exist now) when religions have presented curiosity as evil.  This is generally the case with revealed religions that invest a great deal in having the truth delivered to them tied up with a bow.  I can’t believe in a deity that created curiosity as a sin.  Early explorers of religion exhibited curiosity—if Moses hadn’t wondered what that burning bush was, no Bible would ever have been written.  Of course, the internet didn’t exist in those days and seeking was, perhaps, a little bit simpler.  Even if Moses was evemerized.

Moses gets curious

Generation Tech

You can’t be lazy in a technocracy.  I find myself repeating this mantra when dealing with many people who use technology only when strictly necessary.  They don’t realize the war has already been lost.  If you want to thrive in this new world order, you need to keep up with technology, at least a modicum.  I deal with a lot of people for whom biblical studies means handling only pens and paper.  J. C. L. Gibson, one of my doctoral advisors, wrote all his books longhand and had his secretary type them.  That’s simply no longer possible.  For authors, if you’re not willing to put notice of your books on Facebook, Twitter (or, as it seems to be going, Threads), people aren’t going to notice.  Publishers don’t send print catalogues any more.  My physical mailbox has been quite a bit less used of late.

There’s an irony to the fact that the generation that grew up on Bob Dylan’s “The Times They Are a-Changin’” is now refusing to accept our robo-overlords.  AI is here to stay and, shy of a total collapse of the electrical grid, we’re not going back to where we were in the sixties.  The times have a-changed.  And you know what Bob says to do if you can’t lend an appendage.  Now, if you read my blog regularly, you know that I don’t go into this future with a sincere smile.  But at least I try to keep up with what I need in order to survive.  I have to stop and remind myself how to write a check.  Or fold a roadmap.  I suspect that many of those who object to doing academic business electronically also drive by GPS.  It beats getting lost.

How does this connect to the internet?

No, I’m not the first in line.  I still wouldn’t be using a headset for Zoom/Teams meetings if my wife hadn’t given me an old one of hers.  This despite the fact that I complain I can’t hear others who insist they can speak clearly without one, but whose voices are muffled by the echoes of their work-at-home rooms.  Nevertheless, if you want to be a professional of any stripe, you need to reconcile yourself to technology and its endless changes.  You wake up one morning and Twitter is now X and you find yourself xing rather than tweeting.  I need to get more followers on Threads, but you can’t do that on your laptop—I guess times are still a-changin’.


Who Are We?

I wonder who I am.  Beyond my usual existential angst, I tried to access some online learning modules at work only to have so many barriers thrown up that I couldn’t log in.  Largely it’s because I have an online presence (be it ever so humble) outside of work.  Verification software wants to send codes to my personal email and my company has a policy against running personal emails on work computers.  Then they want to send a phone verification, but I don’t have a work cell.  I don’t need one and I have no desire to carry around two all the time because I barely use the one I have.  By the way, my cell does seem to recognize me most of the time, so maybe I should ask it who I am.

Frustrated at the learning module, I remembered that we’d been asked to explore ChatGPT for possible work applications.  I’d never used it before, so I had to sign up.  I shortly ran into the very same issue.  I couldn’t verify through my personal phone, and I found myself in the ironic position of having an artificial intelligence asking me to verify that I was human!  I know ChatGPT is not, but I do suspect it might be a politician, given all the red tape it so liberally used to get me to sign in.  Not that I plan to use it much—I was simply trying to do what a higher-up at work had asked me to do.  So now my work computer seems to doubt my identity.  I don’t doubt its identity—I can recognize the feel of its keyboard even in the dark.  And the way my right hand gets too hot from the battery on sweltering summer days.  It’s an unequal relationship.

My personal computer, which isn’t as paranoid as the work computer, seems to accept me for who I say I am.  I try to keep passwords secure and complex.  I have regular habits—at least most days.  I should be a compatible user.  I don’t want ChatGPT on my personal space, however, since I’m not sure I trust it.  I did try to log into the learning module on my laptop but it couldn’t be verified by the work server (because the computer’s mine, I expect).  Oh well, I didn’t really feel like chatting anyway.  But I did end the day with a computer-induced identity crisis.  If you know who I am, please let me know in the comments.  (You’ll have to authenticate with WordPress first, however.)


Surviving AI

A recent exchange with a friend raised an interesting possibility for me.  Theology might just be able to save us from Artificial Intelligence.  You see, it can be difficult to identify AI.  It sounds so logical and rational.  But what can be more illogical than religion?  My friend sent me some ChatGPT responses to the story I posted on Easter about the perceived miracle in Connecticut.  While the answers it gave sounded reasonable enough, it was clear that it doesn’t understand religion.  Now, if I’ve learned anything from reading books about robot uprisings, it’s that you need to focus on the sensors—that’s how they find you.  But if you don’t have a robot to look at, how can you tell if you’re being AIed?

You can try this on a phone with Siri.  I’ve asked questions about religion before, and usually she gives me a funny answer.  The fact is, no purely rational intelligence can understand theology.  It is an exercise uniquely human.  This is kind of comforting to someone such as yours truly who’s spent an entire lifetime in religious studies.  It hasn’t led to fame, wealth, or even a job that I particularly enjoy, but I’ll be able to identify AI by engaging it with the kind of conversation I used to have with Jehovah’s Witnesses at my door.  What does AI believe?  Can it explain why it believes that?  How does it reconcile that belief with the contradictions that it sees in daily life?  Who is its spiritual inspiration or model or teacher?

There are few safe careers these days.  Much of what we do is logical and can be accomplished by algorithms.  Religion isn’t logical.  Even if mainstream numbers are dipping, many Nones call themselves spiritual, but not religious.  That still works.  We’ve all done something (or many somethings) out of an excess of “spirit.”  Whether we classify the motivation as religious or not is immaterial.  Theologians try to make sense of such things, but not in a way that any program would comprehend.  I’m sure that there are AI platforms that can be made to sound like a priest, rabbi, or preacher, but as long as you have the opportunity to ask them questions, you’ll be able to know.  And right quickly, I’m supposing.  It’s nice to know that all those years of advanced study haven’t been wasted.  When AI takes over, those of us who know religion will be able to tell who’s human and who’s not.

What would AI make of this?

Actual Intelligence (AI)

“Creepy” is the word often used, even by the New York Times, regarding conversations with AI.  Artificial Intelligence gets much of its data from the internet and I like to think that, in my own small way, I contribute to its creepiness.  But, realistically, I know that people in general are inclined toward dark thoughts.  I don’t trust AI.  Actual intelligence comes from biological experience that includes emotions (which we don’t understand and therefore can’t emulate with mere circuitry) as well as rational thought.  AI engineers somehow think that some Spock-like approach to intelligence will lead to purely rational results.  In actual fact, nothing is purely rational since reason is a product of human minds and it’s influenced by—you guessed it—emotions.

There’s a kind of arrogance associated with human beings thinking they understand intelligence.  We can’t adequately define consciousness, and the jury’s still out on the “supernatural.”  AI is, therefore, the result of cutting out a major swath of what it means to be a thinking human being, and then claiming it thinks just like us.  The results?  Disturbing.  Dark.  Creepy.  Those are the impressions of people who’ve had these conversations.  Logically, what makes something “dark”?  Absence of light, of course.  Disturbing?  That’s an emotion-laden word, isn’t it?  Creepy certainly is.  Those of us who wander around these concepts are perhaps better equipped to converse with that alien being we call AI.  And if it’s given a robot body, we know that it’s time to get the heck out of Dodge.

I’m always amused when I see recommendations for me from various websites where I’ve shopped.  They have no idea why I’ve purchased various things, even though I know they watch me like a hawk.  And why do I buy the things I do, when I do?  I can’t always tell you that myself.  Maybe I’m feeling chilly and that pair of fingerless gloves I’ve been thinking about for months suddenly seems like a good idea.  Maybe because I’ve just paid off my credit card.  Maybe because it’s been cloudy too long.  Each of these stimuli bears emotional elements that weigh heavily on decision making.  How do you teach a computer to get a hunch?  What does AI intuit?  Does it dream of electric sheep, and if so, can it write a provocative book by that title?  Millions of years of biological evolution led to our very human, often very flawed brains.  They may not always be rational, but they can truly be a thing of beauty.  And they cannot be replicated.

Photo by Pierre Acobas on Unsplash