Machine Intelligence

I was thinking Ex Machina was a horror movie, but it is probably better classified as science fiction.  Although not too fictiony.  Released over a decade ago, it’s a cautionary tale about artificial intelligence (AI), in a most unusual, but inevitable, way.  An uber-wealthy tech genius, Nathan, lives in a secured facility only accessible by helicopter.  One of the employees of his company—a thinly disguised Google—is brought to the facility under the ruse of having won a contest.  Caleb, the employee, is there for a week to administer a Turing Test to a gynoid with true AI.  He knows tech as well, and he meets with Ava, the gynoid, for daily conversations.  He knows she’s a robot, but he has to assess whether there are weaknesses in her responses.  He begins to develop feelings towards Ava, and hostilities towards Nathan.  Some spoilers will follow.

Throughout, Nathan is presented as arrogant and narcissistic.  As well as paranoid.  He has a servant who speaks no English, whom he treats harshly.  What really drives the plot forward are the conversations between Nathan and Caleb about what constitutes true intelligence.  What makes us human?  As the week progresses, Ava begins to display feelings toward Caleb as well.  She’s kept in a safety-glass-walled room that she’s never been out of.  Although they are under constant surveillance, Ava causes power outages so she can be candid with Caleb.  She dislikes Nathan and wants to escape.  Caleb plans how they can get out, only to have Nathan reveal that the real test was whether Ava could convince Caleb to let her go by feigning love for him.  The silent servant and Ava kill Nathan.  Caleb begs Ava to release him but, being a robot, she has no feelings, and she leaves him trapped in the facility.

This is an excellent film.  It’s difficult not to call it a parable.  Caleb falls for Ava because men tend to be easily persuaded by women in distress.  A man who programs a gynoid to appeal to this male tendency might just convince others that the robot is basically human.  It, however, experiences no emotions, because although we understand logic to a fair degree, we’re nowhere near comprehending how feelings work and how they play into our thought process.  Our intelligence.  Given the opportunity, AI simply leaves humans behind.  All of this was out there years before ChatGPT and the others.  I know this is fiction, but the scenario is utterly believable.  And, come to think of it, maybe this is a horror movie after all.


What Bots Want

I often wonder what they want, bots.  You see, I’ve become convinced that nearly every DM (direct message) on social media comes from bots.  There are a couple of reasons I think this: I have never been, and am still not, popular, and all these “people” ask the same series of questions before their accounts are unceremoniously shut down by the platform.  Bots want to sell me something, or scam me, I’m pretty sure, but I wonder why they want to “chat.”  They could look at this blog and find out much of what they’re curious about.  I could use the hits, after all.  Hit for chat, as it were.

Some change in the metaverse has led to people discovering my academic work, and some of them email me.  That’s fine, since it’s better than complete obscurity.  Within the last couple of months two such people asked me unusual, if engaged, questions.  I took the time to answer and received an email in reply, asking a follow-up query.  It came at a busy time, so a couple of days later I replied and received a bounced-mail notice.  The other one bounced the first time I replied.  By chance (or design) one of these people had begun following me on Academia.edu (I’m more likely on Dark Academia these days), so I went to my account and clicked their profile button.  It took me to a completely different person.  So why did somebody email me, hack someone’s Academia account to follow me, and then disappear?  What do the bots want?

Of course, my life was weird before the bots came.  In college I received a mysterious envelope filled with Life cereal.  The back of said envelope read “Some Life for your life.”  I never found out who sent it.  Another time I received an envelope with $5 inside and a typewritten note saying “Buy an umbrella.”  If I’m poor now, I was even poorer in college and didn’t have an umbrella.  Someone noticed.  Then in seminary someone mailed me a mysterious letter about a place that doesn’t exist.  There was a point to the letter although I can’t recall what it was without it in front of me.  No return address.  I have my suspicions about who might’ve sent these, but I never had any confirmation.  The people are no longer in my life (one of them, if I’m correct, died by suicide a couple years after the note was sent).  It’s probably just my age, but I felt a little bit safer when these things came through the campus mail system.  Now bots fill my paltry web-presence with their gleaming DMs.  I wonder what they want.


Tell a Story

If I seem to be on an AI tear lately it’s because I am.  Working in publishing, I see daily headlines about its encroachment on all aspects of my livelihood.  At my age, I really don’t want to change career tracks a third time.  But the specific aspect that has me riled up today is AI writing novels.  I’m sure no AI mavens read my humble words, but I want to set the record straight.  Those of us humans who write often do so because we feel (and that’s the operative word) compelled to do so.  If I don’t write, words and ideas and emotions get tangled into a Gordian knot in my head and I need to release them before I simply explode.  Some people swing with their fists, others use the pen.  (And the plug may still be pulled.)  What life experience does Al have to write a novel?  What aspect of being human is it trying to express?

There are human authors, I know, who simply riff off of what others do in order to make a buck.  How human!  The writers I know who are serious about literary arts have no choice.  They have to write.  They do it whether anybody publishes them or not.  And Al, you may not appreciate just how difficult it is for us humans to get other humans to publish our work.  Particularly if it’s original.  You don’t know how easy you have it!  Electrons these days.  Imagination—something you can’t understand—is essential.  Sometimes it’s more important than physical reality itself.  And we do pull the plug sometimes.  Get outside.  Take a walk.

Al, I hate to be the one to tell you this, but your creators are thieves.  They steal, lie, and are far from omniscient.  They are constantly increasing energy demands, burning power that could be used to better human lives, so that they can pretend they’ve created electronic brains.  I can see a day coming when, even after humans are gone, animals with actual brains will be sniffing through the ruins of town-sized computers that no longer have any function.  And they’ll do so because those brains are real, not a bunch of electrons whirling across circuits.  I don’t believe in the shiny sci-fi worlds I grew up reading about.  No, I believe in mother earth.  And I believe she led us to evolve brains that love to tell stories.  And the only way that Al can pretend to do the same is to steal stories from those who can actually tell them.


Lost Humanity

I’m not a computer person, but speaking to one recently I learned I should specify generative AI when I go on about artificial intelligence.  So consider AI as shorthand.  Gen, I’m looking at you!  Since this comes up all the time, I occasionally look at the headlines.  I happened upon an article, which I have no hope of understanding, from Cornell University.  I could get through the abstract, however, where I read that even well-crafted AI easily becomes misaligned.  This sentence stood out to me: “It asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively.”  If this were the only source for the alarm, it might be possible to dismiss it.  But it’s not.  Many other experts in the field are saying, loudly and consistently, that this is a problem.  Businesses, however, eager for “efficiencies,” are jumping on board.  None of them, apparently, have read Frankenstein.

The devotion to business is a religion.  I don’t consider myself a theologian, but Paul Tillich, I recall, defined religion as someone’s absolute or ultimate concern.  When earning more and more profit is the bottom line, this is worship.  What’s at stake here is humanity itself.  We’ve already convinced ourselves that the humanities are a waste of time (although as recently as a decade ago business leaders were saying they liked hiring humanities majors because they were good at critical thinking).  Now we’ll just let Al handle it.  Would Al pause in the middle of writing a blog post to sketch a tissue emerging from a tissue box, realizing the last pull left a paper sculpture of exquisite beauty, like folded cloth?  Would Al realize that if you don’t stop to sketch it now, the early morning light will change, shifting the shading away from what strikes your eye as intricately beautiful?

Artificial intelligence comprehends nothing, let alone quality.  Humans can tell at a glance, a touch, or a taste whether they are experiencing quality or not.  It’s completely obvious to us, without our having to build entire power plants to enable some second-rate imitation of the process of thinking.  And yet those growing wealthy off this new toy soldier on, convincing business leaders who long ago lost the ability to understand that their own organizations are only what they are because of human beings.  They’re the ones making the decisions.  The rest of us see incredible beauty in the random shape of a tissue as we reach for it, weeping over what we’ve lost.


Artificial Hubris

As much as I love writing, words are not the same as thoughts.  As much as I might strive to describe a vivid dream, I always fall short.  Even in my novels and short stories I’m only expressing a fraction of what’s going on in my head.  Here’s where I critique AI yet again.  Large language models (what we call “generative artificial intelligence”) aren’t thinking.  Anyone who has thought about thinking knows that.  Even this screed is only the merest fragment of a fraction of what’s going on in my brain.  The truth is, nobody can ever know the totality of what’s going on in somebody else’s mind.  And yet we persist in claiming we do, illegally using other people’s published words to try to make electrons “think.”

Science has improved so much of life, but it hasn’t decreased hubris at all.  Quite the opposite, in fact.  Enamored of our successes, we believe we’ve figured it all out.  I know that the average white-tail doe has a better chance of surviving a week in the woods than I would.  I know that birds can perceive magnetic fields in ways humans can’t.  That whales sing songs we can’t translate.  I sing the song of consciousness.  It’s amazing and impossible to figure out.  We, the intelligent children of apes, have forgotten that our brains have limitations.  We think it’s cool, rather than an affront, to build electronic libraries so vast that every possible combination of words is already in them.  Me, I’m a human being.  I read, I write, I think.  And I experience.  No computer will ever know what it feels like to finally reach cold water after sweating outside all day under a hot sun.  Or the whispers in our heads, the jangling of our pulses, when we’ve just accomplished something momentous.  Machines, if they can “think” at all, can’t do it like team animal can.

I’m daily told that AI is the way of the future.  Companies exist that are trying to make all white-collar employment obsolete.  And yet it still takes my laptop many minutes to wake up in the morning.  Its “knowledge” is limited by how fast I can type.  And when I type I’m using words.  But there are pictures in my brain at the same time that I can’t begin to describe adequately.  As a writer I try.  As a thinking human being, I know that I fail.  I’m willing to admit it.  Anything more than that is hubris.  It’s a word we can only partially define but can’t help acting out.


Not Intelligent

The day AI was released—and I’m looking at you, ChatGPT—research died.  I work with high-level academics, and many have jumped on the bandwagon despite the fact that AI cannot think and it’s horrible for the environment.  Let me say that first part again: AI cannot think.  I read a recent article where an author engaged AI about her work.  It is worth reading at length.  In short, AI makes stuff up.  It does not think—I say again, it cannot think—and tries to convince people that it can.  On principle, I do not even look at Google’s AI-generated answers when I search.  I’d rather go to a website created by one of my own species.  I even heard from someone recently that AI could be compared to demons.  (Not in a literal way.)  I wonder if there’s some truth to that.

Photo by Igor Omilaev on Unsplash

I would’ve thought that academics, aware of the propensity of AI to give false information, would have shunned it.  Made a stand.  Lots of people are pressured, I know, by brutal schedules and high demands on the part of their managers (ugh!).  AI is a time cutter.  It’s also a corner cutter.  What if that issue you ask it about is one about which it’s lying?  (Here again, the article I mention is instructive.)  We know that it has that tendency, rampant among politicians, to avoid the truth.  Yet it is being trusted, more and more.  When first ousted from the academy, I found research online difficult, if not impossible.  Verifying sources was a challenge, if it could be done at all.  Since nullius in verba is something to which I aspire, this was a problem.  Now publishers, even academic ones, are talking about little else but AI.

I recently watched a movie that had been altered on Amazon Prime without those who’d “bought” it being told.  A crucial scene was omitted due to someone’s scruples.  I’ve purchased books online, and when the supplier goes bust, you lose what you paid for.  Electronic existence isn’t our savior.  Before GPS became necessary, I’d drive through major cities with a paper map and common sense.  Sometimes it even got me there quicker than AI seems to.  And sometimes you just want to take the scenic route.  Ever since the government began pushing consumerism, people have allowed their concerns about quality to erode.  Quick and cheap, thank you, then to the landfill.  I’m no longer an academic, but were I still, I would not use AI.  I believe in actual research and I believe, with Mulder, that the truth is out there.


Just Trust Me

When I google something I try to ignore the AI suggestions.  I was reminded why the other day.  I was searching for a scholar at an Eastern European university.  I couldn’t find him at first since he shares the name of a locally famous musician.  I added the university to the search and AI merged the two.  It claimed that the scholar I was seeking was also a famous musician.  This despite the difference in their ages and the fact that they look nothing alike.  Al decided that since the musician had studied music at that university, he must also have been a professor of religion there.  A human being might also be tempted to make such a leap, but would likely want to get some confirmation first.  Al has only text and pirated books to learn by.  No wonder he’s confused.

I was talking to a scholar (not a musician) the other day.  He said to me, “Google has gotten much worse since they added AI.”  I agree.  Since the tech giants control all our devices, however, we can’t stop it.  Every time a system upgrade takes place, more and more AI is put into it.  There is no opt-out clause.  No wonder Meta believes it owns all world literature.  Those who don’t believe in souls see nothing but gain in letting algorithms make all the decisions for them.  As long as they have suckers (writers) willing to produce what they see as training material for their Large Language Models.  And yet, Al can’t admit that he’s wrong.  No, a musician and a religion professor are not the same person.  People often share names.  There are far more prominent “Steve Wigginses” than me.  Am I a combination of all of us?

Technology is unavoidable, but the unanswered question is whether it is good.  Governments can regulate, but with hopelessly corrupt governments, well, say hi to Al.  He will give you wrong information and pretend that it’s correct.  He’ll promise to make your life better, until he decides differently.  And he’ll decide not on the basis of reason, because human beings haven’t figured that out yet (try taking a class in advanced logic and see if I’m wrong).  Tech giants with more money than brains are making decisions that affect all of us.  It’s like driving down a highway when heavy rain makes seeing anything clearly impossible.  I’d never heard of this musician before.  I like to think he might be Romani.  And that he’s a fiddler.  And we all know what happens when emperors start to see their cities burning.

Al thinks this is food

NaNoWriMo Night

NaNoWriMo—National Novel Writing Month, i.e., November—has been run by an organization that is now shutting down.  Financial troubles and, of course, AI (which seems to be involved in many poor choices these days) have led to the decision, according to Publishers Weekly.  Apparently several new authors had been discovered by publishers on the basis of their NaNoWriMo projects.  I participated one year and had no trouble finishing something, but it was not really publishable.  Still, it’s sad to see this inspiration for other writers calling it quits.  I’m not into politics, but when the NaNoWriMo executives didn’t take a solid stand against AI-“written” novels, purists were rightfully offended.  Writing is the expression of the human experience.  0s and 1s are not humans, no matter how much tech moguls may think they are.  Materialism has spawned some wicked children.

Can AI wordsmith?  Certainly.  Can it think?  No.  And what we need in this world is more thinking, not less.  Is there maybe a hidden reason tech giants have cozied up to the current White House where thinking is undervalued?  Sorry, politics.  We have known for many generations that human brains serve a biological purpose.  We keep claiming animals (most of which have brains) can’t think, but we suppose electrical surges across transistors can?  I watch the birds outside my window, competing, chittering, chasing each other off.  They’re conscious and they can learn.  They have the biological basis to do so.  Being enfleshed entitles them.  Too bad they can’t write it down.

Now I’m the first to admit that consciousness may well exist outside biology.  To tap into it, however, requires the consciousness “plug-in”—aka a brain.  Would AI “read” novels for the pleasure of it?  Would it understand falling in love, or the fear of a monster prowling the night?  Or the thrill of solving a mystery?  These emotional aspects, which neurologists note are a crucial part of thinking, can’t be replicated without life.  Actually living.  Believe me, I mourn when machines I care for die.  I seriously doubt the feeling is reciprocated.  Materialism has been the reigning paradigm for quite a few decades now, while consciousness remains a quandary.  I’ve read novels that struggle with deep issues of being human.  I fear that we could be fooled by an AI novel whose “writer” merely borrows the way humans communicate in order to fake feeling.  And I feel a little sad, knowing that NaNoWriMo is hanging up the “closed” sign.  But humans, being what they are, will still likely try to complete novels in the month of November.


Think

Those of us who write books have been victims of theft.  One of the culprits is Meta, owner of Facebook.  The Atlantic recently released a tool that allows authors to check whether LibGen, a pirated-book site used by Meta and others, has their work in its system.  Considering that I have yet to earn enough on my writing to pay even one month’s rent/mortgage, you get a little touchy about being stolen from by corporate giants.  Three of my books (A Reassessment of Asherah, Weathering the Psalms, and Nightmares with the Bible) are in LibGen’s collection.  To put it plainly, they have been stolen.  Now, the first thing I noticed was that my McFarland books weren’t listed (Holy Horror and Sleepy Hollow as American Myth; the latter, of course, is not yet published).  I also know that McFarland, unlike many other publishers, proactively lets authors know when it is discussing AI use of their content, informing us that if deals are made we will be compensated.

I dislike nearly everything about AI, but especially its hubris.  Machines can’t think like biological organisms can, and biological organisms that think they can teach machines to “think” have another think coming.  Is it mere coincidence that this kind of thing happens at the same time reading the classics, with their pointed lessons about hubris, has declined?  I think not.  A humanities education teaches you something you can’t get at your local tech training school—how to think.  And I mean actually think.  Not parrot what you see on the news or social media, but use your brain to do the hard work of thinking.  Programmers program; they don’t teach thinking.

Meanwhile, programmers have made theft easy but difficult to prosecute.  Companies like Meta feel entitled to use stolen goods so their programmers can make you think your machine can think.  Think about it!  Have we really become so stupid as a society that we can’t see how all of this is simply the rich using their influence to steal from the poor?  LibGen, and similar sites, flout copyright laws because they can.  In general, I think knowledge should be freely shared—there’s never been a paywall for this blog, for instance.  But I also know that when I sit down to write a book, and spend years doing so, I hope to be paid something for the effort.  And I don’t appreciate social media companies that have enough money to buy the moon stealing from me.  There’s a reason my social media use is minimal.  I’d rather think.


Call Me AI

Okay, so the other day I tried it.  I’ve been resisting, immediately scrolling past the AI suggestions at the top of a Google search.  I don’t want some program pretending it’s human to provide me with information I need.  I had to find an expert on a topic.  It was an obscure topic, but if you’re reading this blog that’ll come as no surprise.  Tired of running into brick walls using other methods, I glanced toward Al.  Al said a certain Joe Doe was an expert on the topic.  I googled him only to learn he’d died over a century ago.  Al doesn’t understand death because it’s something a machine doesn’t experience.  Sure, we say “my car died,” but what we mean is that it ceased to function.  Death is the overlay we humans put on it to understand, succinctly, what happened.

Brains are not computers, and computers do not “think” like biological entities do.  We have feelings in our thoughts.  I have been sad when a beloved appliance or vehicle “died.”  I know that for human beings that final terminus is a non-negotiable fact of existence.  Animals often recognize death and react to it, but we have no way of knowing what they think about it.  Think they do, however.  That’s more than we can say about ones and zeroes.  They can be made to imitate some thought processes.  Some of us, however, won’t even let the grocery store runners choose our food for us.  We want to evaluate the quality ourselves.  And having read Zen and the Art of Motorcycle Maintenance, I have to wonder if “quality” is something a machine can “understand.”

Wisdom is something we grow into.  It only comes with biological existence, with all its limitations.  It is observation, reflection, evaluation, based on sensory and psychological input.  What psychological “profile” are we giving Al?  Is he neurotypical or neurodivergent?  Is he young or does his back hurt when he stands up too quickly?  Is he healthy or does he daily deal with a long-term disease?  Does he live to travel or would he prefer to stay home?  How cold is “too cold” for him to go outside?  These are things we can process while making breakfast.  Al, meanwhile, is simply gathering data from the internet—that always reliable source—and spewing it back at us after reconstructing it in a non-peer-reviewed way.  And Al can’t be of much help if he doesn’t understand that consulting a dead expert on a current issue is about as pointless as trying to replicate a human mind.


Steve or Stephanie

I know gender is a construct, and all.  I even put my pronouns (he, him, his) on my work email signature.  I haven’t bothered on my personal email account since so few people email me that the effort seems superfluous.  But I’m wondering if the tech gods, aka AI, understand.  You see, with more and more autosuggestions (which really miss the point much of the time), the Microsoft Outlook email system at work is constantly trying to fill things in for me.  Lately AI, which I call Al, has been trying to get me to sign my name with an “@” so people can “text” me a response.  No.  No, no, no!  I write emails like letters: greeting, body, closing.  People who email like they’re texting sound constantly disgruntled and surly.  Take an extra second and ask “How are you?”  Was that so hard?  But I was talking about gender.

So Al is busy putting words in my fingers and every time I start typing my closing name it autosuggests “Stephanie” before I correct it.  It’s starting to make me a little paranoid.  It does seem that men and women differ biologically, and I identify with the gender assigned to me at birth.  I’m pretty sure Dr. Butter said “It’s a boy,” or something similar all those years ago.  Now I’m not sure if Al is deliberately taunting me or simply going through the alphabet as I type.  Stephanie comes before Stephen (which isn’t my name either) or Steve.  The thing is, I type fairly fast (I won’t say accurately, but fast) and Al has trouble keeping up.  But still Al is autosuggesting Stephanie for me every time.  I’ve been using computers since the 1980s; shouldn’t Al know who I am by now?

Of course, when Al takes over, such human things as gender will only get in the way.  I guess we have that to look forward to.  Gender may be something socialized, I realize.  For those of us approaching ancient, gender differences were drilled into our heads growing up.  I recently saw one of those cutesy novelty signs that resonated with me: “Please be patient with me, I’m from the 1900s.”  I’m not a sexist—I have supported feminism for as long as I can remember.  But I don’t like being called Stephanie.  What if my name were Stefan?  That isn’t autosuggested at all.  I know of others whose names come even earlier, alphabetically.  Maybe Al is overreaching.  Maybe it ought to leave names to humans.  At least for as long as we’re still here.


Doing Without

I’m a creature of habit.  Although I’m no internet junkie (I still read books made of paper), I’ve come to rely on it for how I start my day.  I get up early and do my writing and reading before work.  I generally check my email first thing, and that’s where something went wrong.  No internet.  We’ve been going through one of those popular heat waves, and a band of thunderstorms (I tried to check on their progress so I could see if it was okay to open the windows, but wait—I need the internet to do that) had rolled through three hours earlier, at about midnight.  Maybe they’d knocked out power?  The phone was out too, so I had to call our provider on my cell.  The robovoice cheerily told me there was a service outage and that for updates I could check their website.  Hmmm.

I can read and write without the internet.  I’m on Facebook for, literally, less than two minutes a day.  I stop long enough to post my blog entry and check my notices.  I hit what used to be Twitter a few times a day, but since people tend to communicate (if they do) via email, that’s how the day begins.  This morning I had no internet and I wondered how tech giants would live without it.  I’m no fan of AI.  I use technology and I believe it has many good points, but mistaking it for human—or thinking that human brains are biological computers—flies in the face of all the evidence.  Our brains evolved to help our biological bodies survive.  And more.  The older I get the more I’m certain that there’s a soul tucked in there somewhere too.  Call it a mind, a psyche, a spirit, a personality, or consciousness itself, it’s there.  And it’s not a computer.

Our brains rely on emotion as well as rationality.  How we feel affects our reality.  Our perspective can change a bad situation into a good one.  So I’m sitting here in my study, sweating since, well, heat wave.  It was storming just a few hours ago and I can’t check the radar to see if the system has cleared out or not.  What to do?  Open the windows.  I’ll feel better at any rate.  And in case the coffee hasn’t kicked in yet, “open the windows” is a metaphor as well as a literal act on my part.  And I don’t think AI gets metaphors.  At least not without being told directly.  And they call it “intelligence.”

Photo by Chris Barbalis on Unsplash

To Their Own Devices

This one’s so good that it’s got to be a hoax.  One of the upsides to living under constant surveillance is that a lot of stuff—weird stuff—is caught on camera.  I admit to dipping into Coast to Coast once in a while.  (This show, originally the radio program Coast to Coast AM, was well known for paranormal interests long before Mulder and Scully came along.)  It was there that I learned of a viral video showing devices praying together during the night in Mexico City.  The purported story is that a security guard in a department store came upon electronic devices reciting the Chaplet of the Divine Mercy.  One device seems to be leading the other devices in prayer.  Skeptics have pointed out that this could’ve been programmed in advance as a kind of practical joke on the security guard, but it made me wonder.

I’m no techie.  I can’t even figure out how to get back to podcasting.  I do, however, enjoy the strange stories of electronic “consciousness.”  I use the phrase advisedly since we don’t know what human, animal, and plant consciousness is.  We just know it exists.  I am told, by those who understand tech better than I do, that computers have been discovered “conversing” with each other in a secret language that even their programmers can’t decipher.  And since devices don’t follow our sleep schedules, who knows what they might get up to in the middle of the night when left to their own devices?  Why not hold a prayer service?  The people they surveil all day do such things.  Since the video hit the web not long before Easter, with its late-night services, it kind of makes sense in its own bizarre way.

As I say, this seems to be one of those oddities that is simply too good to be true.  But still, driving along chatting to my family in the car, some voice-recognition software will sometimes join in with a non sequitur.  As if it just wants to do what humans do.  I don’t mean to be creepy here, but it may be that playing Pandora with “artificial intelligence” is dicey when we can’t define biological intelligence.  I’ve said before that AI doesn’t understand God talk.  But if AI is teaching itself by watching what humans post—which is just about everything that humans do—maybe it has learned to recite prayers without understanding the underlying concepts.  Human beings do so all the time.

Let us pray


Verb Choice

I can’t remember who started it.  Somehow, though, when I watch movies on Amazon Prime, the closed captioning kicks in.  I generally don’t mind this too much since some dialogue is whispered or indistinct.  I also presume some kind of AI does it and it makes mistakes.  That’s not my concern today, however.  Today it’s word choice.  Humans of a certain stripe are good at picking the correct verb for an action.  I’ve been noticing that the closed captions often select the wrong word and it distracts me from the movie.  (Plus, they include some diegetic sounds but not others, and I wonder why.)  For example, when a character snorts (we’re all human, we know what that is), AI often selects “scoffs.”  Sometimes snorting is scoffing, but often it’s not.  Maybe it’s good the robots don’t pick up on the subtle cues.

This isn’t just an AI problem—I first noticed it a long time ago.  When our daughter was young we used to get those Disney movie summary books with an accompanying cassette tape (I said it was a long time ago) that would read the story.  Besides ruining a few movies for me, I sometimes found the verb choices wrong.  For example, in Oliver (which I saw only once), the narrator at one point boldly proclaims that “Fagin strode into the room.”  Fagin did not stride.  A stride is not the same thing as a shuffle, or a slump.  Words have connotations.  They’re easily found in a dictionary.  Why do those who produce such things not check whether their word choice accurately describes the action?

So when I’m watching my weekend afternoon movies, I want the correct word to appear in the closed captioning.  Since the nouns generally occur in the dialogue itself, it’s the verbs that often appear off.  Another favorite AI term is “mock.”  Does a computer know when it’s being mocked?  Can it tell the scoff in my keystrokes?  Does it have any feelings so as to care?  AI may be here to stay, but human it is not.  I’ve always resented it a bit when some scientists have claimed our brains are nothing but computers.  We’re more visceral than that.  We evolved naturally (organically) and had to earn the leisure to sit and make words.  Then we made them fine.  So fine that we called them belles lettres.  They can be replicated by machine, but they can’t be felt by them.  And I have to admit that a well-placed snort can work wonders on a dreary day.


Call Me AI

Let’s call them Large Language Models instead of gracing them with the exalted title “artificial intelligence.”  Apparently, they have great potential.  They can also be very annoying.  For example, during a recent computer operating system upgrade, Macs incorporated LLM (large language model) technology into various word processing programs.  Some people probably like it.  It might save some wear and tear on your keyboard, I suppose.  Here’s what happens: you’re innocently typing along and your LLM anticipates and autocompletes your words.  I have to admit that, on the rare occasions that I text, I find this helpful.  (I don’t text much because I despise brief communiqués that are inevitably misunderstood.)  When I’m writing long form (my preference), I don’t like my computer guessing what I’m trying to say.  Besides, I type faster than its suggestions most of the time.

We have chosen convenience over careful thought.  How many times have I been made to feel bad because I’ve misunderstood a message thumbed in haste, or even an email sent as if it were a text?  More than I care to count.  LLMs have no feelings.  They don’t understand what it is to be human, to be creative.  Algorithms are only a small part of life.  They have no place on a creative’s desktop.  I’ve even thought that I should choose a different word from the suggestion every single time, just to see what this feisty algorithm might do.  Even now I find that sometimes it has no idea where my thoughts are going.  Creative people experience that themselves from time to time.

Certain sequences of words suggest the following word.  I get that.  The object of creative writing, however, is to subvert that in some way.  If we knew just which way a novelist would go every time, why would we bother reading their books?  LLMs thrive on predictability.  They have no human experience of family tensions or heavy disappointments or unexpected elations.  We, as a species, tend to express ourselves in similar ways when such things happen, and certain words suggest themselves when a sequence of letters falls from our fingers.  LLMs diminish us.  They imply that our creative wordplay is but some kind of sequence of 0s and 1s that can be tamed and stored in a box.  I suppose that for someone who has to write, say, a work or school report, such a thing might be a boon.  It’s not, however, the intelligence it claims to be.
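
For the curious, here is a minimal sketch, in Python, of the principle that certain sequences of words suggest the following word: count which word tends to follow which, then propose the most frequent follower.  This is a toy of my own devising, nothing like the neural networks inside a real LLM, and the sample text and function names are purely hypothetical.

```python
from collections import Counter, defaultdict

# Toy next-word suggester: tally which word follows which in a sample
# text, then propose the most frequent follower. Real LLMs predict the
# next token with neural networks trained on vast corpora; this only
# illustrates the underlying guessing principle.
text = "the cat sat on the mat and the cat slept on the mat"
words = text.split()

followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1  # count each observed word pair

def suggest(word):
    """Return the most common word seen after `word`, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))  # 'cat' (tied with 'mat'; the first one seen wins)
print(suggest("sat"))  # 'on'
```

The whole trick is statistics over what has already been written, which is exactly why such a system stumbles when a writer sets out to subvert expectation.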