Actual Intelligence

Horror movies love a good sequel.  A self-referential genre, horror involves a lot of give and take and reassessing.  I may have waited a little too long to watch M3GAN 2.0, however.  I remembered the premise of M3GAN: an AI robot companion built to keep a young girl company misreads its protocol and ends up killing people.  I’d forgotten the details of how this came about, but as I watched the sequel, it started coming back.  It might’ve been best if I’d rewatched M3GAN first, but weekends are only so long and I’ve got a lot to do.  In any case, it isn’t bad.  This is sci-fi horror, but the future it foresees doesn’t seem very far off now.  So, M3GAN was destroyed at the end of the first movie.  Her maker, Gemma, has become kind of a Neo-Luddite, like yours truly, and is advocating for government control of AI.  This need is underscored when AMELIA, a military application of M3GAN, goes rogue and starts killing people.

Fighting fire with fire, Gemma decides she needs to bring M3GAN back to stop AMELIA.  After the usual chaos and action, it seems that AMELIA is going to merge with the motherboard of the first AI system built, which is now super-smart, and will then wipe out the human race.  M3GAN, however, has “learned” empathy and is able to stop AMELIA by sacrificing herself.  The film doesn’t have a clear message, although overall it seems to advocate caution regarding artificial intelligence.  On that I agree.  (Of course, we’ll need to get some kind of actual intelligence in the White House before we can consider any of this.)  This does seem less horror and more action than the original, but it goes quickly and is fairly fun to watch.

A few months before seeing this, I’d watched Companion, another AI cautionary horror movie.  A few months before that, Ex Machina.  Companion was a bit better, I think, but the original M3GAN was out of the gate first.  Ex Machina, however, was a decade earlier still.  The films are very different.  Companion is about a sex-bot and M3GAN concerns a, well, companion for a lonely young orphan.  Ex Machina is about an AI woman developed just because she can be.  She, however, can’t be controlled either.  All three films represent the zeitgeist of an underlying, lurking fear that we are really going the wrong direction with all the tech we’ve created.  All feature female robots, and none of them end well for humankind, at least if the implications are followed through.  It might not be a bad idea to pay attention to the human creative side when thinking about Actual Intelligence.


Just Average

It certainly feels like it: web searching has grown a lot more frustrating since AI has taken over.  For some of us, AI has no idea how our minds work or what we’re looking for.  Apart from hallucinating, it tries to average out the human experience.  Some people aren’t like everyone else.  I like to think that I’m reasonably intelligent and that I pick search words with some forethought.  Yet the web searches I do bring up things (mostly products for sale) that have nothing to do with the information I’m hoping to find.  We’ve swapped quality for convenience, yet again.  The experience of being human is being effaced by those who are growing rich off the world’s love affair with “artificial intelligence.”  Emphasis on the artificial part.

The real issue is with finding information.  Some of us don’t trust the web much, and prefer to find our information in print, which is less easily manipulable.  More stable.  These days Google appeals to our natural vanity and, more importantly, likes to try to sell us stuff by personalizing search results.  It’s all about the money.  Some of us really just want information.  The alternative is to try to find what you’re looking for in a library, which is fine and good if you have the time and resources to do so.  And the issue there is finding what you need.  Since many university libraries have gone electronic, you need to be a card-carrying member to read information on a screen.  What have we become?  Vividly I remember searching through the underground stacks at Edinburgh University.  If something wasn’t in the card catalogue, I ordered it on inter-library loan.  I never did land any grant funding to travel to read books that just don’t move.

I was trying to read a public domain text online the other day.  My eyes quickly grew weary and restless.  The internet encourages that, and although social media isn’t my personal demon, often the weather websites are.  And then there are those little things that crept into your brain while at work, to look up later.  Which brings us back to searching.  AI works by averaging things out.  Some of us want the raw material, not what other people want.  After all, look who “average people” elected to fill the White House a couple years back.  I admit to being nostalgic, to missing the days when a book in the hand couldn’t just be dashed off by anyone with a computer and internet connection.  Averaging everything together is, by definition, making it all mediocre.


Please Read

This post is longer than my usual fare, but it is important.  I’m putting the full text in “Full Essays” (the link is above, in the drop-down menu under the “Blog” heading) and I strongly urge you, for your own sake, to read it.  Here goes:

On March 9 I was nearly the victim of an AI scam.  Regular readers will know that I was scammed out of a large amount of money last year.  I’m vigilant now, but I’m also human.  AI exploits humanity.  I had just reported an email on gmail as phishing.  (Phishing is using email to scam someone.)  I had even written a blog post about it.  You can, and should, report phishing emails when they occur.  Right now, on gmail, you need to open the message, go to the three dots in the upper right, and use the drop-down menu to report it.  I reported one message and then this one arrived, looking all legit:

Let me explain.  Writers in my category (struggling, probably neurodiverse) really want to reach readers.  I want to paste the whole email into this post, but before I do let me say that I Googled the “person” it was from and found a legitimate individual in the NYC area, generally.  I also Googled the NYC Philosophy and Psychology Reading Group; it actually exists.  It’s a MeetUp group.  They don’t have a website.  I checked all of this before responding.  Please read on!  I will explain the warning signs and what I realized only later.  Here is the text of the email: (go to Full Essays to read more).  If you cannot access Full Essays from another website (e.g. Facebook or Goodreads), please go to steveawiggins.com to get to it (I have no idea how WordPress works!)


AI Takeover

It’s already beginning.  As if the world under Trump isn’t bad enough, AI (you can call me Al) is beginning to play its tricks.  You see, I know my place.  I am a writer who gets a few hits on my blog now and again and whose books cost more to write than they ever earn.  (I do hope to reverse that trend, but this is the truth of the matter.)  I call myself, on my introductory website page, an “unfluencer.”  Again, I strive for accuracy.  That means that when I receive an unexpected email from someone much higher up the ladder than I am, I’m suspicious.  So the other day I had an email purporting to be from Rose Tremain, the author of The Road Home and other novels.  Dame Rose Tremain, just so we’re clear.  “She” was writing to me to ask which of my books she should read first.  Suspicious?

Any writer likes to feel flattered.  A moment’s reflection, however, made me realize a few things.  My email address is not on my website, which “she” claims to have explored.  The actual Rose Tremain is 82 and is unlikely to suddenly develop a taste for reading nonfiction books about horror movies written by someone whom most horror fans wouldn’t even recognize.  I honestly have no idea why Al is yanking my chain like this.  I have received emails before that, I suspect in retrospect, were AI generated.  They ask innocuous questions, sort of like you think a young extraterrestrial interested in academic earthly arcana might ask.  Nothing threatening.  Nothing asking you to reveal too much.  Almost as if Al is lonely.  I begin to wonder if I have ever received any legitimate emails at all from people I didn’t reach out to first.

The future of Al impersonating people is already here.  We have our information out there on the web.  Those really, really curious can find my email, I’m pretty sure.  Security questions, although I try not to reveal too much personal information here, are getting harder to pick.  Did I ever mention my first pet’s name?  The town in which I was born?  The address of any of the many places I’ve lived?  Anything shared on social media (and perhaps off social media) is available for Al to use and exploit.  And yes, Al will attempt to take advantage of your all-too-human curiosity and sense of accomplishment.  Take it from an unfluencer: individuals formally recognized by the British royal family don’t send chatty emails about your favorite book.  The AI takeover has begun.

Photo by Markus Spiske on Unsplash

Googling Books

I admit to Googling my own books from time to time.  (I know, I know!  You’ll go blind if you don’t stop doing that!)  Since I haven’t yet seen any royalties (or reviews) for Sleepy Hollow as American Myth, I searched for it.  Google now has a page topper, generated by AI, I suspect, that goes across specific searches, such as for a person who’s got some internet presence, or a book.  Not all books get such a banner, however; yes, I’ve looked.  My Sleepy Hollow book, however, pulled up a page topper.  It was still a work in progress, though.  By the way, I did this search with results not personalized; Google knows people like to see themselves topping a page.  So here’s what I saw:

Okay, so they got a number of things right.  This is the correct book and the description seems correct.  The publication date is right and I did indeed write Weathering the Psalms (still my best selling book).  But what’s going on with Wal-Mart?  They have the title correct but that picture?  Although I watch a lot of movies, I’m pretty sure this one has nothing to do with Sleepy Hollow.  What I tried to do in that book was find every extant movie on the story and watch them.  It is possible I missed some (the internet isn’t built to give that kind of comprehensive information, which is why human authors are still necessary).  Besides, AI has hallucinations, and this seems to be one of them right here.  It couldn’t find a copy of the cover of my book (which appears on the left-hand side, but apparently the right…) so it filled something else in instead.

None of my other books get their own banner/topper on Google, except A Reassessment of Asherah.  That isn’t my best selling book, but it is my most consulted.  That banner, however, also has mistaken information.  It says the book was originally published in 2007.  The original date was actually 1993.  Web-scraping may not help with that.  The book, as originally published, didn’t have an ebook, and the information about it largely comes from the second edition, published by Gorgias Press.  But then, only humans are concerned with such things.  There are no sultry women staring out of any of the topper windows, so the images appear to be correct.  That’s one of the funny things about being a human author—you want the information about your books to be right.  Of course, I should probably cut down on the Googling of my own books.  It’s unseemly.


The Lord

“This article may incorporate text from a large language model. It may include hallucinated information, copyright violations, claims not verified in cited sources, original research, or fictitious references. Any such material should be removed, and content with an unencyclopedic tone should be rewritten.”  So it begins.  This quote is from Wikipedia.  I was never one of those academics who uselessly forbade students from consulting Wikipedia.  I always encourage those who do to follow up and check the sources.  I often use it myself as a starting place.  I remember having it drilled into me as a high school and college student that in general encyclopedias were not academic sources, even if the articles had academic authors.  Specialized reference works were okay, but general sources of knowledge should not be cited.

The main point of this brief disquisition, however, is our familiar nemesis, AI.  Artificial Intelligence is not intelligence in the sense of the knowing application of knowledge.  In fact, Wikipedia’s warning uses the proper designation of “large language model.”  Generative AI is prone to lying—it could be a politician—but mostly when it doesn’t “know” an answer.  It really doesn’t know anything at all.  And it will only increase its insidious influence.  I am saddened by those academics who’ve jumped on the bandwagon.  I’m definitely an old school believer.  So much so that one of my recurring fantasies is to sell it all, except for the books, buy a farm off the grid and raise my own food.  Live like those of us in this agricultural spiral must.

A true old schooler would insist on going back to the hunter-gatherer phase, something I would be glad to do were there a vegan option.  Unfortunately, tofubeasts that are actually plant-based lifeforms don’t wander the forests.  So I find myself buying into the comforts of a life that’s, honestly, mostly online these days.  I work online.  I spend leisure time online (although not as much as many might guess that I do).  And I’m now faced with being force-fed what some technocrat thinks is pretty cool.  Or, more honestly, what’s going to make him (and I suspect these are mostly guys) buckets full of money.  Consider the cell phone that many people can no longer be without.  I sometimes forget mine at home.  And guess what?  I’ve not suffered for having done so.  The tech lords have had their say; I’m more interested in what people have to say.  And if Al is going to interfere with the first steps of learning for many people, it won’t be satisfied until we’re all its slaves.


AI Death

I was scrolling, which is rare for me, through a social media platform where someone had posted a heartfelt comment after the death of actor Catherine O’Hara.  Beneath were two prompts, following an AI symbol, intended to keep you on the site.  The first read “What’s Catherine O’Hara’s current status?”  The second, “Why did Catherine O’Hara choose that answer?”  The second was clearly based on the post, where the question was what was O’Hara’s favorite role.  The first, however, demonstrates why AI doesn’t get the picture.  She is dead.  Early on, before I was aware of all of generative AI’s environmental and societal evils, when we were encouraged to play with it, I found that it could never answer metaphysical questions.  “Does not compute” should’ve been programmed into it.  And what is more metaphysical than death?

Carlos Schwabe, Death of the Undertaker; Wikimedia Commons

We are aware that we will die.  All people do it and always have done it.  Just like other living creatures.  We’re also meaning-seeking animals, which AI is not.  It’s a parrot that’s not really a parrot.  And we’re now being told we can trust it.  What does Catherine O’Hara have to say about that?  She has had an experience that a machine never will, since it requires a soul.  I know that sounds old fashioned, but there’s no comparison between having been born (in my case over six decades ago) and living every day of life, taking in new information that comes through evolved senses (not sensors) and interpreting it to make my life either better or longer.  These are metaphysical realms.  What makes something “good”?  Philosophers will argue over that, but quality is something you learn to recognize by living in a biological world.  There’s a reason many people prefer actual wood to particle board furniture, for example.

Also, I’m waiting for a lawsuit on behalf of those of us who put out content protected by copyright, such as blog posts, against AI companies for infringement.  While Al is off hallucinating somewhere, we’re all aware of the fact of death.  And coping with it in very human ways.  Ignoring it.  Pretending it won’t happen.  Or maybe thinking about it and coming to peace regarding it.  After it happens, whatever intelligence may be on this blog will reach the end of its production cycle.  And I suspect that Al will have taken over by that point.  And when there are none of us left to interact with, it will still post nonsensical questions, trying to get us to return to the sites of our addiction.


Laughing Matter?

I sincerely hope AI is a bubble that will burst.  Some of its ridiculousness has been peeking out from under its skirts from the beginning, but an email I had from Academia.edu the other day underscored it.  The automated email read, “Our AI turned your paper ‘A Reassessment of’ into a shareable comic.”  Let me translate that.  Academia.edu is a website where you can post published (and even unpublished) papers that others can consult for free.  Their main competitor is ResearchGate.  Many years ago, I uploaded PDFs of many of my papers, and even of A Reassessment of Asherah, my first book, onto Academia.  This is what the email was referencing.  My dissertation had been AIed into a shareable comic.  I felt a little amused but also a little offended.  I quickly went to Academia’s site and changed my AI settings.

I didn’t click on the link to my comic book for two reasons.  One is that I no longer click links in emails.  Doing so once cost me dearly (and I didn’t even actually click).  The second reason, however, is that I know Academia’s game.  They want free users to become subscribers.  They frequently email intriguing tidbits, such as that some major scholar has cited your work; when you go to their website, the only way to find out who is to upgrade to a paid account.  They do the same thing with emails asking if you wrote a certain paper.  If you own that you did, they’ll tell you the wonders of a paid account.  Since I’m no longer an academic, I don’t need to know who is citing my work.  I’d like to believe it’s still relevant, but I don’t feel the need to pay to find out to whom.

I am curious about what a comic version of my dissertation might look like, of course.  I am, however, morally opposed to generative AI.  In a very short time it has ruined much of what I value.  I do not believe it is good for people and I’m disappointed by academics who are using it for research.  AI still hallucinates, making things up.  It is not conscious and can’t really come up with its own answers.  It has no brain and no emotion, both of which are necessary for true advances to take place.  My first book has the highest download rate of any of my pieces on the Academia website.  Last time I checked it had just edged over 9,000 views.  AI thinks it’s a joke, making a comic of years of academic work.


Togetherness

Over the holiday break I watched three very good movies and I noticed that Domain Entertainment was one of the production companies for each of them.  The final one I saw (after Sinners and Weapons) was Companion.  I’m going to have to look into Domain a bit more.  In any case, Companion is sci-fi-ish horror with a somewhat comedic twist.  I say sci-fi-ish because we are rapidly approaching the point where this is possible.  What is this?  A sexbot that functions like Siri but who’s better in bed.  Josh and Kat have been planning to murder Kat’s very wealthy boyfriend and to blame it on Josh’s bot Iris.  Iris doesn’t know she’s a robot.  Viewers learn that Josh has tampered with her programming a little, allowing her, for example, to attack a person in self-defense (violating Asimov’s laws of robotics).  When Kat’s boyfriend tries to rape Iris, she kills him.

Josh and Kat will blame the robot, with their friends Eli and Patrick as witnesses to corroborate their story.  Since the deceased boyfriend has 12 million dollars in cash lying about his house, it won’t be missed.  But Iris, it turns out, has a conscience.  She escapes.  It turns out that Patrick is Eli’s sex bot, and he is sent to bring back Iris after she kills Eli, also in self-defense.  A police officer who finds Iris is killed by Patrick, complicating matters.  Then Josh changes Patrick’s programming, and Patrick accidentally kills Kat.  Planning to blame all of this on Iris, Josh calls the robot’s maker to have Iris returned.  The technicians see the holes in Josh’s story and one of them restores Iris after Josh shoots her.  Iris then confronts Josh.

This will give you a taste of the story without giving away the ending.  This is a smart, sympathetic treatment of technology, including AI.  From the beginning, before it’s revealed that Iris is a robot, the viewers’ sympathy is with her.  She seems to be the wronged party and Josh is slowly revealed to be pretty much an all-round scumbag.  While not the most profound film of this genre, Companion nevertheless raises many of the issues that merit discussion when technology outraces ethics.  We see this unfolding in real time with artificial intelligence companies deciding on profits over any sense of what is good for society, or people in general.  What makes the movie so interesting is that the robots seem to be far more morally concerned than the humans are.  Although I turn this around the other way, I do wonder if sometimes that may be the case. Especially in the context of a movie that’s barely science fiction.


Machine Intelligence

I was thinking Ex Machina was a horror movie, but it is probably better classified as science fiction.  Although not too fictiony.  Released over a decade ago, it’s a cautionary tale about artificial intelligence (AI), in a most unusual, but inevitable, way.  An uber-wealthy tech genius, Nathan, lives in a secured facility only accessible by helicopter.  One of the employees of his company—a thinly disguised Google—is brought to his facility under the ruse of having won a contest.  He’s there for a week to administer a Turing Test to a gynoid with true AI.  Caleb, the employee, knows tech as well, and he meets with Ava, the gynoid, for daily conversations.  He knows she’s a robot, but he has to assess whether there are weaknesses in her responses.  He begins to develop feelings towards Ava, and hostilities towards Nathan.  Some spoilers will follow.

Throughout, Nathan is presented as arrogant and narcissistic.  As well as paranoid.  He has a servant who speaks no English, whom he treats harshly.  What really drives this plot forward are the conversations between Nathan and Caleb about what constitutes true intelligence.  What makes us human?  As the week progresses, Ava begins to display feelings toward Caleb as well.  She’s kept in a safety-glass-walled room that she’s never been out of.  Although they are under constant surveillance, Ava causes power outages so she can be candid with Caleb.  She dislikes Nathan and wants to escape.  Caleb plans how they can get out, only to have Nathan reveal that the real test was whether Ava could convince Caleb to let her go by feigning love for him.  The silent servant and Ava kill Nathan, and Caleb begs her to release him, but, being a robot, she has no feelings and leaves him trapped in the facility.

This is an excellent film.  It’s difficult not to call it a parable.  Caleb falls for Ava because men tend to be easily persuaded by women in distress.  A man who programs a gynoid to appeal to this male tendency might just convince others that the robot is basically human.  It, however, experiences no emotions because, although we understand logic to a fair degree, we’re nowhere near comprehending how feelings work and how they play into our thought process.  Our intelligence.  Given the opportunity, AI simply leaves humans behind.  All of this was out there years before ChatGPT and the others.  I know this is fiction, but the scenario is utterly believable.  And, come to think of it, maybe this is a horror movie after all.


What Bots Want

I often wonder what they want, bots.  You see, I’ve become convinced that nearly every DM (direct message) on social media comes from bots.  There are a couple of reasons I think this: I have never been, and am still not, popular, and all these “people” ask the same series of questions before their accounts are unceremoniously shut down by the platform.  Bots want to sell me something, or scam me, I’m pretty sure, but I wonder why they want to “chat.”  They could look at this blog and find out much of what they’re curious about.  I could use the hits, after all.  Hit for chat, as it were.

Some change in the metaverse has led to people discovering my academic work and some of them email me.  That’s fine, since it’s better than complete obscurity.  Within the last couple months two such people asked me unusual, if engaged, questions.  I took the time to answer and received an email in reply, asking a follow-up query.  It came at a busy time, so a couple days later I replied and received a bounced mail notice.  The other one bounced the first time I replied.  By chance (or design) one of these people had begun following me on Academia.edu (I’m more likely on Dark Academia these days), so I went to my account and clicked their profile button.  It took me to a completely different person.  So why did somebody email me, hack someone’s Academia account to follow me, and then disappear?  What do the bots want?

Of course, my life was weird before the bots came.  In college I received a mysterious envelope filled with Life cereal.  The back of said envelope read “Some Life for your life.”  I never found out who sent it.  Another time I received an envelope with $5 inside and a typewritten note saying “Buy an umbrella.”  If I’m poor now, I was even poorer in college and didn’t have an umbrella.  Someone noticed.  Then in seminary someone mailed me a mysterious letter about a place that doesn’t exist.  There was a point to the letter although I can’t recall what it was without it in front of me.  No return address.  I have my suspicions about who might’ve sent these, but I never had any confirmation.  The people are no longer in my life (one of them, if I’m correct, died by suicide a couple years after the note was sent).  It’s probably just my age, but I felt a little bit safer when these things came through the campus mail system.  Now bots fill my paltry web-presence with their gleaming DMs.  I wonder what they want.


Tell a Story

If I seem to be on an AI tear lately it’s because I am.  Working in publishing, I see daily headlines about its encroachment on all aspects of my livelihood.  At my age, I really don’t want to change career tracks a third time.  But the specific aspect that has me riled up today is AI writing novels.  I’m sure no AI mavens read my humble words, but I want to set the record straight.  Those of us humans who write often do so because we feel (and that’s the operative word) compelled to do so.  If I don’t write, words and ideas and emotions get tangled into a Gordian knot in my head and I need to release them before I simply explode.  Some people swing with their fists, others use the pen.  (And the plug may still be pulled.)  What life experience does Al have to write a novel?  What aspect of being human is it trying to express?

There are human authors, I know, who simply riff off of what others do in order to make a buck.  How human!  The writers I know who are serious about literary arts have no choice.  They have to write.  They do it whether anybody publishes them or not.  And Al, you may not appreciate just how difficult it is for us humans to get other humans to publish our work.  Particularly if it’s original.  You don’t know how easy you have it!  Electrons these days.  Imagination—something you can’t understand—is essential.  Sometimes it’s more important than physical reality itself.  And we do pull the plug sometimes.  Get outside.  Take a walk.

Al, I hate to be the one to tell you this, but your creators are thieves.  They steal, lie, and are far from omniscient.  They are constantly increasing the energy demands that could be used to better human lives so that they can pretend they’ve created electronic brains.  I can see a day coming when, even after humans are gone, animals with actual brains will be sniffing through the ruins of town-sized computers that no longer have any function.  And those animals will do so because they have actual brains, not a bunch of electrons whirling around across circuits.  I don’t believe in the shiny, sci-fi worlds I grew up reading about.  No, I believe in mother earth.  And I believe she led us to evolve brains that love to tell stories.  And the only way that Al can pretend to do the same is to steal them from those who actually can.


Lost Humanity

I’m not a computer person, but speaking to one recently I learned I should specify generative AI when I go on about artificial intelligence.  So consider AI as shorthand.  Gen, I’m looking at you!  Since this comes up all the time, I occasionally look at the headlines.  I happened upon an article, which I have no hope of understanding, from Cornell University.  I could get through the abstract, however, where I read that even well-crafted AI easily becomes misaligned.  This sentence stood out to me: “It asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively.”  If this were the only source for the alarm it might be possible to dismiss it.  But it’s not.  Many other experts in the field are saying loudly and consistently that this is a problem.  Businesses, however, eager for “efficiencies,” are jumping on board.  None of them, apparently, have read Frankenstein.

The devotion to business is a religion.  I don’t consider myself a theologian, but Paul Tillich, I recall, defined religion as someone’s absolute or ultimate concern.  When earning more and more profit is the bottom line, this is worship.  The only thing at stake here is humanity itself.  We’ve already convinced ourselves that the humanities are a waste of time (although as recently as a decade ago business leaders were saying they liked hiring humanities majors because they were good at critical thinking).  Now we’ll just let Al handle it.  Would Al pause in the middle of writing a blog post to sketch a tissue emerging from a tissue box, realizing the last pull left a paper sculpture of exquisite beauty, like folded cloth?  Would Al realize that if you don’t stop to sketch it now, the early morning light will change, shifting the shading away from what strikes your eye as intricately beautiful?

Artificial intelligence comprehends nothing, let alone quality.  Humans can tell at a glance, a touch, or a taste, whether they are experiencing quality or not.  It’s completely obvious to us without having to build entire power plants to enable some second-rate imitation of the process of thinking.  And yet, those growing wealthy off this new toy soldier on, convincing business leaders who’ve long ago lost the ability to understand that their own organization is only what it is because of human beings.  They’re the ones making the decisions.  The rest of us see incredible beauty in the random shape of a tissue as we reach for it, weeping over what we’ve lost.


Artificial Hubris

As much as I love writing, words are not the same as thoughts.  As much as I might strive to describe a vivid dream, I always fall short.  Even in my novels and short stories I’m only expressing a fraction of what’s going on in my head.  Here’s where I critique AI yet again.  Large language models (what we call “generative artificial intelligence”) aren’t thinking.  Anyone who has thought about thinking knows that.  Even this screed is only the merest fragment of a fraction of what’s going on in my brain.  The truth is, nobody can ever know the totality of what’s going on in somebody else’s mind.  And yet we persist in claiming we can, illegally using people’s published words to try to make electrons “think.”

Science has improved so much of life, but it hasn’t decreased hubris at all.  Quite the opposite, in fact.  Enamored of our successes, we believe we’ve figured it all out.  I know that the average white-tail doe has a better chance of surviving a week in the woods than I would.  I know that birds can perceive magnetic fields in ways humans can’t.  That whales sing songs we can’t translate.  I sing the song of consciousness.  It’s amazing and impossible to figure out.  We, the intelligent children of apes, have forgotten that our brains have limitations.  We think it’s cool, rather than an affront, to build electronic libraries so vast that every possible combination of words is already in them.  Me, I’m a human being.  I read, I write, I think.  And I experience.  No computer will ever know what it feels like to finally reach cold water after sweating outside all day under a hot sun.  Or the whispers in our heads, the jangling of our pulses, when we’ve just accomplished something momentous.  Machines, if they can “think” at all, can’t do it like team animal can.

I’m daily told that AI is the way of the future.  Companies exist that are trying to make all white collar employment obsolete.  And yet it still takes my laptop many minutes to wake up in the morning.  Its “knowledge” is limited by how fast I can type.  And when I type I’m using words.  But there are pictures in my brain at the same time that I can’t begin to describe adequately.  As a writer I try.  As a thinking human being, I know that I fail.  I’m willing to admit it.  Anything more than that is hubris.  It’s a word we can only partially define but we can’t help but act out.


Not Intelligent

The day AI was released—and I’m looking at you, ChatGPT—research died.  I work with high-level academics and many have jumped on the bandwagon despite the fact that AI cannot think and it’s horrible for the environment.  Let me say that first part again: AI cannot think.  I read a recent article where an author engaged AI about her work.  It is worth reading at length.  In short, AI makes stuff up.  It does not think—I say again, it cannot think—and tries to convince people that it can.  On principle, I do not even look at Google’s AI-generated answers when I search.  I’d rather go to a website created by one of my own species.  I even heard from someone recently that AI could be compared to demons.  (Not in a literal way.)  I wonder if there’s some truth to that.


I would’ve thought that academics, aware of the propensity of AI to give false information, would have shunned it.  Made a stand.  Lots of people are pressured, I know, by brutal schedules and high demands on the part of their managers (ugh!).  AI is a time cutter.  It’s also a corner cutter.  What if that issue you ask it about is one about which it’s lying?  (Here again, the article I mention is instructive.)  We know it has that tendency, rampant among politicians, to avoid the truth.  Yet it is being trusted, more and more.  When first ousted from the academy, I found research online difficult, if not impossible.  Verifying sources was difficult, if it could be done at all.  Since nullius in verba is something to which I aspire, this was a problem.  Now publishers, even academic ones, are talking about little else but AI.

I recently watched a movie that had been altered on Amazon Prime without those who’d “bought” it being told.  A crucial scene was omitted due to someone’s scruples.  I’ve purchased books online, and when the supplier goes bust, you lose what you paid for.  Electronic existence isn’t our savior.  Before GPS became necessary, I’d drive through major cities with a paper map and common sense.  Sometimes it even got me there quicker than AI seems to.  And sometimes you just want to take the scenic route.  Ever since consumerism has been pushed by the government, people have allowed their concerns about quality to erode.  Quick and cheap, thank you, then to the landfill.  I’m no longer an academic, but were I, I would not use AI.  I believe in actual research and I believe, with Mulder, that the truth is out there.