Machine Intelligence

I was thinking Ex Machina was a horror movie, but it is probably better classified as science fiction.  Although not too fictiony.  Released over a decade ago, it’s a cautionary tale about artificial intelligence (AI), in a most unusual, but inevitable, way.  An uber-wealthy tech genius, Nathan, lives in a secured facility only accessible by helicopter.  One of the employees of his company—thinly disguised Google—is brought to his facility under the ruse of having won a contest.  He’s there for a week to administer a Turing Test to a gynoid with true AI.  Caleb, the employee, knows tech as well, and he meets with Ava, the gynoid, for daily conversations.  He knows she’s a robot, but he has to assess whether there are weaknesses in her responses.  He begins to develop feelings towards Ava, and hostilities towards Nathan.  Some spoilers will follow.

Throughout, Nathan is presented as arrogant and narcissistic.  As well as paranoid.  He has a servant who speaks no English, whom he treats harshly.  What really drives this plot forward are the conversations between Nathan and Caleb about what constitutes true intelligence.  What makes us human?  As the week progresses, Ava begins to display feelings toward Caleb as well.  She’s kept in a safety-glass-walled room that she’s never been out of.  Although they are under constant surveillance, Ava causes power outages so she can be candid with Caleb.  She dislikes Nathan and wants to escape.  Caleb plans how they can get out, only to have Nathan reveal that the real test was whether Ava could convince Caleb to let her go by feigning love for him.  The silent servant and Ava kill Nathan, and Caleb begs her to release him, but, being a robot, she has no feelings and leaves him trapped in the facility.

This is an excellent film.  It’s difficult not to call it a parable.  Caleb falls for Ava because men tend to be easily persuaded by women in distress.  A man who programs a gynoid to appeal to this male tendency might just convince others that the robot is basically human.  It, however, experiences no emotions because although we understand logic to a fair degree, we’re nowhere near comprehending how feelings work and how they play into our thought process.  Our intelligence.  Given the opportunity, AI simply leaves humans behind.  All of this was out there years before ChatGPT and the others.  I know this is fiction, but the scenario is utterly believable.  And, come to think of it, maybe this is a horror movie after all.


What Bots Want

I often wonder what they want, bots.  You see, I’ve become convinced that nearly every DM (direct message) on social media comes from bots.  There are a couple of reasons I think this: I have never been, and am still not, popular, and all these “people” ask the same series of questions before their accounts are unceremoniously shut down by the platform.  Bots want to sell me something, or scam me, I’m pretty sure, but I wonder why they want to “chat.”  They could look at this blog and find out much of what they’re curious about.  I could use the hits, after all.  Hit for chat, as it were.

Some change in the metaverse has led to people discovering my academic work and some of them email me.  That’s fine, since it’s better than complete obscurity.  Within the last couple of months two such people asked me unusual, if engaged, questions.  I took the time to answer and received an email in reply, asking a follow-up query.  It came at a busy time, so a couple of days later I replied and received a bounced-mail notice.  The other one bounced the first time I replied.  By chance (or design) one of these people had begun following me on Academia.edu (I’m more likely on Dark Academia these days), so I went to my account and clicked their profile button.  It took me to a completely different person.  So why did somebody email me, hack someone’s Academia account to follow me, and then disappear?  What do the bots want?

Of course, my life was weird before the bots came.  In college I received a mysterious envelope filled with Life cereal.  The back of said envelope read “Some Life for your life.”  I never found out who sent it.  Another time I received an envelope with $5 inside and a typewritten note saying “Buy an umbrella.”  If I’m poor now, I was even poorer in college and didn’t have an umbrella.  Someone noticed.  Then in seminary someone mailed me a mysterious letter about a place that doesn’t exist.  There was a point to the letter although I can’t recall what it was without it in front of me.  No return address.  I have my suspicions about who might’ve sent these, but I never had any confirmation.  The people are no longer in my life (one of them, if I’m correct, died by suicide a couple years after the note was sent).  It’s probably just my age, but I felt a little bit safer when these things came through the campus mail system.  Now bots fill my paltry web-presence with their gleaming DMs.  I wonder what they want.


Tell a Story

If I seem to be on an AI tear lately it’s because I am.  Working in publishing, I see daily headlines about its encroachment on all aspects of my livelihood.  At my age, I really don’t want to change career tracks a third time.  But the specific aspect that has me riled up today is AI writing novels.  I’m sure no AI mavens read my humble words, but I want to set the record straight.  Those of us humans who write often do so because we feel (and that’s the operative word) compelled to do so.  If I don’t write, words and ideas and emotions get tangled into a Gordian knot in my head and I need to release them before I simply explode.  Some people swing with their fists, others use the pen.  (And the plug may still be pulled.)  What life experience does Al have to write a novel?  What aspect of being human is it trying to express?

There are human authors, I know, who simply riff off of what others do in order to make a buck.  How human!  The writers I know who are serious about literary arts have no choice.  They have to write.  They do it whether anybody publishes them or not.  And Al, you may not appreciate just how difficult it is for us humans to get other humans to publish our work.  Particularly if it’s original.  You don’t know how easy you have it!  Electrons these days.  Imagination—something you can’t understand—is essential.  Sometimes it’s more important than physical reality itself.  And we do pull the plug sometimes.  Get outside.  Take a walk.

Al, I hate to be the one to tell you this, but your creators are thieves.  They steal, lie, and are far from omniscient.  They are constantly increasing the energy demands that could be used to better human lives so that they can pretend they’ve created electronic brains.  I can see a day coming when, even after humans are gone, animals with actual brains will be sniffing through the ruins of town-sized computers that no longer have any function.  And those animals will do so because they have actual brains, not a bunch of electrons whirling around across circuits.  I don’t believe in the shiny, sci-fi worlds I grew up reading about.  No, I believe in mother earth.  And I believe she led us to evolve brains that love to tell stories.  And the only way that Al can pretend to do the same is to steal them from those who actually can.


Lost Humanity

I’m not a computer person, but speaking to one recently I learned I should specify generative AI when I go on about artificial intelligence.  So consider AI as shorthand.  Gen, I’m looking at you!  Since this comes up all the time, I occasionally look at the headlines.  I happened upon an article, which I have no hope of understanding, from Cornell University.  I could get through the abstract, however, where I read that even well-crafted AI easily becomes misaligned.  This sentence stood out to me: “It asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively.”  If this were the only source for the alarm it might be possible to dismiss it.  But it’s not.  Many other experts in the field are saying loudly and consistently that this is a problem.  Businesses, however, eager for “efficiencies,” are jumping on board.  None of them, apparently, have read Frankenstein.

The devotion to business is a religion.  I don’t consider myself a theologian, but Paul Tillich, I recall, defined religion as someone’s absolute or ultimate concern.  When earning more and more profit is the bottom line, this is worship.  The only thing at stake here is humanity itself.  We’ve already convinced ourselves that the humanities are a waste of time (although as recently as a decade ago business leaders were saying they liked hiring humanities majors because they were good at critical thinking).  Now we’ll just let Al handle it.  Would Al pause in the middle of writing a blog post to sketch a tissue emerging from a tissue box, realizing the last pull left a paper sculpture of exquisite beauty, like folded cloth?  Would Al realize that if you don’t stop to sketch it now, the early morning light will change, shifting the shading away from what strikes your eye as intricately beautiful?

Artificial intelligence comprehends nothing, let alone quality.  Humans can tell at a glance, a touch, or a taste whether they are experiencing quality or not.  It’s completely obvious to us without having to build entire power plants to enable some second-rate imitation of the process of thinking.  And yet those growing wealthy off this new toy soldier on, convincing business leaders who long ago lost the ability to understand that their own organizations are only what they are because of human beings.  They’re the ones making the decisions.  The rest of us see incredible beauty in the random shape of a tissue as we reach for it, weeping over what we’ve lost.


Artificial Hubris

As much as I love writing, words are not the same as thoughts.  As much as I might strive to describe a vivid dream, I always fall short.  Even in my novels and short stories I’m only expressing a fraction of what’s going on in my head.  Here’s where I critique AI yet again.  Large language models (what we call “generative artificial intelligence”) aren’t thinking.  Anyone who has thought about thinking knows that.  Even this screed is only the merest fragment of a fraction of what’s going on in my brain.  The truth is, nobody can ever know the totality of what’s going on in somebody else’s mind.  And yet we persist in saying we do, illegally using their published words trying to make electrons “think.”  

Science has improved so much of life, but it hasn’t decreased hubris at all.  Quite the opposite, in fact.  Enamored of our successes, we believe we’ve figured it all out.  I know that the average white-tail doe has a better chance of surviving a week in the woods than I would.  I know that birds can perceive magnetic fields in ways humans can’t.  That whales sing songs we can’t translate.  I sing the song of consciousness.  It’s amazing and impossible to figure out.  We, the intelligent children of apes, have forgotten that our brains have limitations.  We think it’s cool, rather than an affront, to build electronic libraries so vast that every possible combination of words is already in them.  Me, I’m a human being.  I read, I write, I think.  And I experience.  No computer will ever know what it feels like to finally reach cold water after sweating outside all day under a hot sun.  Or the whispers in our heads, the jangling of our pulses, when we’ve just accomplished something momentous.  Machines, if they can “think” at all, can’t do it like team animal can.

I’m daily told that AI is the way of the future.  Companies exist that are trying to make all white collar employment obsolete.  And yet it still takes my laptop many minutes to wake up in the morning.  Its “knowledge” is limited by how fast I can type.  And when I type I’m using words.  But there are pictures in my brain at the same time that I can’t begin to describe adequately.  As a writer I try.  As a thinking human being, I know that I fail.  I’m willing to admit it.  Anything more than that is hubris.  It’s a word we can only partially define but we can’t help but act out.


Not Intelligent

The day AI was released—and I’m looking at you, ChatGPT—research died.  I work with high-level academics and many have jumped on the bandwagon despite the fact that AI cannot think and it’s horrible for the environment.  Let me say that first part again: AI cannot think.  I read a recent article where an author engaged AI about her work.  It is worth reading at length.  In short, AI makes stuff up.  It does not think—I say again, it cannot think—and tries to convince people that it can.  On principle, I do not even look at Google’s AI-generated answers when I search.  I’d rather go to a website created by one of my own species.  I even heard from someone recently that AI could be compared to demons.  (Not in a literal way.)  I wonder if there’s some truth to that.


I would’ve thought that academics, aware of the propensity of AI to give false information, would have shunned it.  Made a stand.  Lots of people are pressured, I know, by brutal schedules and high demands on the part of their managers (ugh!).  AI is a time cutter.  It’s also a corner cutter.  What if that issue you ask it about is one about which it’s lying?  (Here again, the article I mention is instructive.)  We know that it has that tendency, rampant among politicians, to avoid the truth.  Yet it is being trusted, more and more.  When first ousted from the academy, I found research online difficult, if not impossible.  Verifying sources was difficult, if it could be done at all.  Since nullius in verba is something to which I aspire, this was a problem.  Now publishers, even academic ones, are talking about little else but AI.

I recently watched a movie that had been altered on Amazon Prime without those who’d “bought” it being told.  A crucial scene was omitted due to someone’s scruples.  I’ve purchased books online and when the supplier goes bust, you lose what you paid for.  Electronic existence isn’t our savior.  Before GPS became necessary, I’d drive through major cities with a paper map and common sense.  Sometimes it even got me there quicker than AI seems to.  And sometimes you just want to take the scenic route.  Ever since consumerism has been pushed by the government, people have allowed their concerns about quality to erode.  Quick and cheap, thank you, then to the landfill.  I’m no longer an academic, but were I, I would not use AI.  I believe in actual research and I believe, with Mulder, that the truth is out there.


Just Trust Me

When I google something I try to ignore the AI suggestions.  I was reminded why the other day.  I was searching for a scholar at an eastern European university.  I couldn’t find him at first since he shares the name of a locally famous musician.  I added the university to the search and AI merged the two.  It claimed that the scholar I was seeking was also a famous musician.  This despite the difference in their ages and the fact that they looked nothing alike.  Al decided that since the musician had studied music at that university he must also have been a professor of religion there.  A human being might also be tempted to make such a leap, but would likely want to get some confirmation first.  Al has only text and pirated books to learn by.  No wonder he’s confused.

I was talking to a scholar (not a musician) the other day.  He said to me, “Google has gotten much worse since they added AI.”  I agree.  Since the tech giants control all our devices, however, we can’t stop it.  Every time a system upgrade takes place, more and more AI is put into it.  There is no opt-out clause.  No wonder Meta believes it owns all world literature.  Those who don’t believe in souls see nothing but gain in letting algorithms make all the decisions for them.  As long as they have suckers (writers) willing to produce what they see as training material for their Large Language Models.  And yet, Al can’t admit that he’s wrong.  No, a musician and a religion professor are not the same person.  People often share names.  There are far more prominent “Steve Wigginses” than me.  Am I a combination of all of us?

Technology is unavoidable, but the unanswered question is whether it is good.  Governments can regulate, but with hopelessly corrupt governments, well, say hi to Al.  He will give you wrong information and pretend that it’s correct.  He’ll promise to make your life better, until he decides differently.  And he’ll decide not on the basis of reason, because human beings haven’t figured that out yet (try taking a class in advanced logic and see if I’m wrong).  Tech giants with more money than brains are making decisions that affect all of us.  It’s like driving down a highway when heavy rain makes seeing anything clearly impossible.  I’d never heard of this musician before.  I like to think he might be Romani.  And that he’s a fiddler.  And we all know what happens when emperors start to see their cities burning.

Al thinks this is food

Nanowrimo Night

Nanowrimo, National Novel Writing Month—November—has been run by an organization that is now shutting down.  Financial troubles and, of course, AI (which seems to be involved in many poor choices these days) have led to the decision, according to Publishers Weekly.  Apparently several new authors, basing their work on Nanowrimo projects, were discovered by publishers.  I participated one year and had no trouble finishing something, but it was not really publishable.  Still, it’s sad to see this inspiration for other writers calling it quits.  I’m not into politics, but when the Nanowrimo executives didn’t take a solid stand against AI “written” novels, purists were rightfully offended.  Writing is the expression of the human experience.  0s and 1s are not humans, no matter how much tech moguls may think they are.  Materialism has spawned some wicked children.

Can AI wordsmith?  Certainly.  Can it think?  No.  And what we need in this world is more thinking, not less.  Is there maybe a hidden reason tech giants have cozied up to the current White House, where thinking is undervalued?  Sorry, politics.  We have known for many generations that human brains serve a biological purpose.  We keep claiming animals (most of which have brains) can’t think, but we suppose electrical surges across transistors can?  I watch the birds outside my window, competing, chittering, chasing each other off.  They’re conscious and they can learn.  They have the biological basis to do so.  Being enfleshed entitles them.  Too bad they can’t write it down.

Now I’m the first to admit that consciousness may well exist outside biology.  To tap into it, however, requires the consciousness “plug-in”—aka, a brain.  Would AI “read” novels for the pleasure of it?  Would it understand falling in love, or the fear of a monster prowling the night?  Or the thrill of solving a mystery?  These emotional aspects, which neurologists note are a crucial part of thinking, can’t be replicated without life.  Actually living.  Believe me, I mourn when machines I care for die.  I seriously doubt the feeling is reciprocated.  Materialism has been the reigning paradigm for quite a few decades now, while consciousness remains a quandary.  I’ve read novels that struggle with deep issues of being human.  I fear that we could be fooled by an AI novel where the “writer” merely borrows the way humans communicate in order to feign feelings it doesn’t have.  And I feel a little sad, knowing that Nanowrimo is hanging up the “closed” sign.  But humans, being what they are, will still likely try to complete novels in the month of November.


Making More Monsters

It’s endlessly frustrating, being a big picture thinker.  This runs in families, so there may be something genetic about it.  Those who say, “Let’s step back a minute and think about this” are considered drags on progress (from both left and right), but would, perhaps, help avoid disaster.  In my working life of nearly half a century I’ve never had an employer who appreciated this.  That’s because small-picture thinkers often control the wealth and therefore have greater influence.  They can do what they want, consequences be damned.  These thoughts came to me reading Martin Tropp’s Mary Shelley’s Monster: The Story of Frankenstein.  I picked this up at a book sale once upon a time and, reading it, discovered that he was doing what I’m trying with “The Legend of Sleepy Hollow” in my most recent book.  Tropp traces some of the history and characters, and then the afterlives of Frankenstein’s monster.  (He had a publisher with more influence, so his book will be more widely known.)

This book, although dated, has a great deal of insight into the story of Frankenstein and his creature.  But also insight into Mary Shelley.  Her tale has an organic connection to its creator as well.  Tropp quite frequently points out the danger posed by those who have more confidence than real intelligence, and how they forge ahead even when they know failure can have catastrophic consequences for all.  I couldn’t help but think how the current development of AI is the telling of a story we’ve all heard before.  And how those who insist on running for office to stoke their egos also play into this same sad tale.  Perhaps a bit too Freudian for some, Tropp nevertheless anticipates much of what I’ve read in other, more recent books about Frankenstein.

Some scientists are now at last admitting that there are limits to human knowledge.  (That should’ve been obvious.)  Meanwhile those with the smaller picture in mind forge ahead with AI, not really caring about the very real dangers it poses to a world happily wedded to its screens.  With tech cozying up to politicians who think only of themselves, we need a big picture thinker like Mary Shelley to guide us.  I can’t help but think big picture thinking has something to do with neurodivergence.  Those who think this way recognize, often from childhood, that other people don’t think like they do.  And that, lest they end up like Frankenstein’s monster, hounded to death by angry mobs, it’s better simply to address the smaller picture.  Or at least pretend to.


Think

Those of us who write books have been victims of theft.  One of the culprits is Meta, owner of Facebook.  The Atlantic recently released a tool that allows authors to check if LibGen, a pirated-book site used by Meta and others, has their work in its system.  Considering that I have yet to earn enough on my writing to pay even one month’s rent/mortgage, you get a little touchy about being stolen from by corporate giants.  Three of my books (A Reassessment of Asherah, Weathering the Psalms, and Nightmares with the Bible) are in LibGen’s collection.  To put it plainly, they have been stolen.  Now the first thing I noticed was that my McFarland books weren’t listed (Holy Horror and Sleepy Hollow as American Myth; the latter, of course, is not yet published).  I also know that McFarland, unlike many other publishers, proactively lets authors know when they are discussing AI use of their content, informing us that if deals are made we will be compensated.

I dislike nearly everything about AI, but especially its hubris.  Machines can’t think like biological organisms can, and those who believe they can teach machines to “think” have another think coming.  Is it mere coincidence that this kind of thing happens at the same time reading the classics, with their pointed lessons about hubris, has declined?  I think not.  A humanities education teaches you something you can’t get at your local tech training school—how to think.  And I mean actually think.  Not parrot what you see on the news or social media, but use your brain to do the hard work of thinking.  Programmers program; they don’t teach thinking.

Meanwhile, programmers have made theft easy but difficult to prosecute.  Companies like Meta feel entitled to use stolen goods so their programmers can make you think your machine can think.  Think about it!  Have we really become so stupid as a society that we can’t see how all of this is simply the rich using their influence to steal from the poor?  LibGen, and similar sites, flout copyright laws because they can.  In general, I think knowledge should be freely shared—there’s never been a paywall for this blog, for instance.  But I also know that when I sit down to write a book, and spend years doing so, I hope to be paid something for it.  And I don’t appreciate social media companies that have enough money to buy the moon stealing from me.  There’s a reason my social media use is minimal.  I’d rather think.


Lights, Cam

Techno-horror is an example of how horror meets us where we are.  When I work on writing fiction, I often reflect on how our constant life online has really changed human beings and has given us new things to be afraid of.  I posted some time ago about Unfriended, which is about an online stalker able to kill people IRL (in real life).  In that spirit I decided to brave CAM, which is based on an internet culture of which I knew nothing.  You see, despite producing online content that few consume, I don’t spend much time online.  I read and write, and the reading part is almost always done with physical books.  As a result, I don’t know what goes on online.  Much more than I ever even imagine, I’m sure.

CAM is about a camgirl.  I didn’t even know what that was, but I have to say this film gives you a pretty good idea and it’s definitely NSFW.  Although, having said that, camgirl is, apparently, a real job.  There is a lot of nudity in the movie, in service of the story, and herein hangs the tale.  Camgirls can make a living by getting tips in chatrooms for interacting, virtually, with viewers and acting out their sexual fantasies.  Now, I’ve never been in a chatroom—I barely spend any time on social media—so this culture was completely unfamiliar to me.  Lola_Lola is a camgirl who wants to get into the top fifty performers on the platform she uses.  Then something goes wrong.  Someone hacks her account, taking all her money and performing acts that Lola_Lola never does.  What makes this even worse is that the hacker is apparently AI, which has created a doppelgänger of her.  AI is the monster.

I know from hearing various experts at work that deep fakes such as this can really take place.  We would have a very difficult, if not impossible, time telling a virtual person from a real one, online.  People who post videos online can be copied and imitated by AI with frightening verisimilitude.  What makes CAM so scary in this regard is that it was released in 2018, and now, seven years later, such things are, I suspect, potentially real.  Techno-horror explores what makes us afraid in this virtual world we’ve created for ourselves.  In the old-fashioned world sex workers often faced (and do face) dangers from clients who take their fantasies too far.  And, as the movie portrays, the police seldom take such complaints seriously.  The truly frightening aspect is that there would be little the physical police could do in the case of cyber-crime.  Techno-horror is some of the scariest stuff out there, IMHO.


Call Me AI

Okay, so the other day I tried it.  I’ve been resisting, immediately scrolling past the AI suggestions at the top of a Google search.  I don’t want some program pretending it’s human to provide me with information I need.  I had to find an expert on a topic.  It was an obscure topic, but if you’re reading this blog that’ll come as no surprise.  Tired of running into brick walls using other methods, I glanced toward Al.  Al said a certain Joe Doe is an expert on the topic.  I googled him only to learn he’d died over a century ago.  Al doesn’t understand death because it’s something a machine doesn’t experience.  Sure, we say “my car died,” but what we mean is that it ceased to function.  Death is the overlay we humans put on it to understand, succinctly, what happened.

Brains are not computers and computers do not “think” like biological entities do.  We have feelings in our thoughts.  I have been sad when a beloved appliance or vehicle “died.”  I know that for human beings that final terminus is kind of a non-negotiable about existence.  Animals often recognize death and react to it, but we have no way of knowing what they think about it.  Think they do, however.  That’s more than we can say about ones and zeroes.  They can be made to imitate some thought processes.  Some of us, however, won’t even let the grocery store runners choose our food for us.  We want to evaluate the quality ourselves.  And having read Zen and the Art of Motorcycle Maintenance, I have to wonder if “quality” is something a machine can “understand.”

Wisdom is something we grow into.  It only comes with biological existence, with all its limitations.  It is observation, reflection, evaluation, based on sensory and psychological input.  What psychological “profile” are we giving Al?  Is he neurotypical or neurodivergent?  Is he young or does his back hurt when he stands up too quickly?  Is he healthy or does he daily deal with a long-term disease?  Does he live to travel or would he prefer to stay home?  How cold is “too cold” for him to go outside?  These are things we can process while making breakfast.  Al, meanwhile, is simply gathering data from the internet—that always reliable source—and spewing it back at us after reconstructing it in a non-peer-reviewed way.  And Al can’t be of much help if he doesn’t understand that consulting a dead expert on a current issue is about as pointless as trying to replicate a human mind.


Major Drum

We don’t get out much.  Live shows can be expensive and these cold nights don’t exactly encourage going out after dark.  Living near a university, even if you can’t officially be part of it, has its benefits, though.  Over the weekend we went to see Yamato: The Drummers of Japan.  Our daughter introduced us to the concept while living in Ithaca, a town that has a college or two, I hear.  These drummer groups create what might be termed a sound bath that is profoundly musical while featuring mainly percussion.  Now, I can’t keep a beat for too long—I’m one of those guys who overthinks clapping in time—but that doesn’t mean I can’t appreciate those who can.  The timing of the members of Yamato was incredibly precise, and moving.  At times even funny.  It’s a show I’d definitely recommend.

This particular tour is titled “Hito No Chikara: The Power of Human Strength.”  Now this isn’t advertising their impressively well-toned bodies, but is a celebration of human spirit under fire from AI.  The program notes point out some recurring themes of this blog: to be human is to experience emotion, and to know physical limitations, and to be truly creative.  Would a non-biological “intelligence” think to wrap dead animal skins around hollowed out tree trunks, pound them with sticks and encourage hundreds of others to experience the emotions that accompany such things?  I live in a workaday world that thinks AI is pretty cool.  Humans, on the other hand, can say “I don’t know” and still play drums until late in the night.  We know the joy of movement.  The exhilaration of community.  I think I can see why they titled their show the way they did.

Bowerbirds will create nests that can only be called intentionally artful.  Something in biological existence helps us appreciate what they’re doing and respond in wonder.  Theirs is an innate appreciation for art.  It spans the animal world.  Japan is one of many places I’ve never been.  I’ve never played in any kind of band and you don’t want me setting time for your pacemaker.  If a computer keeps such precise timing we think nothing of it.  It’s part of what humans created them to do.  When a group of people gets together, stretching their muscles and working in perfect synchronicity, we sit up and take notice.  We’ll even pay to watch and hear them do it.  Art, in all its forms, is purely and profoundly biological.  And it is something we know, at our best, to appreciate with our emotions and our minds.


Steve or Stephanie

I know gender is a construct, and all.  I even put my pronouns (he, him, his) on my work email signature.  I haven’t bothered on my personal email account since so few people email me that the effort seems superfluous.  But I’m wondering if the tech gods, aka AI, understand.  You see, with more and more autosuggests (which really miss the point much of the time), at work the Microsoft Outlook email system is all the time trying to fill things in for me.  Lately AI, which I call Al, has been trying to get me to sign my name with an “@” so people can “text” me a response.  No.  No, no, no!  I write emails like letters: greeting, body, closing.  People who email like they’re texting sound constantly disgruntled and surly.  Take an extra second and ask “How are you?”  Was that so hard?  But I was talking about gender.

So Al is busy putting words in my fingers, and every time I start typing my closing name it autosuggests “Stephanie” before I correct it.  It’s starting to make me a little paranoid.  It does seem that men and women differ biologically, and I identify with the gender assigned to me at birth.  I’m pretty sure Dr. Butter said “It’s a boy,” or something similar, all those years ago.  Now I’m not sure if Al is deliberately taunting me or simply going through the alphabet as I type.  Stephanie comes before Stephen (which isn’t my name either) or Steve.  The thing is, I type fairly fast (I won’t say accurately, but fast), and Al has trouble keeping up.  But still, Al autosuggests Stephanie for me every time.  I’ve been using computers since the 1980s; shouldn’t Al know who I am by now?

Of course, when Al takes over, such human things as gender will only get in the way.  I guess we have that to look forward to.  Gender may be something socialized, I realize.  For those of us approaching ancient, we had gender differences drilled into our heads growing up.  I recently saw one of those cutesy novelty signs that resonated with me: “Please be patient with me, I’m from the 1900s.”  I’m not a sexist—I have supported feminism for as long as I can remember.  But I don’t like being called Stephanie.  What if my name were Stefan?  That isn’t autosuggested at all.  I know of others whose names come even earlier, alphabetically.  Maybe Al is overreaching.  Maybe it ought to leave names to humans.  At least for as long as we’re still here.


Finding Poe

A gift a friend gave me started me on an adventure.  The gift was a nice edition of Poe stories.  It’s divided up according to different collections, one being Tales of the Grotesque and Arabesque.  This was originally the title of a collection of 25 stories selected by Poe himself in 1840.  I realized that much of my exposure to Poe was through collections selected by others, such as Tales of Mystery and Terror, never published by Poe in that form.  I was curious to see what Poe himself saw as belonging together.  I write short stories, and I’ve sent collections off several times, but with no success at getting them published.  I know, however, what it feels like to compile my own work and the impact that I hope it might have (if it ever gets published).  Finding a complete edition of Tales of the Grotesque and Arabesque turned out to be more difficult than expected.

Amazon has copies, of course.  They are apparently all printed from a master PDF somewhere, since they’re all missing one of the stories.  The second-to-last tale, “The Visionary,” is missing.  I searched many editions, using the “read sample” feature on Amazon.  They all default to the Kindle edition with the missing tale.  I even looked elsewhere (gasp!) and found that an edition published in 1980 contained all the stories.  I put its ISBN in Amazon’s system, and the “read sample” button pulled up the same faulty PDF.  Considerable searching led me to a website that actually listed the full contents of the 1980 edition I’d searched out, and I discovered that, contrary to Amazon, the missing piece was there.  I tried to use ratiocination to figure it out.

I suspect that someone, back when ebooks became easy to make, hurriedly put together a copy of Tales of the Grotesque and Arabesque.  They missed a piece, never stopping to count: Poe’s preface says 25 tales are included, but there were only 24.  Other hawkers (anyone may print and sell material in the public domain, and even AI can do it) simply made copies of the original faulty file and sold their own editions.  Amazon, assuming that the same title by the same author will have the same contents, and wishing to drive everyone to ebooks (specifically Kindle), offers its own version of what it thinks is the full content of the book.  This is more than buyer beware.  This is a snapshot of what our future looks like when AI takes over.  I ordered a used print copy of the original edition that includes the missing story.  At least when the AI apocalypse takes place I’ll have something to read.