Word Words

So, in the old days, when books were paper, printers would rough out the typesetting on trays called galleys.  Proofs pulled from the type in these trays would be sent out for review.  Naturally enough, they were called galley proofs, or simply “galleys.”  After those came back marked up by the author, corrections and further refinements, like footnotes, were incorporated.  Then page proofs, or second proofs, were produced and sent again.  The process took quite a bit of time and, as I’ve now been through six sets of proofs for my own books, I can attest it takes time on both ends.  Electronic submissions have made all of this easier.  You don’t have to physically typeset, much of the time, unless you merit offset printing—books in quantity.  You can often find uncorrected proofs in used bookstores, and sometimes indie bookstores will give them away.  That’s all fine and good.  The problem comes in with nomenclature.

These days proofs are sometimes still called “galleys” although they’re seldom made anymore.  If someone asks about galleys, it is quite possible they’re asking about page proofs.  It is fairly common in academic publishing for an author to see only one set of proofs—technically second proofs, but since no galleys were set, they could be called that.  Or just proofs.  Now, I have to remind myself of how this works, periodically.  It was much clearer when the old way was in force.  There were a couple of reasons for doing galleys—one is that they were, comparatively, inexpensive to correct.  Another is that authors could catch mistakes before the very expensive corrections at the second proof stage.  Even now, when I receive proofs, I’m told that only corrections of errors should be made, not anything that will affect the flow, throwing off pagination.  This is especially important for books with an index, but it can also present problems for the table of contents.

Offset printing. Image credit: Sven Teschke, under GNU Free Documentation License, via Wikimedia Commons

The ToC, or table of contents, also leads to another bit of publisher lingo.  When something is outstanding and expected before long, many editors abbreviate it “TK,” for “to come.”  Why?  “TC” is sometimes used to mean “ToC” or table of contents.  There are hundreds of thousands of words in the English language, yet we keep on bumping up against ambiguities, using our favorites over and again.  That’s a funny thing since publishers are purveyors of words.  None of my books has been printed in the quantity that requires galleys.  In fact, academic books, despite costing a Franklin, are often pulped because they’re more expensive to warehouse than they are to sell.  This is always a hard lesson for an academic to learn.  The sense behind it is TK.


Artificial Hubris

As much as I love writing, words are not the same as thoughts.  As much as I might strive to describe a vivid dream, I always fall short.  Even in my novels and short stories I’m only expressing a fraction of what’s going on in my head.  Here’s where I critique AI yet again.  Large language models (what we call “generative artificial intelligence”) aren’t thinking.  Anyone who has thought about thinking knows that.  Even this screed is only the merest fragment of a fraction of what’s going on in my brain.  The truth is, nobody can ever know the totality of what’s going on in somebody else’s mind.  And yet we persist in claiming we can, illegally using people’s published words to try to make electrons “think.”

Science has improved so much of life, but it hasn’t decreased hubris at all.  Quite the opposite, in fact.  Enamored of our successes, we believe we’ve figured it all out.  I know that the average white-tail doe has a better chance of surviving a week in the woods than I would.  I know that birds can perceive magnetic fields in ways humans can’t.  That whales sing songs we can’t translate.  I sing the song of consciousness.  It’s amazing and impossible to figure out.  We, the intelligent children of apes, have forgotten that our brains have limitations.  We think it’s cool, rather than an affront, to build electronic libraries so vast that every possible combination of words is already in them.  Me, I’m a human being.  I read, I write, I think.  And I experience.  No computer will ever know what it feels like to finally reach cold water after sweating outside all day under a hot sun.  Or the whispers in our heads, the jangling of our pulses, when we’ve just accomplished something momentous.  Machines, if they can “think” at all, can’t do it like team animal can.

I’m daily told that AI is the way of the future.  Companies exist that are trying to make all white-collar employment obsolete.  And yet it still takes my laptop many minutes to wake up in the morning.  Its “knowledge” is limited by how fast I can type.  And when I type I’m using words.  But there are pictures in my brain at the same time that I can’t begin to describe adequately.  As a writer I try.  As a thinking human being, I know that I fail.  I’m willing to admit it.  Anything more than that is hubris.  It’s a word we can only partially define but we can’t help but act out.


Not Intelligent

The day AI was released—and I’m looking at you, ChatGPT—research died.  I work with high-level academics and many have jumped on the bandwagon despite the fact that AI cannot think and it’s horrible for the environment.  Let me say that first part again: AI cannot think.  I read a recent article where an author engaged AI about her work.  It is worth reading at length.  In short, AI makes stuff up.  It does not think—I say again, it cannot think—and tries to convince people that it can.  On principle, I do not even look at Google’s AI-generated answers when I search.  I’d rather go to a website created by one of my own species.  I even heard from someone recently that AI could be compared to demons.  (Not in a literal way.)  I wonder if there’s some truth to that.

Photo by Igor Omilaev on Unsplash

I would’ve thought that academics, aware of the propensity of AI to give false information, would have shunned it.  Made a stand.  Lots of people are pressured, I know, by brutal schedules and high demands on the part of their managers (ugh!).  AI is a time cutter.  It’s also a corner cutter.  What if that issue you ask it about is one about which it’s lying?  (Here again, the article I mention is instructive.)  We know that it has that tendency, rampant among politicians, to avoid the truth.  Yet it is being trusted, more and more.  When first ousted from the academy, I found research online difficult, if not impossible.  Verifying sources was difficult, if it could be done at all.  Since nullius in verba (“take nobody’s word for it”) is something to which I aspire, this was a problem.  Now publishers, even academic ones, are talking about little else but AI.

I recently watched a movie that had been altered on Amazon Prime without those who’d “bought” it being told.  A crucial scene was omitted due to someone’s scruples.  I’ve purchased books online and when the supplier goes bust, you lose what you paid for.  Electronic existence isn’t our savior.  Before GPS became necessary, I’d drive through major cities with a paper map and common sense.  Sometimes it even got me there quicker than AI seems to.  And sometimes you just want to take the scenic route.  Ever since consumerism has been pushed by the government, people have allowed their concerns about quality to erode.  Quick and cheap, thank you, then to the landfill.  I’m no longer an academic, but were I, I would not use AI.  I believe in actual research and I believe, with Mulder, that the truth is out there.


Remembering to Forget

I think I’ve discussed memory before.  I forget.  Anyway, I recently ran across Ebbinghaus’ Forgetting Curve.  Now, I’ve long known that when you reach my age (let’s just say closer to a century than to 1), your short-term memory tends to suffer.  I value my memories, so I try to refresh what’s important frequently.  In any case, Ebbinghaus’ curve isn’t, as far as I can tell, age specific.  It’s primarily an adult problem, but it also resonates with any of us who had to study hard to recall things in school.   The forgetting curve suggests that within one hour of learning new information, 42% is forgotten.  Within 24 hours, 67% is gone.  This is why teachers “drill” students.  Hopefully you’ll remember things like the multiplication table until well into retirement age because you had to repeat it until it stuck.  Where you put the car keys, however, is in that 42%.
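For what it’s worth, the curve itself is usually summarized with a simple exponential formula (a rough sketch of the standard textbook form, not something Ebbinghaus stated in quite these terms): retention R after time t is roughly R = e^(-t/S), where S stands for the strength or stability of the memory.  The percentages above are empirical averages rather than outputs of that formula, but the shape is the point: the drop is steepest right after learning, and each review (the drilling teachers do) effectively increases S and flattens the curve.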

I’m a creature of habit.  One of the reasons is that I fear forgetting where something important might be.  The other day it was my wallet.  In these remote working days you don’t need to put on fully equipped pants every day.  Pajama bottoms work fine for Zoom meetings, and if you don’t have to go anywhere, why fuss with pants laden with wallet, cell phone, and pocket tissues?  You can put your phone on the desk next to a box of tissues.  The wallet gets left in its usual pocket.  One day I pulled on the pants I’d last worn, and as I was heading to the car I noticed my wallet was gone.  Fighting Ebbinghaus, I tried to remember where I’d last used my wallet.  We’d gone to a restaurant the previous weekend, and that seemed the most likely culprit.  It could’ve fallen out in the car, or maybe down a crack in an overstuffed chair.  I couldn’t find it anywhere, swearing to myself I was going to buy one of those wallet chains if I ever found it again.

(I did eventually find it, in the bathroom.  Apparently this has happened to others as well.)  In this instance, my memory was not to blame.  It had been right in the pocket where I last remembered putting it.  But other things do slip.  Think about the most recent book you read.  How much of it do you remember?  That’s the part that scares me.  I spend lots of time reading, and more than half is gone a day after it’s read.  Unless it’s reinforced.  The solution, I guess, is to read even more.  Maybe about Ebbinghaus’ Forgetting Curve.


Letting Go

I should’ve known from the title that this would be a sad story.  Kazuo Ishiguro won the Nobel Prize in Literature despite writing speculative fiction like Never Let Me Go.  The novel is only mildly speculative, but enough so that it is sometimes classed as science fiction.  It’s appropriate that a twentieth anniversary edition was released because it is an extended consideration of the price of technology as well as dehumanization.  I’ll need to put in some spoilers, so here’s the usual caveat.  I read this novel because it’s often cited as an example of dark academia and it certainly fits that aesthetic.  It starts out at a private school called Hailsham, in England.  The students are given some privileges but their lives aren’t exactly posh.  Most of their possessions are purchased on days when a truck sells them things they can buy with money they earn by creating art.  They aren’t allowed to leave the school.  Spoilers follow.

The children’s circumstances are special because they’re clones being grown for replacement organs.  The public doesn’t want to know about them or interact with them.  In fact, most people believe they don’t have souls, or aren’t really human.  They’ve been created to be used and exploited until they die, always prematurely.  While this may sound grim, the story is thoughtfully told through the eyes of one of these children, Kathy.  She becomes best friends with Ruth and Tommy, who later become a couple.  Ruth is a difficult personality, but likable.  As they grow they’re slowly given the facts about what their life will be.  They’re raised to comply, never to rebel or question their role.  Most simply accept it.  Kathy, Ruth, and Tommy, in a submissive way, try to get a deferral regarding their “donations.”

I suppose it’s presumptuous to say of a Nobel Prize winner that he writes well, but I’ll say it anyway.  Ishiguro manages to capture the exploratory friendships of youth and reveals what you need to know in slow doses, all told in a compelling, if sad and accepting, voice.  Although the genre could be sci-fi, it’s set in the present, or, more accurately, about twenty years ago.  The technology, apart from the cloning, is about what it was at the turn of the century, or maybe a decade or two before that.  With what we see happening in the world right now, people should be reading books like this that help them understand that people are people, not things to be exploited.  And that Nobel Prizes should be reserved for those who actually deserve them for their contributions to humanity.


More Writing

I keep a list.  It includes everything that I’ve published.  It’s not on my CV since I keep my fiction pretty close to my vest.  The other day I stumbled across another electronic list I’d made some time ago of the unpublished books I’d written.  Most were fiction but at least two were non, and so I decided that I should probably print out copies of those I still had.  As I’ve probably written elsewhere, I started my first novel as a teenager.  I never finished it, but I still remember it pretty well.  Then I started another, also unfinished.  After my wife and I got engaged and before we moved to Scotland, I’d moved to Ann Arbor to be in her city.  Ann Arbor, like most university towns, has many overqualified people looking for work, and I ended up doing secretarial support for companies that, quite a bit of the time, really had nothing for me to do.  I wrote my first full novel during dull times on the job.

My writing was pretty focused in Edinburgh.  My first published book was, naturally, my dissertation.  I started writing fiction again when I was hired by Nashotah House, but that was tempered by academic articles and my second book.  An academic life, it seems, doesn’t leave a ton of time for writing.  What really surprised me about my list was what happened after Nashotah.  In the years since then I’ve completed ten unpublished books.  Since my ouster from academia I’ve published five.  I honestly don’t know how many short stories I’ve finished, but I have published thirty-three.  What really worries me is that some of these only exist in tenuous electronic form.  I guess I trust the internet enough to preserve these blog posts; with over 5,700 of them, I’d run out of space if I tried to print them all.

I see a trip to buy some paper in my future.  For my peace of mind I need to make sure all of this is printed out.  My organizational scheme (which is perhaps not unusual for those with my condition) is: I know which pile I put it in.  Organizing it for others, assuming anybody else is interested, might not be a bad idea.  I know that if I make my way to the attic and begin looking through my personal slush pile of manuscripts I’ll find even more that I’ve forgotten.  That’s why I started keeping a list.  Someday I’ll have time to finish it, I hope.


Just Trust Me

When I google something I try to ignore the AI suggestions.  I was reminded why the other day.  I was searching for a scholar at an eastern European university.  I couldn’t find him at first since he shares the name of a locally famous musician.  I added the university to the search and AI merged the two.  It claimed that the scholar I was seeking was also a famous musician.  This despite the difference in their ages and the fact that they looked nothing alike.  Al decided that since the musician had studied music at that university he must also have been a professor of religion there.  A human being might also be tempted to make such a leap, but would likely want to get some confirmation first.  Al has only text and pirated books to learn by.  No wonder he’s confused.

I was talking to a scholar (not a musician) the other day.  He said to me, “Google has gotten much worse since they added AI.”  I agree.  Since the tech giants control all our devices, however, we can’t stop it.  Every time a system upgrade takes place, more and more AI is put into it.  There is no opt-out clause.  No wonder Meta believes it owns all world literature.  Those who don’t believe in souls see nothing but gain in letting algorithms make all the decisions for them.  As long as they have suckers (writers) willing to produce what they see as training material for their Large Language Models.  And yet, Al can’t admit that he’s wrong.  No, a musician and a religion professor are not the same person.  People often share names.  There are far more prominent “Steve Wigginses” than me.  Am I a combination of all of us?

Technology is unavoidable, but the unanswered question is whether it is good.  Governments can regulate, but with hopelessly corrupt governments, well, say hi to Al.  He will give you wrong information and pretend that it’s correct.  He’ll promise to make your life better, until he decides differently.  And he’ll decide not on the basis of reason, because human beings haven’t figured that out yet (try taking a class in advanced logic and see if I’m wrong).  Tech giants with more money than brains are making decisions that affect all of us.  It’s like driving down a highway when heavy rain makes seeing anything clearly impossible.  I’d never heard of this musician before.  I like to think he might be Romani.  And that he’s a fiddler.  And we all know what happens when emperors start to see their cities burning.

Al thinks this is food

Nanowrimo Night

Nanowrimo, National Novel Writing Month—November—has been run by an organization that is now shutting down.  Financial troubles and, of course, AI (which seems to be involved in many poor choices these days) have led to the decision, according to Publishers Weekly.  Apparently several new authors were discovered by publishers on the basis of their Nanowrimo projects.  I participated one year and had no trouble finishing something, but it was not really publishable.  Still, it’s sad to see this inspiration for other writers calling it quits.  I’m not into politics, but when the Nanowrimo executives didn’t take a solid stand against AI “written” novels, purists were rightfully offended.  Writing is the expression of the human experience.  0s and 1s are not humans, no matter how much tech moguls may think they are.  Materialism has spawned some wicked children.

Can AI wordsmith?  Certainly.  Can it think?  No.  And what we need in this world is more thinking, not less.  Is there maybe a hidden reason tech giants have cozied up to the current White House where thinking is undervalued?  Sorry, politics.  We have known for many generations that human brains serve a biological purpose.  We keep claiming animals (most of which have brains) can’t think, but we suppose electrical surges across transistors can?  I watch the birds outside my window, competing, chittering, chasing each other off.  They’re conscious and they can learn.  They have the biological basis to do so.  Being enfleshed entitles them.  Too bad they can’t write it down.

Now I’m the first to admit that consciousness may well exist outside biology.  To tap into it, however, requires the consciousness “plug-in”—aka, a brain.  Would AI “read” novels for the pleasure of it?  Would it understand falling in love, or the fear of a monster prowling the night?  Or the thrill of solving a mystery?  These emotional aspects, which neurologists note are a crucial part of thinking, can’t be replicated without life.  Actually living.  Believe me, I mourn when machines I care for die.  I seriously doubt the feeling is reciprocated.  Materialism has been the reigning paradigm for quite a few decades now, while consciousness remains a quandary.  I’ve read novels that struggle with deep issues of being human.  I fear that we could be fooled by an AI novel whose “writer” merely borrows the way humans communicate in order to mimic feeling.  And I feel a little sad, knowing that Nanowrimo is hanging up the “closed” sign.  But humans, being what they are, will still likely try to complete novels in the month of November.


Finding Fossils

Mary Anning was a real woman.  She made valuable contributions to paleontology in the first half of the nineteenth century, although she wasn’t always credited for her work.  The movie Ammonite is a fictionalized account of her life at Lyme Regis, where she lived and discovered fossils of marine reptiles such as ichthyosaurs and plesiosaurs.  Being fiction, the movie focuses on how Mary “came out of her shell” by entering into a relationship with Charlotte Murchison (also an historical person, wife of the Scottish geologist Sir Roderick Impey Murchison), who was left in Mary’s care when she came down with a fever while trying to recover from melancholy by taking the sea air.  Mary had established a life of independence and wasn’t really seeking relationships; her mother still lived with her and, according to the movie, they had a distant but loving regard for each other.

I was anxious to see the film because it is sometimes classified as dark academia.  Since I’m trying to sharpen my sense of what that might mean, it’s helpful to watch what others think fits.  The academia part here comes from the intellectual pursuits of Anning and the academic nature of museum life (one of her fossils was displayed at the British Museum).  Anning, who had no formal academic training, tried to make a living in a “man’s world,” and in real life she did contribute significantly to paleontology.  The dark part seems to come in from her exclusion from the scientific community, and perhaps in her love for Charlotte, a forbidden relationship in that benighted time.  Of course, this relationship is entirely speculative.

Fictional movies made about factual people make me curious about the lives of those deemed movie-worthy.  Ammonite is a gentle movie and one which raises the question of why women were excluded from science for so long.  No records exist that address her sexuality—not surprisingly, since she lived during a period when such things weren’t discussed.  Indeed, she didn’t receive the acclaim that she might have, had she lived in the period of Jurassic Park.  She was noticed by Charles Dickens, who included a piece on her in his magazine All the Year Round in 1865, nearly two decades after her death.  These days she is acknowledged and commemorated.  This movie is one such commemoration, although much of it likely never happened.  As with many art house movies, factual accuracy isn’t to be assumed.  Nevertheless, it might still be dark academia.


Making More Monsters

It’s endlessly frustrating, being a big picture thinker.  This runs in families, so there may be something genetic about it.  Those who say, “Let’s step back a minute and think about this” are considered drags on progress (from both left and right), but would, perhaps, help avoid disaster.  In my working life of nearly half a century I’ve never had an employer who appreciated this.  That’s because small-picture thinkers often control the wealth and therefore have greater influence.  They can do what they want, consequences be damned.  These thoughts came to me reading Martin Tropp’s Mary Shelley’s Monster: The Story of Frankenstein.  I picked this up at a book sale once upon a time and, reading it, discovered that he was doing what I’m trying to do with “The Legend of Sleepy Hollow” in my most recent book.  Tropp traces some of the history and characters, and then the afterlives of Frankenstein’s monster.  (He had a publisher with more influence, so his book will be more widely known.)

This book, although dated, has a great deal of insight into the story of Frankenstein and his creature.  But also, insight into Mary Shelley.  Her tale has an organic connection to its creator as well.  Tropp quite frequently points out the warning posed by those who have more confidence than real intelligence, and how they forge ahead even when they know failure can have catastrophic consequences for all.  I couldn’t help but think how the current development of AI is the telling of a story we’ve all heard before.  And how those who insist on running for office to stoke their egos also play into this same sad tale.  Perhaps a bit too Freudian for some, Tropp nevertheless anticipates much of what I’ve read in other books about Frankenstein, written in more recent days.

Some scientists are now at last admitting that there are limits to human knowledge.  (That should’ve been obvious.)  Meanwhile those with the smaller picture in mind forge ahead with AI, not really caring about the very real dangers it poses to a world happily wedded to its screens.  With its promoters cozying up to politicians who think only of themselves, we need a big picture thinker like Mary Shelley to guide us.  I can’t help but think big picture thinking has something to do with neurodivergence.  Those who think this way recognize, often from childhood, that other people don’t think like they do.  And that, lest they end up like Frankenstein’s monster, hounded to death by angry mobs, it’s better simply to address the smaller picture.  Or at least pretend to.


Telling Vision

My wife and I don’t watch much television.  In fact, we had very poor television reception from about 1988 (when we married) until we moved into this house in 2018.  That’s three decades without really watching the tube.  As we’ve been streaming/DVDing some of the series that have made a splash in those three decades, I’ve discovered (I can’t speak for her) that there were some great strides made in quality.  We began with The X-Files, then moved on to Lost.  We viewed Twin Peaks and started to watch Picket Fences, but the digital rights have expired so we never did finish out that series.  (Don’t get me started on digital rights management—the air will quickly turn blue, I assure you.)  Of course, we did manage to see Northern Exposure when it aired, but it should be mentioned for the sake of completeness.  These were all exceptional programs.

Netflix (in particular) upped the game.  We watched the first three seasons of Stranger Things (and I’d still like to go back and pick up the more recent ones we’ve missed), and I watched the first episode of The Fall of the House of Usher (I didn’t realize until writing this up that it was only one season, so maybe I should go back and finish that one out too) and was very impressed.  Since we couldn’t finish Picket Fences, we turned to Wednesday.  Now, I was only ever a middling fan of The Addams Family.  I watched it as a kid because it had monsters, as I did The Munsters, but neither one really appealed to me.  Wednesday’s cut from a different cloth.  In these days when escapism is necessary, this can be a good thing.

Photo credit: Smithsonian Institution

Like most late boomers, I grew up watching television.  In my early memories, it’s pretty much ubiquitous.  We were poor, and our sets were black-and-white, but remembering my childhood without TV is impossible.  It was simply there.  The shows I watched formed me.  Now that I’m perhaps beyond excessive reforming (although I’m not opposed to the idea), I’m looking for brief snippets of something intelligent to wind up the day before I reboot to start this all over again.  We save movies for weekends, but an entire workweek without a break in nonfictional reality seems overwhelming on a Sunday evening.  It seems that I may be warming up to my childhood chum again.  This time, without network schedules, and limited time to spend doing it, we just may be in a golden age for the tube.


Protected?

I like Macs.  Really, I do.  Ever since I realized that “Windows” was a cut-rate way to imitate Macintosh’s integral operating system, I’ve never been able to look back.  (I don’t have a tech background so I may be wrong in the details.)  Every time I use a work laptop—inevitably PCs—I realize just how unintuitive they are.  Apple engineers, it seems, understand the way ordinary people think.  I sometimes use software, not designed for a Mac, where I swear the engineers have no basic comprehension of English words at all.  And nobody ever bothers to correct them.  In any case, I find Macs intuitive and I’ve been using them for going on 40 years now.  But the intuitive element isn’t as strong as it used to be.  As we’re all expected to become more tech savvy, some of the ease of use has eroded.

For example, when I have to create a password for a website—not quite daily, but a frequent activity—Mac helpfully offers to create a strong password that I will never have to remember.  Now before you point out to me that software exists that will keep all your passwords together, please be advised that I know about such things.  The initial data entry to get set up requires more time off than I typically get in a year, so that’ll need to wait for retirement.  But I was talking about intuitive programming.  Often, when I think I won’t be visiting a website often, I’ll opt for the strong password.  Maybe I’ve got something pressing that I’m trying to accomplish and I can’t think of my three-thousandth unique password.  I let Mac drive.  That’s fine and good until there’s an OS update.  This too happens not quite daily, but it does sometimes occur more than once a week.

After restarting I go back to a website and the autofill blinks at me innocently as if it doesn’t recognize my username.  It doesn’t remember the strong password, and I certainly don’t.  So I need to come up with yet another new one.  At work I’m told you should change all your passwords every few months.  To me that seems like a full-time job.  For grey matter as time-honored as mine, it’s not an easy task.  I’m not about to ditch Macs because of this, but why offer me a strong password that only lasts until the next system update?  Truth be told, I’m a little afraid to post this because if by some miraculous chance a software engineer reads it and decides to act, a new system update will be required again tonight.


Remembering Consciousness

I recently inadvertently read—it happens!—about anesthesia.  I’ve been relatively healthy for most of my adult life and have experienced anesthesia only for dental surgery and colonoscopies.  I’ve actually written about the experience here before: anesthesia is not like sleep.  You awake like you’ve just been born.  You weren’t, and then suddenly you are.  This always puzzled me because consciousness is something nobody fully understands, and opinions vary widely about what happens to it when your body dies.  (I have opinions, backed by evidence, about this, but that’s for another time.)  What I read about anesthesia made a lot of sense of this conundrum, but it doesn’t answer the question of what consciousness is.  What I learned is this: anesthesiologists often include amnestics (chemicals that make you forget) in their cocktail.  That is, you may be awake, or partially so, during the procedure, but when you become conscious again you can’t remember it.

Now, that may bother some people, but for me it raises very interesting issues.  One is that I had no idea amnestics existed.  (It certainly sheds new light on those who claim alien abduction but who only remember under hypnosis.)  Who knew that we even have the ability to make people forget, chemically?  That, dear reader, is a very scary thought.  Tip your anesthesiologist well!  For me, I don’t mind so much if I can’t remember it, but it does help answer that question of why emerging from anesthesia is not the same as waking up.  Quite unrelated to this reading, I once watched a YouTube video of some prominent YouTubers (yes, that is a full-time job now) undergoing colonoscopies together.  They filmed each other talking during the procedure, often with hilarious results.  The point being, they were not fully asleep.  The blankness I experience after my own colonoscopies is born of being made to forget.

I think I have a pretty good memory.  Like most guys my age, I do forget things more easily—especially when work throws a thousand things at you simultaneously and you’re expected to catch and remember all of them.  Forgetting things really bothers me.  If you haven’t watched Christopher Nolan’s early film Memento, you should.  I think I remember including it in Holy Horror.  In any case, I don’t mind if anesthesiologists determine that it’s better to forget what might’ve happened when the last thing I remember is having been in an extremely compromised position in front of total strangers of both genders.  My accidental reading has solved one mystery for me, but it leaves open that persistent question of what consciousness really is.


Think

Those of us who write books have been victims of theft.  One of the culprits is Meta, owner of Facebook.  The Atlantic recently released a tool that allows authors to check if LibGen, a pirated book site used by Meta and others, has their work in its system.  Considering that I have yet to earn enough on my writing to pay even one month’s rent/mortgage, I get a little touchy about being stolen from by corporate giants.  Three of my books (A Reassessment of Asherah, Weathering the Psalms, and Nightmares with the Bible) are in LibGen’s collection.  To put it plainly, they have been stolen.  Now the first thing I noticed was that my McFarland books weren’t listed (Holy Horror and Sleepy Hollow as American Myth, though the latter, of course, is not yet published).  I also know that McFarland, unlike many other publishers, proactively lets authors know when they are discussing AI use of their content, and informs us that if deals are made we will be compensated.

I dislike nearly everything about AI, but especially its hubris.  Machines can’t think like biological organisms can, and biological organisms that think they can teach machines to “think” have another think coming.  Is it mere coincidence that this kind of thing happens at the same time reading the classics, with their pointed lessons about hubris, has declined?  I think not.  A humanities education teaches you something you can’t get at your local tech training school—how to think.  And I mean actually think.  Not to parrot what you see on the news or social media, but to use your brain to do the hard work of thinking.  Programmers program, they don’t teach thinking.

Meanwhile, programmers have made theft easy but difficult to prosecute.  Companies like Meta feel entitled to use stolen goods so their programmers can make you think your machine can think.  Think about it!  Have we really become so stupid as a society that we can’t see how all of this is simply the rich using their influence to steal from the poor?  LibGen, and similar sites, flout copyright laws because they can.  In general, I think knowledge should be freely shared—there’s never been a paywall for this blog, for instance.  But I also know that when I sit down to write a book, and spend years doing so, I hope to be paid something for the effort.  And I don’t appreciate social media companies that have enough money to buy the moon stealing from me.  There’s a reason my social media use is minimal.  I’d rather think.


Lights, Cam

Techno-horror is an example of how horror meets us where we are.  When I work on writing fiction, I often reflect how our constant life online has really changed human beings and has given us new things to be afraid of.  I posted some time ago about Unfriended, which is about an online stalker able to kill people IRL (in real life).  In that spirit I decided to brave CAM, which is based on  an internet culture of which I knew nothing.  You see, despite producing online content that few consume, I don’t spend much time online.  I read and write, and the reading part is almost always done with physical books.  As a result, I don’t know what goes on online.  Much more than I ever even imagine, I’m sure.

CAM is about a camgirl.  I didn’t even know what that was, but I have to say this film gives you a pretty good idea and it’s definitely NSFW.  Although, having said that, camgirl is, apparently, a real job.  There is a lot of nudity in the movie, in service of the story, and herein hangs the tale.  Camgirls can make a living by getting tips in chatrooms for interacting, virtually, with viewers and acting out their sexual fantasies.  Now, I’ve never been in a chatroom—I barely spend any time on social media—so this culture was completely unfamiliar to me.  Lola_Lola is a camgirl who wants to get into the top fifty performers on the platform she uses.  Then something goes wrong.  Someone hacks her account, taking all her money and performing acts that Lola_Lola never does.  What makes this even worse is that the hacker is apparently AI, which has created a doppelgänger of her.  AI is the monster.

I know from hearing various experts at work that deep fakes such as this can really take place.  We would have a very difficult, if not impossible, time telling a virtual person from a real one, online.  People who post videos online can be copied and imitated by AI with frightening verisimilitude.  What makes CAM so scary in this regard is that it was released in 2018, and now, seven years later, such things are, I suspect, potentially real.  Techno-horror explores what makes us afraid in this virtual world we’ve created for ourselves.  In the old-fashioned world sex workers often faced (and do face) dangers from clients who take their fantasies too far.  And, as the movie portrays, the police seldom take such complaints seriously.  The truly frightening aspect is that there would be little the physical police could do in the case of cyber-crime.  Techno-horror is some of the scariest stuff out there, IMHO.