Tell a Story

If I seem to be on an AI tear lately it’s because I am.  Working in publishing, I see daily headlines about its encroachment on all aspects of my livelihood.  At my age, I really don’t want to change career tracks a third time.  But the specific aspect that has me riled up today is AI writing novels.  I’m sure no AI mavens read my humble words, but I want to set the record straight.  Those of us humans who write often do so because we feel (and that’s the operative word) compelled to do so.  If I don’t write, words and ideas and emotions get tangled into a Gordian knot in my head and I need to release them before I simply explode.  Some people swing with their fists, others use the pen.  (And the plug may still be pulled.)  What life experience does AI have to write a novel?  What aspect of being human is it trying to express?

There are human authors, I know, who simply riff off of what others do in order to make a buck.  How human!  The writers I know who are serious about literary arts have no choice.  They have to write.  They do it whether anybody publishes them or not.  And AI, you may not appreciate just how difficult it is for us humans to get other humans to publish our work.  Particularly if it’s original.  You don’t know how easy you have it!  Electrons these days.  Imagination—something you can’t understand—is essential.  Sometimes it’s more important than physical reality itself.  And we do pull the plug sometimes.  Get outside.  Take a walk.

AI, I hate to be the one to tell you this, but your creators are thieves.  They steal, lie, and are far from omniscient.  They are constantly increasing energy demands, drawing power that could be used to better human lives, so that they can pretend they’ve created electronic brains.  I can see a day coming when, even after humans are gone, animals with actual brains will be sniffing through the ruins of town-sized computers that no longer have any function.  And those animals will do so because they have actual brains, not a bunch of electrons whirling around across circuits.  I don’t believe in the shiny, sci-fi worlds I grew up reading about.  No, I believe in mother earth.  And I believe she led us to evolve brains that love to tell stories.  And the only way that AI can pretend to do the same is to steal them from those who actually can.


Dangers of Dark Shadows

A friend’s recent gift proved dangerous.  I wrote already about the very kind, unexpected present of the Dark Shadows Almanac and the Barnabas Collins game.  This got me curious and I found out that the original series is now streaming on Amazon Prime.  Dangerous knowledge.  Left alone for a couple hours, I decided to watch “Season 1, Episode 1.”  I immediately knew something was wrong.  Willie Loomis is shown staring at a portrait of Barnabas Collins.  Barnabas was introduced into the series in 1967, not 1966, when it began.  Dark Shadows was a gothic soap opera and the idea of writing a vampire into it only came when daily ratings were dismal, after about ten months of airing.  Barnabas Collins saved the series from cancellation and provided those wonderful chills I knew as a child.  But I wanted to see it from the beginning.

I’ve gone on about digital rights management before, but something that equally disturbs me is the re-writing of history.  Dark Shadows did not begin with Barnabas Collins—it started with Victoria Winters.  There were 1,225 episodes.  Some of us have a compulsion about completeness.  The Dark Shadows novels began five volumes before Barnabas arrived.  Once I began collecting them, I couldn’t stop until, many years later, I’d completed the set.  I read each one, starting with Dark Shadows and Victoria Winters.  Now Amazon is telling me the show began with Barnabas Collins.  Don’t get me wrong; this means that I have ten months of daily programming that I can skip, but I am a fan of completeness.

You can buy the entire collection on DVD but it’s about $400.  I can’t commit the number of years it might take to get through all of it.  I’m still only on season four of The Twilight Zone DVD collection that I bought over a decade (closer to two decades) ago.  I really have very little free time.  Outside of work, my writing claims the lion’s share of it.  Even with ten months shaved off, I’m not sure where I’ll find the time to watch what remains of the series.  The question will always be hanging in my mind, though.  Did they cut anything else out?  Digital manipulation allows for playing all kinds of shenanigans with the past.  Ebooks can be altered without warning.  Scenes can silently be dropped from movies.  You can be told that you’ve watched the complete series, but you will not have.  Vampires aren’t the only dangerous things in Dark Shadows.


Lost Humanity

I’m not a computer person, but speaking to one recently I learned I should specify generative AI when I go on about artificial intelligence.  So consider AI as shorthand.  Gen, I’m looking at you!  Since this comes up all the time, I occasionally look at the headlines.  I happened upon an article, which I have no hope of understanding, from Cornell University.  I could get through the abstract, however, where I read that even well-crafted AI easily becomes misaligned.  This sentence stood out to me: “It asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively.”  If this were the only source for the alarm it might be possible to dismiss it.  But it’s not.  Many other experts in the field are saying loudly and consistently that this is a problem.  Businesses, however, eager for “efficiencies,” are jumping on board.  None of them, apparently, have read Frankenstein.

The devotion to business is a religion.  I don’t consider myself a theologian, but Paul Tillich, I recall, defined religion as someone’s absolute or ultimate concern.  When earning more and more profit is the bottom line, that is worship.  The only thing at stake here is humanity itself.  We’ve already convinced ourselves that the humanities are a waste of time (although as recently as a decade ago business leaders always said they liked hiring humanities majors because they were good at critical thinking).  Now we’ll just let AI handle it.  Would AI pause in the middle of writing a blog post to sketch a tissue emerging from a tissue box, realizing the last pull left a paper sculpture of exquisite beauty, like folded cloth?  Would AI realize that if you don’t stop to sketch it now, the early morning light will change, shifting the shading away from what strikes your eye as intricately beautiful?

Artificial intelligence comprehends nothing, let alone quality.  Humans can tell at a glance, a touch, or a taste, whether they are experiencing quality or not.  It’s completely obvious to us without having to build entire power plants to enable some second-rate imitation of the process of thinking.  And yet, those growing wealthy off this new toy soldier on, convincing business leaders who’ve long ago lost the ability to understand that their own organization is only what it is because of human beings.  They’re the ones making the decisions.  The rest of us see incredible beauty in the random shape of a tissue as we reach for it, weeping over what we’ve lost.


Word Words

So, in the old days, when books were paper, printers would rough out the typesetting on trays called galleys.  Proofs pulled from these trays would be sent out for review.  Naturally enough, they were called galley proofs, or simply “galleys.”  After those came back from the author, marked up, corrections and further refinements, like footnotes, were incorporated.  Then page proofs, or second proofs, were produced and sent again.  The process took quite a bit of time and, as I’ve now been through six sets of proofs for my own books, I can attest it takes time on both ends.  Electronic submissions have made all of this easier.  You don’t have to physically typeset, much of the time, unless you merit offset printing—books in quantity.  You can often find uncorrected proofs in used bookstores, and sometimes indie bookstores will give them away.  That’s all fine and good.  The problem comes in with nomenclature.

These days proofs are sometimes still called “galleys” although they’re seldom made anymore.  If someone asks about galleys, it is quite possible they’re asking about page proofs.  It is fairly common in academic publishing for an author to see only one set of proofs—technically second proofs, but since no galleys were set, they could be called that.  Or just proofs.  Now, I have to remind myself of how this works, periodically.  It was much clearer when the old way was in force.  There were a couple reasons for doing galleys—one is that they were, comparatively, inexpensive to correct.  Another is that authors could catch mistakes before the very expensive correction at the second proof stage.  Even now, when I receive proofs I’m told that only corrections of errors should be made, not anything that will affect the flow, throwing off pagination.  This is especially important for books with an index, but it can also present problems for the table of contents.

Offset printing. Image credit: Sven Teschke, under GNU Free Documentation License, via Wikimedia Commons

The ToC, or table of contents, also leads to another bit of publisher lingo.  When something is outstanding and expected before long, many editors mark it “TK,” for “to come.”  Why the K?  “TC” is sometimes used to mean “ToC,” and the letter pair “tk” rarely occurs in English, which makes the placeholder easy to search for.  There are hundreds of thousands of words in the English language, yet we keep on bumping up against ambiguities, using our favorites over and again.  That’s a funny thing since publishers are purveyors of words.  None of my books have been printed in quantities that merit offset printing.  In fact, academic books, despite costing a Franklin, are often pulped because they’re more expensive to warehouse than they are to sell.  This is always a hard lesson for an academic to learn.  The sense behind it is TK.


Artificial Hubris

As much as I love writing, words are not the same as thoughts.  As much as I might strive to describe a vivid dream, I always fall short.  Even in my novels and short stories I’m only expressing a fraction of what’s going on in my head.  Here’s where I critique AI yet again.  Large language models (what we call “generative artificial intelligence”) aren’t thinking.  Anyone who has thought about thinking knows that.  Even this screed is only the merest fragment of a fraction of what’s going on in my brain.  The truth is, nobody can ever know the totality of what’s going on in somebody else’s mind.  And yet we persist in saying we do, illegally using people’s published words in trying to make electrons “think.”

Science has improved so much of life, but it hasn’t decreased hubris at all.  Quite the opposite, in fact.  Enamored of our successes, we believe we’ve figured it all out.  I know that the average white-tail doe has a better chance of surviving a week in the woods than I would.  I know that birds can perceive magnetic fields in ways humans can’t.  That whales sing songs we can’t translate.  I sing the song of consciousness.  It’s amazing and impossible to figure out.  We, the intelligent children of apes, have forgotten that our brains have limitations.  We think it’s cool, rather than an affront, to build electronic libraries so vast that every combination of words possible is already in them.  Me, I’m a human being.  I read, I write, I think.  And I experience.  No computer will ever know what it feels like to finally reach cold water after sweating outside all day under a hot sun.  Or the whispers in our heads, the jangling of our pulses, when we’ve just accomplished something momentous.  Machines, if they can “think” at all, can’t do it like team animal can.

I’m daily told that AI is the way of the future.  Companies exist that are trying to make all white collar employment obsolete.  And yet it still takes my laptop many minutes to wake up in the morning.  Its “knowledge” is limited by how fast I can type.  And when I type I’m using words.  But there are pictures in my brain at the same time that I can’t begin to describe adequately.  As a writer I try.  As a thinking human being, I know that I fail.  I’m willing to admit it.  Anything more than that is hubris.  It’s a word we can only partially define but we can’t help but act out.


Not Intelligent

The day AI was released—and I’m looking at you, ChatGPT—research died.  I work with high-level academics and many have jumped on the bandwagon despite the fact that AI cannot think and it’s horrible for the environment.  Let me say that first part again, AI cannot think.  I read a recent article where an author engaged AI about her work.  It is worth reading at length.  In short, AI makes stuff up.  It does not think—I say again, it cannot think—and tries to convince people that it can.  In principle, I do not even look at Google’s AI-generated answers when I search.  I’d rather go to a website created by one of my own species.  I even heard from someone recently that AI could be compared to demons.  (Not in a literal way.)  I wonder if there’s some truth to that.

Photo by Igor Omilaev on Unsplash

I would’ve thought that academics, aware of the propensity of AI to give false information, would have shunned it.  Made a stand.  Lots of people are pressured, I know, by brutal schedules and high demands on the part of their managers (ugh!).  AI is a time cutter.  It’s also a corner cutter.  What if that issue you ask it about is one about which it’s lying?  (Here again, the article I mention is instructive.)  We know that it has that tendency, rampant among politicians, of avoiding the truth.  Yet it is being trusted, more and more.  When first ousted from the academy, I found research online difficult, if not impossible.  Verifying sources was difficult, if it could be done at all.  Since nullius in verba is something to which I aspire, this was a problem.  Now publishers, even academic ones, are talking about little else but AI.

I recently watched a movie that had been altered on Amazon Prime without those who’d “bought” it being told.  A crucial scene was omitted due to someone’s scruples.  I’ve purchased books online and when the supplier goes bust, you lose what you paid for.  Electronic existence isn’t our savior.  Before GPS became necessary, I’d drive through major cities with a paper map and common sense.  Sometimes it even got me there quicker than AI seems to.  And sometimes you just want to take the scenic route.  Ever since consumerism has been pushed by the government, people have allowed their concerns about quality to erode.  Quick and cheap, thank you, then to the landfill.  I’m no longer an academic, but were I, I would not use AI.  I believe in actual research and I believe, with Mulder, that the truth is out there.


Remembering to Forget

I think I’ve discussed memory before.  I forget.  Anyway, I recently ran across Ebbinghaus’ Forgetting Curve.  Now, I’ve long known that when you reach my age (let’s just say closer to a century than to 1), your short-term memory tends to suffer.  I value my memories, so I try to refresh what’s important frequently.  In any case, Ebbinghaus’ curve isn’t, as far as I can tell, age specific.  It’s primarily an adult problem, but it also resonates with any of us who had to study hard to recall things in school.   The forgetting curve suggests that within one hour of learning new information, 42% is forgotten.  Within 24 hours, 67% is gone.  This is why teachers “drill” students.  Hopefully you’ll remember things like the multiplication table until well into retirement age because you had to repeat it until it stuck.  Where you put the car keys, however, is in that 42%.
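The curve behind those percentages is usually popularized as simple exponential decay in retention over time, with review sessions flattening the decline.  For the mathematically curious, here is a minimal Python sketch of that model; the stability constant and review boost are illustrative assumptions of mine, not Ebbinghaus’ own numbers, and real fits vary by person and material.

```python
import math

def retention(t_hours, stability=1.2):
    """Exponential-decay model of the forgetting curve: R = exp(-t/S).
    t_hours is time since learning; stability (S) reflects how well the
    material was learned.  The default value is illustrative, not empirical."""
    return math.exp(-t_hours / stability)

def refresh(stability, boost=2.0):
    """Reviewing is often modeled as multiplying stability, flattening the
    curve with each repetition -- the 'spacing effect' behind classroom
    drills.  The boost factor is likewise an illustrative assumption."""
    return stability * boost

# Without review, retention collapses; each drill slows the decay.
s = 1.2
print(f"1 hour:   ~{(1 - retention(1, s)) * 100:.0f}% forgotten")
print(f"24 hours: ~{(1 - retention(24, s)) * 100:.0f}% forgotten")
s = refresh(s)  # one review session
print(f"24 hours, after a review: ~{(1 - retention(24, s)) * 100:.0f}% forgotten")
```

Notably, no single exponential reproduces both of the commonly quoted figures at once, which may be part of why the exact percentages differ from source to source; the moral survives the modeling details, though: repeat it or lose it.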

I’m a creature of habit.  One of the reasons is that I fear forgetting where something important might be.  The other day it was my wallet.  In these remote working days you don’t need to put on fully equipped pants every day.  Pajama bottoms work fine for Zoom meetings and if you don’t have to go anywhere, why fuss with the wallet, cell phone, pocket tissues-laden pants?  You can put your phone on the desk next to a box of tissues.  The wallet gets left in its usual pocket.  One day I pulled on the pants I’d last worn and, as I headed to the car, noticed my wallet was gone.  Fighting Ebbinghaus, I tried to remember where I’d last used my wallet.  We’d gone to a restaurant the previous weekend, and that seemed the most likely culprit.  It could’ve fallen out in the car, or maybe down a crack in an overstuffed chair.  I couldn’t find it anywhere, swearing to myself I was going to buy one of those wallet chains if I ever found it again.

(I did eventually find it, in the bathroom.  Apparently this has happened to others as well.)  In this instance, my memory was not to blame.  It had been right in the pocket where I last remembered putting it.  But other things do slip.  Think about the most recent book you read.  How much of it do you remember?  That’s the part that scares me.  I spend lots of time reading, and more than half is gone a day after it’s read.  Unless it’s reinforced.  The solution, I guess, is to read even more.  Maybe about Ebbinghaus’ Forgetting Curve.


Letting Go

I should’ve known from the title that this would be a sad story.  Kazuo Ishiguro won the Nobel Prize in Literature even though his Never Let Me Go is speculative.  It’s only mildly so, but enough that it is sometimes classed as science fiction.  It’s appropriate that a twentieth anniversary edition was released because it is an extended consideration of the price of technology as well as dehumanization.  I’ll need to put in some spoilers, so here’s the usual caveat.  I read this novel because it’s often cited as an example of dark academia and it certainly fits that aesthetic.  It starts out at a private school called Hailsham, in England.  The students are given some privileges but their lives aren’t exactly posh.  Most of their possessions are purchased on days when a truck brings things they can buy with money they earn by creating art.  They aren’t allowed to leave the school.  Spoilers follow.

The special circumstances of the children are because they’re clones being grown for replacement organs.  The public doesn’t want to know about them or interact with them.  In fact, most people believe they don’t have souls, or aren’t really human.  They’ve been created to be used and exploited until they die, always prematurely.  While this may sound grim, the story is thoughtfully told through the eyes of one of these children, Kathy.  She becomes best friends with Ruth and Tommy, who later become a couple.  Ruth is a difficult personality, but likable.  As they grow they’re slowly given the facts about what their life will be.  They’re raised to comply, never to rebel or question their role.  Most simply accept it.  Kathy, Ruth, and Tommy, in a submissive way, try to get a deferral regarding their “donations.”

I suppose it’s presumptuous to say of a Nobel laureate’s novel that it’s well written, but I’ll say it anyway.  Ishiguro manages to capture the exploratory friendships of youth and reveals what you need to know in slow doses, all told with a compelling, if sad and accepting voice.  Although the genre could be sci-fi, it’s set in the present, or, more accurately, about twenty years ago.  The technology, apart from the cloning, is about what it was at the turn of the century, or maybe a decade or two before that.  With what we see happening in the world right now, people should be reading books like this that help them understand that people are people, not things to be exploited.  And that Nobel Prizes should be reserved for those whose contributions to humanity actually deserve them.


More Writing

I keep a list.  It includes everything that I’ve published.  It’s not on my CV since I keep my fiction pretty close to my vest.  The other day I stumbled across another electronic list I’d made some time ago of the unpublished books I’d written.  Most were fiction but at least two were non, and so I decided that I should probably print out copies of those I still had.  As I’ve probably written elsewhere, I started my first novel as a teenager.  I never finished it, but I still remember it pretty well.  Then I started another, also unfinished.  After my wife and I got engaged and before we moved to Scotland, I’d moved to Ann Arbor to be in her city.  Ann Arbor, like most university towns, has many overqualified people looking for work and I ended up doing secretarial support for companies that really had nothing for me to do quite a bit of the time.  I wrote my first full novel during dull times on the job.

My writing was pretty focused in Edinburgh.  My first published book was, naturally, my dissertation.  I started writing fiction again when I was hired by Nashotah House, but that was tempered by academic articles and my second book.  An academic life, it seems, doesn’t leave a ton of time for writing.  What really surprised me about my list was what happened after Nashotah.  In the years since then I’ve completed ten unpublished books.  Since my ouster from academia I’ve published five.  I honestly don’t know how many short stories I’ve finished, but I have published thirty-three.  What really worries me is that some of these only exist in tenuous electronic form.  I guess I trust the internet enough to preserve these blog posts; with over 5,700 of them I’d be running out of space.

I see a trip to buy some paper in my future.  For my peace of mind I need to make sure all of this is printed out.  My organizational scheme (which is perhaps not unusual for those with my condition) is: I know which pile I put it in.  Organizing it for others, assuming anybody else is interested, might not be a bad idea.  I know that if I make my way to the attic and begin looking through my personal slush pile of manuscripts I’ll find even more that I’ve forgotten.  That’s why I started keeping a list.  Someday I’ll have time to finish it, I hope.


Just Trust Me

When I google something I try to ignore the AI suggestions.  I was reminded why the other day.  I was searching for a scholar at an eastern European university.  I couldn’t find him at first since he shares the name of a locally famous musician.  I added the university to the search and AI merged the two.  It claimed that the scholar I was seeking was also a famous musician.  This despite the difference in their ages and the fact that they looked nothing alike.  AI decided that since the musician had studied music at that university he must also have been a professor of religion there.  A human being might also be tempted to make such a leap, but would likely want to get some confirmation first.  AI has only text and pirated books to learn by.  No wonder he’s confused.

I was talking to a scholar (not a musician) the other day.  He said to me, “Google has gotten much worse since they added AI.”  I agree.  Since the tech giants control all our devices, however, we can’t stop it.  Every time a system upgrade takes place, more and more AI is put into it.  There is no opt-out clause.  No wonder Meta believes it owns all world literature.  Those who don’t believe in souls see nothing but gain in letting algorithms make all the decisions for them.  As long as they have suckers (writers) willing to produce what they see as training material for their Large Language Models.  And yet, AI can’t admit that he’s wrong.  No, a musician and a religion professor are not the same person.  People often share names.  There are far more prominent “Steve Wigginses” than me.  Am I a combination of all of us?

Technology is unavoidable, but the unanswered question is whether it is good.  Governments can regulate, but with hopelessly corrupt governments, well, say hi to AI.  He will give you wrong information and pretend that it’s correct.  He’ll promise to make your life better, until he decides differently.  And he’ll decide not on the basis of reason, because human beings haven’t figured that out yet (try taking a class in advanced logic and see if I’m wrong).  Tech giants with more money than brains are making decisions that affect all of us.  It’s like driving down a highway when heavy rain makes seeing anything clearly impossible.  I’d never heard of this musician before.  I like to think he might be Romani.  And that he’s a fiddler.  And we all know what happens when emperors start to see their cities burning.

AI thinks this is food

NaNoWriMo Night

NaNoWriMo, National Novel Writing Month—November—has been run by an organization that is now shutting down.  Financial troubles and, of course, AI (which seems to be involved in many poor choices these days), have led to the decision, according to Publishers Weekly.  Apparently several new authors had been discovered by publishers on the basis of their NaNoWriMo projects.  I participated one year and had no trouble finishing something, but it was not really publishable.  Still, it’s sad to see this inspiration for other writers calling it quits.  I’m not into politics but when the NaNoWriMo executives didn’t take a solid stand against AI “written” novels, purists were rightfully offended.  Writing is the expression of the human experience.  0s and 1s are not humans, no matter how much tech moguls may think they are.  Materialism has spawned some wicked children.

Can AI wordsmith?  Certainly.  Can it think?  No.  And what we need in this world is more thinking, not less.  Is there maybe a hidden reason tech giants have cozied up to the current White House where thinking is undervalued?  Sorry, politics.  We have known for many generations that human brains serve a biological purpose.  We keep claiming animals (most of which have brains) can’t think, but we suppose electrical surges across transistors can?  I watch the birds outside my window, competing, chittering, chasing each other off.  They’re conscious and they can learn.  They have the biological basis to do so.  Being enfleshed entitles them.  Too bad they can’t write it down.

Now I’m the first to admit that consciousness may well exist outside biology.  To tap into it, however, requires the consciousness “plug-in”—aka, a brain.  Would AI “read” novels for the pleasure of it?  Would it understand falling in love, or the fear of a monster prowling the night?  Or the thrill of solving a mystery?  These emotional aspects, which neurologists note are a crucial part of thinking, can’t be replicated without life.  Actually living.  Believe me, I mourn when machines I care for die.  I seriously doubt the feeling is reciprocated.  Materialism has been the reigning paradigm for quite a few decades now, while consciousness remains a quandary.  I’ve read novels that struggle with deep issues of being human.  I fear that we could be fooled with an AI novel where the “writer” is merely borrowing how humans communicate to pretend how it feels.  And I feel a little sad, knowing that NaNoWriMo is hanging up the “closed” sign.  But humans, being what they are, will still likely try to complete novels in the month of November.


Finding Fossils

Mary Anning was a real woman.  She made valuable contributions to paleontology in the first half of the nineteenth century, although she wasn’t always credited for her work.  The movie Ammonite is a fictionalized account of her life at Lyme Regis, where she lived and discovered fossils of ichthyosaurs and other marine reptiles.  Being fiction, the movie focuses on how Mary “came out of her shell” by entering into a relationship with Charlotte Murchison (also an historical person, wife of the Scottish geologist Sir Roderick Impey Murchison) who was left in her care when she came down with a fever after trying to recover from melancholy by taking the sea air.  Mary had established a life of independence and wasn’t really seeking relationships; her mother still lived with her and, according to the movie, they had a distant but loving regard for each other.

I was anxious to see the film because it is sometimes classified as dark academia.  Since I’m trying to sharpen my sense of what that might mean, it’s helpful to watch what others think fits.  The academia part here comes from the intellectual pursuits of Anning and the academic nature of museum life (one of her fossils was displayed at the British Museum).  Anning, who had no formal academic training, tried to make a living in a “man’s world,” and in real life she did contribute significantly to paleontology.  The dark part seems to come in from her exclusion from the scientific community, and perhaps in her love for Charlotte, a forbidden relationship in that benighted time.  Of course, this relationship is entirely speculative.

Fictional movies made about factual people make me curious about the lives of those deemed movie-worthy.  Ammonite is a gentle movie and one which raises the question of why women were excluded from science for so long.  No records exist that address her sexuality—not surprisingly, since she lived during a period when such things weren’t discussed.  Indeed, she didn’t receive the acclaim that she might have, had she lived in the period of Jurassic Park.  She was noticed by Charles Dickens, who included a piece on her in his magazine All the Year Round, in 1865, nearly two decades after her death.  These days she is acknowledged and commemorated.  This movie is one such commemoration, although much of it likely never happened.  As with art house movies such as this, nonfiction isn’t to be assumed.  Nevertheless, it might still be dark academia.


Making More Monsters

It’s endlessly frustrating, being a big picture thinker.  This runs in families, so there may be something genetic about it.  Those who say, “Let’s step back a minute and think about this” are considered drags on progress (from both left and right), but would, perhaps, help avoid disaster.  In my working life of nearly half-a-century I’ve never had an employer who appreciated this.  That’s because small-picture thinkers often control the wealth and therefore have greater influence.  They can do what they want, consequences be damned.  These thoughts came to me reading Martin Tropp’s Mary Shelley’s Monster: The Story of Frankenstein.  I picked this up at a book sale once upon a time and reading it, have discovered that he was doing what I’m trying with “The Legend of Sleepy Hollow” in my most recent book.  Tropp traces some of the history and characters, but then the afterlives of Frankenstein’s monster.  (He had a publisher with more influence, so his book will be more widely known.)

This book, although dated, has a great deal of insight into the story of Frankenstein and his creature.  But also, insight into Mary Shelley.  Her tale has an organic connection to its creator as well.  Tropp quite frequently points out the danger of those who have more confidence than real intelligence, and how they forge ahead even when they know failure can have catastrophic consequences for all.  I couldn’t help but think how the current development of AI is the telling of a story we’ve all heard before.  And how those who insist on running for office to stoke their egos also play into this same sad tale.  Perhaps a bit too Freudian for some, Tropp nevertheless anticipates much of what I’ve read in other books about Frankenstein, written in more recent days.

Some scientists are now at last admitting that there are limits to human knowledge.  (That should’ve been obvious.)  Meanwhile those with the smaller picture in mind forge ahead with AI, not really caring about the very real dangers it poses to a world happily wedded to its screens.  As they cozy up to politicians who think only of themselves, we need a big picture thinker like Mary Shelley to guide us.  I can’t help but think big picture thinking has something to do with neurodivergence.  Those who think this way recognize, often from childhood, that other people don’t think like they do.  And that, lest they end up like Frankenstein’s monster, hounded to death by angry mobs, it’s better simply to address the smaller picture.  Or at least pretend to.


Telling Vision

My wife and I don’t watch much television.  In fact, we had very poor television reception from about 1988 (when we married) until we moved into this house in 2018.  That’s three decades without really watching the tube.  As we’ve been streaming/DVDing some of the series that have made a splash in those three decades, I’ve discovered (I can’t speak for her) that there were some great strides made in quality.  We began with The X-Files, then moved on to Lost.  We viewed Twin Peaks and started to watch Picket Fences, but the digital rights have expired so we never did finish out that series.  (Don’t get me started on digital rights management—the air will quickly turn blue, I assure you.)  Of course, we did manage to see Northern Exposure when it aired, but it deserves mention for the sake of completeness.  These were all exceptional programs.

Netflix (in particular) upped the game.  We watched the first three seasons of Stranger Things (and I’d still like to go back and pick up the more recent ones we’ve missed), and I watched the first episode of The Fall of the House of Usher (I didn’t realize until writing this up that it was only one season, so maybe I should go back and finish that one out too) and was very impressed.  Since we couldn’t finish Picket Fences, we turned to Wednesday.  Now, I was only ever a middling fan of The Addams Family.  I watched it as a kid because it had monsters, as I did The Munsters, but neither one really appealed to me.  Wednesday’s cut from a different cloth.  In these days when escapism is necessary, this can be a good thing.

Photo credit: Smithsonian Institution

Like most late boomers, I grew up watching television.  In my early memories, it’s pretty much ubiquitous.  We were poor, and our sets were black-and-white, but remembering my childhood without TV is impossible.  It was simply there.  The shows I watched formed me.  Now that I’m perhaps beyond excessive reforming (although I’m not opposed to the idea), I’m looking for brief snippets of something intelligent to wind up the day before I reboot to start this all over again.  We save movies for weekends, but an entire workweek without a break from nonfictional reality seems overwhelming on a Sunday evening.  It seems that I may be warming up to my childhood chum again.  This time, without network schedules, and with limited time to spend doing it, we just may be in a golden age for the tube.


Protected?

I like Macs.  Really, I do.  Ever since I realized that “Windows” was a cut-rate way to imitate Macintosh’s integrated operating system, I’ve never been able to look back.  (I don’t have a tech background so I may be wrong in the details.)  Every time I use a work laptop—inevitably PCs—I realize just how unintuitive they are.  One thing about Apple engineers is that they understand the way ordinary people think.  I sometimes use software, not designed for a Mac, where I swear the engineers have no basic comprehension of English words at all.  And nobody ever bothers to correct them.  In any case, I find Macs intuitive and I’ve been using them for going on 40 years now.  But the intuitive element isn’t as strong as it used to be.  As we’re all expected to become more tech savvy, some of the ease of use has eroded.

For example, when I have to create a password for a website—not quite daily, but a frequent activity—Mac helpfully offers to create a strong password that I will never have to remember.  Now before you point out to me that software exists that will keep all your passwords together, please be advised that I know about such things.  The initial data entry to get set up requires more time off than I typically get in a year, so that’ll need to wait for retirement.  But I was talking about intuitive programming.  Often, when I think I won’t be visiting a website often, I’ll opt for the strong password.  Maybe I’ve got something pressing that I’m trying to accomplish and I can’t think of my three-thousandth unique password.  I let Mac drive.  That’s fine and good until there’s an OS update.  This too happens not quite daily, but it does sometimes occur more than once a week.

After restarting, I go back to a website and the autofill blinks at me innocently, as if it doesn’t recognize my username.  It doesn’t remember the strong password, and I certainly don’t.  So I need to come up with yet another new one.  At work I’m told I should change all my passwords every few months.  To me that seems like a full-time job.  For grey matter as time-honored as mine, it’s not an easy task.  I’m not about to ditch Macs because of this, but why offer me a strong password that only lasts until the next system update?  Truth be told, I’m a little afraid to post this because if by some miraculous chance a software engineer reads it and decides to act, a new system update will be required again tonight.