The Lord

“This article may incorporate text from a large language model. It may include hallucinated information, copyright violations, claims not verified in cited sources, original research, or fictitious references. Any such material should be removed, and content with an unencyclopedic tone should be rewritten.”  So it begins.  This quote is from Wikipedia.  I was never one of those academics who uselessly forbade students from consulting Wikipedia.  I always encourage those who do to follow up and check the sources.  I often use it myself as a starting place.  I remember having it drilled into me as a high school and college student that general encyclopedias were not academic sources, even if the articles had academic authors.  Specialized reference works were okay, but general sources of knowledge should not be cited.

The main point of this brief disquisition, however, is our familiar nemesis, AI.  Artificial Intelligence is not intelligence in the sense of the knowing application of knowledge.  In fact, Wikipedia’s warning uses the proper designation of “large language model.”  Generative AI is prone to lying—it could be a politician—but mostly when it doesn’t “know” an answer.  It really doesn’t know anything at all.  And it will only increase its insidious influence.  I am saddened by those academics who’ve jumped on the bandwagon.  I’m definitely an old school believer.  So much so that one of my recurring fantasies is to sell it all, except for the books, buy a farm off the grid and raise my own food.  Live like those of us in this agricultural spiral must.

A true old schooler would insist on going back to the hunter-gatherer phase, something I would be glad to do were there a vegan option.  Unfortunately tofubeasts, which are actually plant-based lifeforms, don’t wander the forests.  So I find myself buying into the comforts of a life that’s, honestly, mostly online these days.  I work online.  I spend leisure time online (although not as much as many might guess that I do).  And I’m now faced with being force-fed what some technocrat thinks is pretty cool.  Or, more honestly, what’s going to make him (and I suspect these are mostly guys) buckets full of money.  Consider the cell phone that many people can no longer be without.  I sometimes forget mine at home.  And guess what?  I’ve not suffered for having done so.  The tech lords have had their say; I’m more interested in what people have to say.  And if AI is going to interfere with the first steps of learning for many people, it won’t be satisfied until we’re all its slaves.


Creepy AI Doll

We’ve all seen the killing doll horror movie before, of course.  Who hasn’t?  What makes M3GAN different is the whole artificial intelligence angle.  Okay, so you understand it’s about a killing doll, but unlike Chucky or Annabelle, M3GAN has a titanium frame and a super-advanced, wifi-connected brain.  Like generative AI, she’s able to learn on her own and even to use her own reasoning to get around her basic programming.  Now, you’re likely smarter than I am, because I didn’t catch what the critics call the “campiness” of the film.  Yes, there are places that made me snicker a little, but although the killing doll premise made the results somewhat predictable, I watched it seriously.  Some websites list it as horror comedy, while others prefer sci-fi thriller.  Nevertheless, it isn’t really that funny.  And there’s a cautionary element to it.

Funki, a Seattle-based toy company, is always trying to stay ahead of the competition.  Animatronic toys are the rage, and Gemma (brilliant choice to have a female mad scientist here) is a visionary programmer.  She wasn’t expecting, however, to become her niece’s guardian after Gemma’s sister was killed in an accident.  The M3GAN prototype was already underway, but Gemma kicks it into high gear to help make up for her own lack of parenting skills.  M3GAN becomes her niece’s companion—soulmate, even—and since the two are bonded with biometrics, her protector.  Bullies, lend me your ear; you don’t want to mess with a girl who has an android as a bestie.  And nosey neighbors, fix that hole in your fence.  Or at least curb your dog.

Instead of I, Robot this is more like You, Robot.  There is a wisdom to the othering that goes on here, because none of us knows what kind of reasoning generative AI might engage in.  In real life computers have been discovered communicating with one another in a language that their programmers couldn’t read.  We’re all biological, however, and thinking, as we know it, involves many biological factors.  Logic is part of it, but it’s not the whole story.  So techies who idolize Spock and his lack of emotion feel that they can emulate thinking by making it a set of algorithms.  My algorithms lead me to watch horror films out of a combination of curiosity and a need for therapy.  Where does a computer go for therapy?  The internet?  Well, you might find some good advice there, but don’t be surprised if it comes at you with a paper-cutter sword in the end.  You’ve been warned.


Next Gen AI, Truly

Okay, so it was a scary meeting.  It was about AI—artificial intelligence.  Specifically, generative AI.  That’s the kind that makes up answers to questions put to it, or does tasks it’s assigned.  The scary part, to me, is that we are being forced to deal with it because tech companies have unleashed it upon the world without thinking through the consequences.  Such hubris gets us into trouble again and again, but it never stops us.  We’re sapiens!  You see, GAI (generative AI) is under no obligation to tell the truth.  It likely can’t even understand the concept, which is a human concept based on perceptions of reality.  GAI simply provides answers based on the dataset it’s been fed.  It can generate text and photos (which are so doctored these days anyway that we need a photo-hospital), which means it can, to borrow the words of a sage, “make a lie sound just like truth.”  We already have politicians enough to do that, thank you.

My real fear is that the concept of truth itself is eroding.  With Trump’s “truth is whatever I say it is” administration, and its ongoing aftermath, many Americans have lost any grip on the idea.  Facts are no longer recognized as facts.  “Well I asked ChatGPT and it told me…”  It told you whatever its dataset told it and that dataset contains errors.  The other scary aspect here is that many people have difficulty distinguishing AI from human responses.  My humble advice is to spend more time with honest human beings.  Social media isn’t always the best way to acquaint yourself with truth.  And yet we’re forced to deal with it because we need to keep evolving.  Those Galapagos finches won’t even know what hit ‘em.

Grandma was born before heavier-than-air flight.  Before she died we’d walked on the moon.  About two decades ago cell phones were around, but weren’t ubiquitous.  Now any company that wants its products found has to optimize for mobile.  And mobile is just perfect for AI that fits in the palm of your hand.  But where has truth gone?  You never really could grasp it in your hands anyway, but we as a collective largely agreed that if you committed crimes you should be punished, not re-elected.  And that maybe, before releasing something with extinction-level potential, you should at least stop and think about the consequences.  I guess that’s why it was a scary meeting.  The consequences.  All technological advances have consequences, but when it takes a lifetime to get to the moon, at least you’ve had some time to think about what might happen.  And that’s the truth.