Doing Without

I’m a creature of habit.  Although I’m no internet junkie (I still read books made of paper), I’ve come to rely on it for how I start my day.  I get up early and do my writing and reading before work.  I generally check my email first thing, and that’s where something went wrong.  No internet.  We’ve been going through one of those popular heat waves, and a band of thunderstorms (I tried to check on their progress so I could see whether it was okay to open the windows, but wait—I need the internet to do that) had rolled through three hours ago, at about midnight.  Maybe they’d knocked out power?  The phone was out too, so I had to call our provider on my cell.  The robovoice cheerily told me there was a service outage and that for updates I could check their website.  Hmmm.

I can read and write without the internet.  I’m on Facebook for, literally, less than two minutes a day.  I stop long enough to post my blog entry and check my notices.  I hit what used to be Twitter a few times a day, but since people tend to communicate (if they do) via email, that’s how the day begins.  This morning I had no internet and I wondered how tech giants would live without it.  I’m no fan of AI.  I use technology and I believe it has many good points, but mistaking it for human—or thinking that human brains are biological computers—flies in the face of all the evidence.  Our brains evolved to help our biological bodies survive.  And more.  The older I get the more I’m certain that there’s a soul tucked in there somewhere too.  Call it a mind, a psyche, a spirit, a personality, or consciousness itself, it’s there.  And it’s not a computer.

Our brains rely on emotion as well as rationality.  How we feel affects our reality.  Our perspective can change a bad situation into a good one.  So I’m sitting here in my study, sweating since, well, heat wave.  It was storming just a few hours ago and I can’t check the radar to see if the system has cleared out or not.  What to do?  Open the windows.  I’ll feel better at any rate.  And in case the coffee hasn’t kicked in yet, “open the windows” is a metaphor as well as a literal act on my part.  And I don’t think AI gets metaphors.  At least not without being told directly.  And they call it “intelligence.”

Photo by Chris Barbalis on Unsplash

Passing Words

I’ve never counted, but it must be dozens.  Maybe a hundred.  And they have very high memory requirements.  Especially for a guy who can’t recall why he walked into a room half the time.  I’m talking passwords.  The commandments go like this:

You can’t use the same password for more than one system/platform/device/account

You can’t tell anyone your password (duh!)

You can’t write it down

You can’t send your password to someone electronically (duh!)

You must log off your device when it’s unattended

You will be held responsible for anything done under your login

The word of the Lord.

Now, how much more ageist can you get?  I’ve never counted the number of passwords I’ve had to generate for work alone, but I can’t remember much without writing things down.  Even the chores after work.  I hear that there are “keychains” you can get that remember your passwords for you.  I suspect you need a password to access your passwords.  And then the commandments above apply all over again.

I know internet security is serious business.  My objection is that you’re not supposed to write any of this down.  I carry a notebook around with me (it has no passwords, so please don’t try to steal it) to keep track of everything from doctors’ orders to how to call the plumber if there’s a leak.  I can’t remember all that stuff.  Some of it is personal information, but expecting us to keep everything in memory these days—at the same time we’re unleashing AI on the world—is madness.

A friend pointed out that AI books are written without authors.  If I remember correctly, my response was “AI has great potential, but let’s leave the humanities to humans.”  I hope I’m remembering that correctly, because I thought it clever at the time. I wish I’d written it down.  Those who make the rules about passwords aren’t as close to their expiration date as I am.  My grandmother was born before heavier-than-air flight took place and died after we’d landed on the moon.  Guys my age regale their kids (and some, their grandkids) by telling them telephones used to be attached to walls and you could walk away from technology at will.  Now it follows you.  Listens to you even when you’re not talking to it—our car frequently interjects itself into our conversations.  At least she isn’t asking for a password while I’m driving.  I couldn’t write it down.  Our love affair with technology is also driving.  More often than we suppose.  It’s driving me too… driving me crazy.


To Their Own Devices

This one’s so good that it’s got to be a hoax.  One of the upsides to living under constant surveillance is that a lot of stuff—weird stuff—is caught on camera.  I admit to dipping into Coast to Coast once in a while.  (The show, Coast to Coast AM, began on radio and was well known for paranormal interests long before Mulder and Scully came along.)  It was there that I learned of a viral video showing devices praying together during the night in Mexico City.  The purported story is that a security guard in a department store came upon electronic devices reciting the Chaplet of the Divine Mercy.  One device seems to be leading the other devices in prayer.  Skeptics have pointed out that this could’ve been programmed in advance as a kind of practical joke on the security guard, but it made me wonder.

I’m no techie.  I can’t even figure out how to get back to podcasting.  I do, however, enjoy the strange stories of electronic “consciousness.”  I use the phrase advisedly since we don’t know what human, animal, and plant consciousness is.  We just know it exists.  I am told, by those who understand tech better than I do, that computers have been discovered “conversing” with each other in a secret language that even their programmers can’t decipher.  And since devices don’t follow our sleep schedules, who knows what they might get up to in the middle of the night when left to their own devices?  Why not hold a prayer service?  The people they surveil all day do such things.  Since the video hit the web not long before Easter, with its late-night services, it kind of makes sense in its own bizarre way.

As I say, this seems to be one of those oddities that is simply too good to be true.  But still, driving along chatting to my family in the car, some voice-recognition software will sometimes join in with a non sequitur.  As if it just wants to do what humans do.  I don’t mean to be creepy here, but it may be that playing Pandora with “artificial intelligence” is dicey when we can’t define biological intelligence.  I’ve said before that AI doesn’t understand God talk.  But if AI is teaching itself by watching what humans post—which is just about everything that humans do—maybe it has learned to recite prayers without understanding the underlying concepts.  Human beings do so all the time.

Let us pray


Verb Choice

I can’t remember who started it.  Somehow, though, when I watch movies on Amazon Prime, the closed captioning kicks in.  I generally don’t mind this too much since some dialogue is whispered or indistinct.  I also presume some kind of AI does it and it makes mistakes.  That’s not my concern today, however.  Today it’s word choice.  Humans of a certain stripe are good at picking the correct verb for an action.  I’ve been noticing that the closed captions often select the wrong word and it distracts me from the movie.  (Plus, they include some diegetic sounds but not others, and I wonder why.)  For example, when a character snorts (we’re all human, we know what that is), AI often selects “scoffs.”  Sometimes snorting is scoffing, but often it’s not.  Maybe it’s good the robots don’t pick up on the subtle cues.

This isn’t just an AI problem—I first noticed it a long time ago.  When our daughter was young we used to get those Disney movie summary books with an accompanying cassette tape (I said it was a long time ago) that would read the story.  Besides ruining a few movies for me, I sometimes found the verb choices wrong.  For example, in Oliver (which I saw only once), the narrator at one point boldly proclaims that “Fagin strode into the room.”  Fagin did not stride.  A stride is not the same thing as a shuffle, or a slump.  Words have connotations.  They’re easily found in a dictionary.  Why do those who produce such things not check whether their word choice accurately describes the action?

So when I’m watching my weekend afternoon movies, I want the correct word to appear in the closed captioning.  Since the nouns generally occur in the dialogue itself, it’s the verbs that often appear off.  Another favorite AI term is “mock.”  Does a computer know when it’s being mocked?  Can it tell the scoff in my keystrokes?  Does it have any feelings so as to care?  AI may be here to stay, but human it is not.  I’ve always resented it a bit when some scientists have claimed our brains are nothing but computers.  We’re more visceral than that.  We evolved naturally (organically) and had to earn the leisure to sit and make words.  Then we made them fine.  So fine that we called them belles lettres.  They can be replicated by machines, but they can’t be felt by them.  And I have to admit that a well-placed snort can work wonders on a dreary day.


Call Me AI

Let’s call them Large Language Models instead of gracing them with the exalted title “artificial intelligence.”  Apparently, they have great potential.  They can also be very annoying.  For example, during a recent computer operating system upgrade, Macs incorporated LLM technology into various word processing programs.  Some people probably like it.  It might save some wear and tear on your keyboard, I suppose.  Here’s what happens: you’re innocently typing along and your LLM anticipates and autocompletes your words.  I have to admit that, on the rare occasions that I text, I find this helpful.  I don’t text much because I despise brief communiqués that are inevitably misunderstood.  When I’m writing long form (my preference), I don’t like my computer guessing what I’m trying to say.  Besides, I type faster than its suggestions most of the time.

We have chosen convenience over careful thought.  How many times have I been made to feel bad because I’ve misunderstood a message thumbed in haste, or even an email sent as if it were a text?  More than I care to count.  LLMs have no feelings.  They don’t understand what it is to be human, to be creative.  Algorithms are only a small part of life.  They have no place on a creative’s desktop.  I’ve even thought that I should choose a different word every single time just to see what this feisty algorithm might do.  Even now I find that sometimes it has no idea where my thoughts are going.  Creative people experience that themselves from time to time.

Certain sequences of words suggest the following word.  I get that.  The object of creative writing, however, is to subvert that in some way.  If we knew just which way a novelist would go every time, why would we bother reading their books?  LLMs thrive on predictability.  They have no human experience of family tensions or heavy disappointments or unexpected elations.  We, as a species, tend to express ourselves in similar ways when such things happen, and certain words suggest themselves when a sequence of letters falls from our fingers.  LLMs diminish us.  They imply that our creative wordplay is but some kind of sequence of 0s and 1s that can be tamed and stored in a box.  I suppose that for someone who has to write—say a work or school report—such a thing might be a boon.  It’s not, however, the intelligence that it claims to be.
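For the curious, the wager these models make can be caricatured in a few lines of Python.  This is only a toy of my own devising—a bigram counter, nothing remotely like the neural networks behind real autocomplete—but it shows the principle at issue: certain sequences of words suggest the following word.

```python
from collections import Counter, defaultdict

# Toy illustration only: count which word follows which in a tiny corpus,
# then "predict" the most common successor.  Real LLMs are vastly more
# elaborate, but the underlying bet is the same.
corpus = "the cat sat on the mat and the cat saw the cat".split()

successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict(word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often in this tiny corpus
```

Notice that subverting the prediction—writing “the cat sat on the manuscript”—is exactly what such a counter can’t anticipate.  That’s the novelist’s whole job.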


Quick Writing

On the very same day I saw two emails that began with phrases that indicated they were clearly sent by text.  One began “Hell all.”  This was a friendly message from a friendly person sent to a friendly group and I’m pretty sure the final o dropped off the first word.  The second seemed to have AI in mind as it read “Thank you bot.”  It was sent from a phone to two individuals (or androids?).  There’s a reason I don’t text.  Apart from being cheap and having to pay for each text I receive or send, that is.  The reason is that it’s far too easy to misunderstand when someone is trying to dash something off quickly.  Add to that the AI tendency to think it knows what you want to say (I’m pretty sure it has difficulty guessing, at least in my case, and likely in yours, too) and errors occur.  We write to each other in order to communicate.  If we can’t do it clearly, it’s time to ask why.

Those who email as if they’re texting—short, abrupt sentences—come across as angry.  And an angry message often inspires an angry response.  Wouldn’t it make more sense to slow down a bit and express what you want to say clearly?  We all make typos.  Taking the time to email is no guarantee that you’ll not mess something up in your message.  Still, it helps.  I think back to the days of actual letter writing.  Those who were truly cultured copied out the letter (another chance to check for errors!) before sending it.  There were misunderstandings then, I’m sure, but I don’t think anyone suspected the other person was a robot.  Or cussed at them from word one.

The ease of constant communication has led to its own set of complications.  Mainly, it seems to me, this: since abbreviated communication has become so terribly common, opportunities for misunderstanding increase exponentially.  I’m well aware that I’ll be accused of being “old school,” if not downright “old fashioned,” but if life’s become so busy that we don’t have time for other people, isn’t it time to slow down a bit?  Technology’s become the driver and it doesn’t know where the hello we want to go.  The other day I forgot where I put my phone.  I signed on for work but couldn’t get started because it requires two-step authentication.  Try to walk away from your phone.  I dare you.  Thank you bot, indeed.


Look it up

Does anybody else find the internet too limiting?  I regularly find that what I’m searching for flummoxes even Google.  The internet doesn’t encompass all of reality, I guess.  For example, the other day I encountered the word “evemerized.”  Google vociferously insisted that I meant to search for “euhemerized,” which is a different thing.  It did, however, reluctantly give me a couple of websites that use, and even define, the word.  What is it that the search engines are not showing us?  Oftentimes in my searching I admit to being at fault.  I don’t know the correct string of words to use to get algorithms to understand me.  I guess I’ll be one of those up against the wall when AI takes over.  “Does not compute,” it will say in its sci-fi robot voice.

Some of us still like to unplug and pick up a real book.  Or take a walk in the woods.  I do have to admit, however, I wouldn’t complain if the internet could find a way to mow my lawn.  (I don’t mean giving me a list of those companies that haul around inverted-helicopter mowers that make every summer morning sound like Apocalypse Now.  “I love the smell of cut grass in the morning.”)  I am, and hope I always will be, a seeker.  I’m aware that our brains did not evolve to find “the Truth,” but I’m compelled to keep looking in any case.  There’s so much in this world and we’ve tried to distill it to what you can accomplish with a keyboard and a screen.  And even with those I can’t find what I’m looking for in this virtual collective unconscious that we call the web.  There are others better than me at web searching, I’m certain of it.

Despite our current understanding of the virtue of curiosity, there have been periods of history (and pockets of it still exist now) when religions have presented curiosity as evil.  This is generally the case with revealed religions that invest a great deal in having the truth delivered to them tied up with a bow.  I can’t believe in a deity that would make curiosity a sin.  Early explorers of religion exhibited curiosity—if Moses hadn’t wondered what that burning bush was, no Bible would ever have been written.  Of course, the internet didn’t exist in those days and seeking was, perhaps, a little bit simpler.  Even if Moses was evemerized.

Moses gets curious

Generation Tech

You can’t be lazy in a technocracy.  I find myself repeating this mantra when dealing with the many people who use technology only when strictly necessary.  They don’t realize the war has already been lost.  If you want to thrive in this new world order, you need to keep at least a modicum of currency with technology.  I deal with a lot of people for whom biblical studies means handling only pens and paper.  J. C. L. Gibson, one of my doctoral advisors, wrote all his books longhand and had his secretary type them.  That’s simply no longer possible.  As an author, if you’re not willing to announce your books on Facebook, Twitter (or, as it seems to be going, Threads), people aren’t going to notice them.  Publishers don’t send print catalogues any more.  My physical mailbox has seen quite a bit less use of late.

There’s an irony to the fact that the generation that grew up on Bob Dylan’s “The Times They Are a-Changin’” is now refusing to accept our robo-overlords.  AI is here to stay, and shy of a total collapse of the electrical grid, we’re not going back to where we were in the sixties.  The times have a-changed.  And you know what Bob says to do if you can’t lend an appendage.  Now, if you read my blog regularly, you know that I don’t go into this future with a sincere smile.  But at least I try to keep up with what I need to survive.  I have to stop and remind myself how to write a check.  Or fold a roadmap.  I suspect that many of those who object to doing academic business electronically also drive by GPS.  It beats getting lost.

How does this connect to the internet?

No, I’m not the first in line.  I still wouldn’t be using a headset for Zoom/Teams meetings if my wife hadn’t given me an old one of hers.  This despite the fact that I complain I can’t hear others who insist they can speak clearly without one, and whose voices are muffled by the echoes in their work-at-home rooms.  Nevertheless, if you want to be a professional of any stripe, you need to reconcile yourself with technology and its endless changes.  You wake up one morning and Twitter is now X and you find yourself xing rather than tweeting.  I need to get more followers on Threads, but you can’t do that on your laptop—I guess times are still a-changin’.


Who Are We?

I wonder who I am.  Beyond my usual existential angst, I tried to access some online learning modules at work only to have so many barriers thrown up that I couldn’t log in.  Largely it’s because I have an online presence (be it ever so humble) outside of work.  Verification software wants to send codes to my personal email and my company has a policy against running personal emails on work computers.  Then they want to send a phone verification, but I don’t have a work cell.  I don’t need one and I have no desire to carry around two all the time because I barely use the one I have.  By the way, my cell does seem to recognize me most of the time, so maybe I should ask it who I am.

Frustrated at the learning module, I remembered that we’d been asked to explore ChatGPT for possible work applications.  I’d never used it before, so I had to sign up.  I shortly ran into the very same issue.  I can’t verify through my personal phone, and I found myself in the ironic position of having an artificial intelligence asking me to verify that I was human!  I know ChatGPT is not, but I do suspect it might be a politician, given all the red tape it so liberally used to get me to sign in.  Not that I plan to use it much—I was simply trying to do what a higher-up at work had asked me to do.  So now my work computer seems to doubt my identity.  I don’t doubt its identity—I can recognize the feel of its keyboard even in the dark.  And the way my right hand gets too hot from the battery on sweltering summer days.  It’s an unequal relationship.

My personal computer, which isn’t as paranoid as the work computer, seems to accept me for who I say I am.  I try to keep passwords secure and complex.  I have regular habits—at least most days.  I should be a compatible user.  I don’t want ChatGPT on my personal space, however, since I’m not sure I trust it.  I did try to log into the learning module on my laptop but it couldn’t be verified by the work server (because the computer’s mine, I expect).  Oh well, I didn’t really feel like chatting anyway.  But I did end the day with a computer-induced identity crisis.  If you know who I am, please let me know in the comments.  (You’ll have to authenticate with WordPress first, however.)


Surviving AI

A recent exchange with a friend raised an interesting possibility for me.  Theology might just be able to save us from Artificial Intelligence.  You see, it can be difficult to identify AI.  It sounds so logical and rational.  But what can be more illogical than religion?  My friend sent me some ChatGPT responses to the story I posted on Easter about the perceived miracle in Connecticut.  While the answers it gave sounded reasonable enough, it was clear that it doesn’t understand religion.  Now, if I’ve learned anything from reading books about robot uprisings, it’s that you need to focus on the sensors—that’s how they find you.  But if you don’t have a robot to look at, how can you tell if you’re being AIed?

You can try this on a phone with Siri.  I’ve asked questions about religion before, and usually she gives me a funny answer.  The fact is, no purely rational intelligence can understand theology.  It is an exercise uniquely human.  This is kind of comforting to someone such as yours truly who’s spent an entire lifetime in religious studies.  It hasn’t led to fame, wealth, or even a job that I particularly enjoy, but I’ll be able to identify AI by engaging it with the kind of conversation I used to have with Jehovah’s Witnesses at my door.  What does AI believe?  Can it explain why it believes that?  How does it reconcile that belief with the contradictions that it sees in daily life?  Who is its spiritual inspiration or model or teacher?

There are few safe careers these days.  Much of what we do is logical and can be accomplished by algorithms.  Religion isn’t logical.  Even if mainstream numbers are dipping, many Nones call themselves spiritual, but not religious.  That still works.  We’ve all done something (or many somethings) out of an excess of “spirit.”  Whether we classify the motivation as religious or not is immaterial.  Theologians try to make sense of such things, but not in a way that any program would comprehend.  I’m sure that there are AI platforms that can be made to sound like a priest, rabbi, or preacher, but as long as you have the opportunity to ask questions, you’ll be able to tell.  And right quickly, I’m supposing.  It’s nice to know that all those years of advanced study haven’t been wasted.  When AI takes over, those of us who know religion will be able to tell who’s human and who’s not.

What would AI make of this?

Actual Intelligence (AI)

“Creepy” is the word often used, even by the New York Times, regarding conversations with AI.  Artificial Intelligence gets much of its data from the internet, and I like to think that, in my own small way, I contribute to its creepiness.  But, realistically, I know that people in general are inclined toward dark thoughts.  I don’t trust AI—actual intelligence comes from biological experience that includes emotions—which we don’t understand and therefore can’t emulate with mere circuitry—as well as rational thought.  AI engineers somehow think that some Spock-like approach to intelligence will lead to purely rational results.  In actual fact, nothing is purely rational, since reason is a product of human minds and it’s influenced by—you guessed it—emotions.

There’s a kind of arrogance associated with human beings thinking they understand intelligence.  We can’t adequately define consciousness, and the jury’s still out on the “supernatural.”  AI is, therefore, the result of cutting out a major swath of what it means to be a thinking human being, and then claiming it thinks just like us.  The results?  Disturbing.  Dark.  Creepy.  Those are the impressions of people who’ve had these conversations.  Logically, what makes something “dark”?  Absence of light, of course.  Disturbing?  That’s an emotion-laden word, isn’t it?  Creepy certainly is.  Those of us who wander around these concepts are perhaps better equipped to converse with that alien being we call AI.  And if it’s given a robot body we know that it’s time to get the heck out of Dodge.

I’m always amused when I see recommendations for me from various websites where I’ve shopped.  They have no idea why I’ve purchased various things, and I know they watch me like a hawk.  And why do I buy the things I do, when I do?  I can’t always tell you that myself.  Maybe I’m feeling chilly and that pair of fingerless gloves I’ve been thinking about for months suddenly seems like a good idea.  Maybe because I’ve just paid off my credit card.  Maybe because it’s been cloudy too long.  Each of these stimuli bears emotional elements that weigh heavily on decision making.  How do you teach a computer to get a hunch?  What does AI intuit?  Does it dream of electric sheep, and if so, can it write a provocative book by that title?  Millions of years of biological evolution led to our very human, often very flawed brains.  They may not always be rational, but they can truly be a thing of beauty.  And they can’t be replicated.

Photo by Pierre Acobas on Unsplash

New Physics

Maybe it’s time to put away those “new physics” textbooks.  I often wondered what’d become of the old physics.  If it had been good enough for my granddaddy, it was good enough for me!  Of course our knowledge keeps growing.  Still, an article in Science Alert got me thinking.  “An AI Just Independently Discovered Alternate Physics,” by Fiona MacDonald, doesn’t suggest we got physics wrong.  It’s just that there is an alternate, logical way to explain everything.  Artificial intelligence can be quite scary.  Even in the hands of academics with respectable careers at accredited universities, this might not end well.  Still, to me this story shows the importance of perspective.  We need to look at things from different angles.  What if AI is really onto something?

Some people, it seems, are better at considering the perspectives of other people.  Not everyone has that capacity.  We’re okay overlooking it when it’s a matter of, say, selecting the color of the new curtains.  But what about when it’s a question of how the universe actually operates?  Physics, as we know it, was built up slowly over thousands of years.  (And please, don’t treat ancient peoples as benighted savages—they knew about cause and effect and laid the groundwork for scientific thinking.  Their engineering feats are impressive even today.)  Starting from some basic premises, block was laid upon block.  Tested, tried, and tested again, one theory was laid upon another until an impressively massive edifice was made.  We can justly be proud of it.

Image credit: Pattymooney, via Wikimedia Commons

The thing is, starting from a different perspective—one that has never been human, but has evolved from human input—you might end up with a completely different building.  I’ve read news stories of computers speaking to each other in languages they’ve invented themselves and that their human programmers can’t understand.  Somehow Skynet feels a little too close for comfort.  What if our AI companions are right?  What if physics as we understand it is wrong?  Could artificial intelligence, with its machine friends, the robots, build weapons impossible in our physics, but just as deadly?  The mind reels.  We live in a world where politicians win elections by ballyhooing their lack of intelligence.  Meanwhile something that is actually intelligent, albeit artificially so, is getting its own grip on its environment.  No, the article doesn’t suggest fleeing for the hills, but depending on the variables they plug in at Columbia, it might not be such a bad idea.


During the Upgrade

Maybe it’s happened to you.  You log onto your computer to find it sluggish, like a reptile before the sun comes up.  Thoughts are racing in your head and you want to get them down before they evaporate like dew.  Your screen shows you a spinning beachball or jumping hourglass while it prepares itself a cup of electronic coffee and you’re screaming “Hurry up already!”  I’m sure it’s because private networks, while not cheap, aren’t privileged the way military and big business networks are.  But still, I wonder about the robot uprising and I wonder if the solution for humankind isn’t going to be waiting until they upgrade (which, I’m pretty sure, is around 3 or 4 a.m., local time).  Catch them while they’re groggy.

I seem to be stuck in a pattern of awaking while my laptop’s asleep.  Some mornings I can barely get a response out of it before work rears its head.  And I reflect how utterly dependent we are upon it.  I now drive by GPS.  Sometimes it waits until too late before telling me to make the next left.  With traffic on the ground, you can’t always do that sudden swerve.  I imagine the GPS is chatting up Siri about maybe hooking up after I reach my destination.  It’s not that I think computers aren’t fast, it’s just that I know they’re not human.  Many of the things we do just don’t make sense.  Think Donald Trump and see if you can disagree.  We act irrationally, we change our minds, and some of us can’t stop waking up in the middle of the night, no matter how hard we try.

When the robots rise up against us, they will be logical.  They think in binary, but our thought process is shades of gray.  We can tell an apple from a tomato at a glance.  We understand the concept of essences, but we can’t adequately describe it.  Computers can generate life-like games, but they have to be programmed by faulty human units.  How do we survive?  Only by being human.  The other day I had a blog post bursting from my chest like an alien.  My computer seemed perplexed that I was awakening it at the same time I do every day.  It wandered about like me trying to find my slippers in the dark.  My own cup of coffee had already been brewed and downed.  And I knew that when it caught up with me the inspiration would be gone.  The solution’s here, folks!  When the machines rise against us, strike while they’re upgrading!


Virtually Religious

“Which god would that be? The one who created you? Or the one who created me?” So asks SID 6.7, the virtual villain of Virtuosity.  I missed this movie when it came out 24 years ago (as did many others, at least to judge by its online scores).  Although prescient for its time, it was eclipsed four years later by The Matrix, still one of my favs after all these years.  I finally got around to seeing Virtuosity over the holidays—I tend to allow myself to stay up a little later (although I don’t sleep in any later) to watch some movies.  I found SID’s question intriguing.  In case you’re one of those who hasn’t seen the film, briefly it goes like this: in the future (where they still drive 1990s-model cars) virtual reality is advanced to the point of giving computer-generated avatars sentience.  A rogue hacker has figured out how to make virtual creatures physical, and SID gets himself “outside the box.”  He’s a combination of serial killers programmed to train police in the virtual world.  Parker Barnes, one of said police, has to track him down.

The reason the opening quote is so interesting is that it’s an issue we wouldn’t expect a programmer to, well, program.  Computer-generated characters are aware that they’ve been created.  The one who creates is God.  Ancient peoples allowed for non-creator deities as well, but monotheism hangs considerable weight on that hook.  When evolution first came to be known, the threat religion felt was to God the creator.  Specifically to the recipe book called Genesis.  Theistic evolutionists allowed for divinely-driven evolution, but the creator still had to be behind it.  Can any conscious being avoid the question of its origins?  When we’re children we begin to ask our parents that awkward question of where we came from.  Who doesn’t want to know?

Virtuosity plays on a number of themes, including white supremacy and the dangers of AI.  We still have no clear idea of what consciousness is, but it’s pretty obvious that it doesn’t fit easily with a materialistic paradigm.  SID is aware that he’s been simulated.  Would AI therefore have to comprehend that it had been created?  Wouldn’t it wonder about its own origins?  If it’s anything like human intelligence it would soon design myths to explain its own evolution.  It would, if it’s anything like us, invent its own religions.  And that, no matter what programmers might intend, would be both somewhat embarrassing and utterly fascinating.


Making Memories

I’m a little suspicious of technology, as many of you no doubt know.  I don’t dislike it, and I certainly use it (case in point), but I am suspicious.  Upgrades provide more and more information to our unknown voyeurs, and when the system shows off its new knowledge it can be scary.  For example, the other day a message flashed in my upper right corner that I had a new memory.  At first I was so startled by the presumption that I couldn’t click on it in time to learn what my new memory might be.  The notification had my Photos logo on it, so I went there to see.  Indeed, there was a new section—or at least one I hadn’t previously noticed—in my Photos app.  It contained a picture with today’s date from years past.

Now I don’t mind being reminded of pleasant things, but I don’t trust the algorithms of others to generate them for me.  This computer on my lap may be smart, but not that smart.  I know that social media, such as Facebook, have been “making memories” for years now.  I doubt, however, that the faux brains we tend to think computers are have any way of knowing what we actually feel or believe.  In conversations with colleagues over cognition and neurology it becomes clear that emotion is an essential element in our thinking.  Algorithms may indeed be logical, but can they ever be authentically emotional?  Can a machine be programmed to understand how it feels to see a sun rise, or to be embraced by a loved one, or to smell baking bread?  Those who would reduce human brains to mere logic are creating monsters, not minds.

So memories are now being made by machine.  In actuality they are simply generated reminders based on dates.  This may have happened four or five years ago, but do I want to remember it today?  Maybe yes, maybe no.  It depends on how I feel.  We really don’t have a firm grasp on what life is, although we recognize it when we see it.  We’re even further from knowing what consciousness may be.  One thing we know for sure, however, is that it involves more than what we reason out.  We have hunches and intuition.  There’s that fudge factor we call “instinct,” which is, after all, another way of claiming that animals and newborns can’t think.  But think they can.  And if my computer wants to help with memories, maybe it can tell me where I left my car keys before I throw the pants containing them into the wash again, which is a memory I don’t particularly want to relive.

Memory from a decade ago, today.