Next Gen AI, Truly

Okay, so it was a scary meeting.  It was about AI—artificial intelligence.  Specifically generative AI.  That’s the kind that makes up answers to questions put to it, or does tasks it’s assigned.  The scary part, to me, is that we are being forced to deal with it because tech companies have unleashed it upon the world without thinking through the consequences.  Such hubris gets us into trouble again and again, but it never stops us.  We’re sapiens!  You see, generative AI (GAI) is under no obligation to tell the truth.  It likely can’t even understand the concept, which is a human one based on perceptions of reality.  GAI simply provides answers based on the dataset it’s been fed.  It can generate texts and photos (which are so doctored these days anyway that we need a photo-hospital), which means it can, to borrow the words of a sage, “make a lie sound just like truth.”  We already have politicians enough to do that, thank you.

My real fear is that the concept of truth itself is eroding.  With Trump’s “truth is whatever I say it is” administration, and its ongoing aftermath, many Americans have lost any grip on the idea.  Facts are no longer recognized as facts.  “Well I asked ChatGPT and it told me…”  It told you whatever its dataset told it and that dataset contains errors.  The other scary aspect here is that many people have difficulty distinguishing AI from human responses.  My humble advice is to spend more time with honest human beings.  Social media isn’t always the best way to acquaint yourself with truth.  And yet we’re forced to deal with it because we need to keep evolving.  Those Galapagos finches won’t even know what hit ‘em.

Grandma was born before heavier-than-air flight.  Before she died we’d walked on the moon.  About two decades ago cell phones were around, but weren’t ubiquitous.  Now any company that wants its products found has to optimize for mobile.  And mobile is just perfect for AI that fits in the palm of your hand.  But where has truth gone?  You never really could grasp it in your hands anyway, but we as a collective largely agreed that if you committed crimes you should be punished, not re-elected.  And that maybe, before releasing something with extinction-level potential, you should at least stop and think about the consequences.  I guess that’s why it was a scary meeting.  The consequences.  All technological advances have consequences, but when it takes a lifetime to get to the moon, at least you’ve had some time to think about what might happen.  And that’s the truth.

6 thoughts on “Next Gen AI, Truly”

  1. Did you see the article about using AI to translate cuneiform tablets? It seems to me that the best uses of AI don’t rely on the machine to do the work for you, but to speed up the process. It’s always going to need humans to check facts and verify the accuracy of its work, but that doesn’t mean it’s completely useless.

    As an analogy, Wikipedia can be a pretty unreliable source in some areas, but it’s helpful in reminding me of things that I used to remember, but can’t get all the details of. And I can use its external references and citations (when there are any) as jumping-off points for more reputable sources.

    Likewise, the concept of a self-driving car is kind of scary, but sometimes I’d appreciate letting the car do the parallel parking for me, especially in a tight, but still manageable spot.

    I don’t want a ChatGPT writing sermons for me, but it’s been incredibly helpful in summarizing the work that I’ve already done, and brainstorming simpler ways to say something that I’ve made unnecessarily complex. It’s a better editor than writer.

    • Good points, all, Jonathan! My main concern with AI is that we all know viruses exist, and if AI can be infected, the results could be devastating if we allow it to do things for us. I’ve read some scary stuff about that already (self-driving cars being taken over by a hacker mid-trip, etc.).

      The Wikipedia analogy is good–you use it the same way I do, as a starting point to find reliable sources or as a refresher. It’s mostly self-correcting (mostly).

      Parallel parking should just be replaced. Really, the better solution is to have fewer cars, maybe?

      If you ever do a ChatGPT sermon please send it my way–I’d be interested in what programmers think a good sermon needs!

      As always, thanks for your thoughtful comments!

      • I have seen fictional versions of the driverless car that involve there being far fewer of them, because there’s no need for everyone to have a car: when I arrive at my destination, the car heads out to pick up its next passenger, and it rarely needs to park at all, parallel or otherwise. 🤨

        As to sermons, I have *many* thoughts on that! I’ve been teaching ChatGPT to write sermons the way I do: not writing the actual sermon itself, but going through the various reflection stages and identifying areas that I need to research. Any sermons that the AI itself writes have been a kind of scientific control to compare what I create to the drivel that comes from generative text.

        • That’s fascinating! I’d be interested in seeing the results sometime. A friend I met through this blog has asked ChatGPT some questions about religion. All I can say is that AI doesn’t yet “get” religion at all! Not to say that it might not help with sermon prep, but it was clear to me that it was fudging a bit…
