Trans-Human

“Rapture of the Nerds,” an article in this week’s Time magazine by Jessica Roy, has me scratching my head. Or it would if I had a head. That is, if I were an uploaded consciousness in a machine. A transhuman. The idea that consciousness is transferable to hardware has been gaining momentum over the last several years, during which humans have evolved into illogical machines. Roy’s article about Terasem, which is being called a new religion, explores what the leaders of the movement teach about human consciousness. You write down your thoughts in the most intimate detail, download them, and voilà, send them out to the cosmos. Your soul has been saved. If only we knew what a soul was. Transhumanism has been promising an attenuated kind of immortality for its adherents, but as I sit down to write out my thoughts, I’m aware that there’s always a lot more going on in my brain than the simple ideas I can scrawl down before they evaporate. There are quick wisps of thought about my loved ones, my schedule (what do I have to do today?), what I ate for supper last night, how I feel—all of this while I’m putatively thinking about writing a blog post. Schizophrenia of the soul?

Faith

So much of thought depends on having a biological body. From early days I have been aware that this body will die. I was taught that the soul would live on, but this thing I call consciousness seems pretty closely tied with this thing I call life. And once the biological input ends, that part will be over. I think. In other words, my thoughts are tied to my biological existence. How can I even begin to write a minute fraction of them down accurately? I used to toy with an idea called meta-thinking. It was something I came up with as a plot element in a science fiction story. The idea was that those who can think two thoughts at once would eventually take over from those of us with lesser mentalities. Those who have two minds in one brain are, it seems, a step closer to the divine.

I use technology on a daily basis, but I am a disingenuous advocate. Some of the most transcendent moments I’ve experienced have been outdoors with technology left behind, under a sunny sky with an ocean breeze blowing in my face and those I love walking beside me. I think I’ve already broadcast that out into the universe by simply being a part of it. I don’t need circuits and motherboards to make me more of what I am. Technology is the follower. It is consciousness that will always remain in the lead. And we still really don’t even have an idea of what consciousness actually is. It’s certainly not this computer that’s sitting on my lap. And I do have to wonder, once my consciousness becomes a robot, what it will do with this strange, primate urge I have, when I’m puzzled, to scratch my head.


Programming God

Robots have been part of my world far longer than I ever recognized. Still, growing up in a small town in the 1960s, their impact was fairly minimal—they may have had a part in the manufacture of the car we drove, and perhaps helped prepare some of the products we bought—but those robots were far away. Far more present were those on television who, for the most part, were funny and helpful. This month’s Wired magazine runs a story entitled “Trusting Our Robots,” by Emily Anthes. The point of her short article is that people feel more comfortable with robots that are programmed to appear uncertain. We don’t trust robots to drive our cars, as she points out, but we give them more of that old-fashioned primate sympathy when we make them look like they’re having a hard time. Just a couple weeks back Time magazine had a blurb on how we’re now at the point of programming drones to kill without human input. Add a dose of uncertainty and we get a glimpse of what it must be like to be gods.

Underneath our exteriors, we all know that robots do what they are programmed to do. In many respects—physically, especially—they are superior to us. Nevertheless, human knowledge is not perfect. We, too, are prone to uncertainty. Our robots aren’t better than we are, only more efficient. Doubt is a human quality. Perhaps our most endearing. As Ms. Anthes notes, “even when confronted with evidence of our own inferiority, we resist a robot’s help.” We have evolved over millions of years to interact with other creatures. Those non-biological entities we’ve created and endowed with artificial intelligence (sound familiar?) somehow can’t equal the right we’ve earned from struggling against, and along with, nature for these many eons. Would God really trust us with the keys to the universe?

An early plan for a robot.

Robots, we are told, are our inevitable future. Some visionaries look forward to uploading human consciousness (even though we have no idea what it is) into a machine and, with replaceable parts, living forever. Before the dead and resurrected Jesus, according to the Gospel of John, stood Thomas—the man some traditions said was Jesus’ very twin—and yet he doubted. As much as we claim otherwise, we adore Thomas for it. Evolving even in a world full of religion—itself a product of our evolution—we are so unsure. Our robots, however, programmed by us, have no doubts. Even when they act confused, it’s only because we tell them to. Our minds, as Wired tells us, resist letting robots drive the car for us. We let them pull the trigger, however, and pray our programmers got it right.


Double Blind

When I read Isaac Asimov’s three laws of robotics as a child, I assumed that I’d not live to worry about them in real life. What we don’t know can indeed hurt us. Time magazine frightens me sometimes. This week’s offerings include a small blurb about drones. When I was a kid, a drone was a bee—dangerous in its own right—or it was a verb for the monotone wisdom of an uninspired teacher or preacher. Now drones are robotic planes that can operate themselves without human input. Time reveals that technology has been developed that would allow drones to kill without human input. Asimov’s laws have become truly science fiction. Proponents argue that “collateral damage” might be minimized if we allow robots to kill with precision, while others have argued that the research should be prohibited. The fact that it has been developed, however, means the line in the sand has been crossed. If it has been done once, it will be done again.

An MQ-9 Reaper in flight (2007)

Even as a daily user of technology, I am beset by a deep ambivalence. Maybe if it weren’t for the fact that every once in a while my computer (most often at work) freezes up and issues a command I can’t understand, I might feel a little more secure. Instead I file a ticket with IT, and when they call on me, sometimes even a specialist can’t figure out what went wrong. Once the bullets are flying it’s a little too late to reboot. Maybe I’m just not yet ready to crawl into bed with a technology that might kill me, without feeling.

Just five pages earlier Time notes that 1 in 5 is the “Ratio of people who would have sex with a robot, according to a U.K. study.” All things are fair, it seems, in love and war. The part of the equation that we haven’t accounted for in our artificial intelligence is that thought requires emotion—which we don’t understand—as much as it requires reason, upon which we have only a toddler’s grasp. And yet we continue to build more and more powerful devices that might kill us with ease. Isaac Asimov was a prescient writer and a forward thinker. He was from the generation that aspired to ethics being in place before technology was implemented. At least as an ideal. We’ve reversed the order in our world, where ethics is continually playing catch-up to the new technologies we’ve invented. Now it’s time to decide whether to make love to it or to say our final prayers.


Mad Charles

Moving to New Jersey was made easier by Weird N.J. I first encountered the franchise while still domiciled in Wisconsin, when the series of books produced a Weird Wisconsin edition. I read it cover to cover and learned about the magazine from it. When weirdness would have it that we’d be moving to that self-same New Jersey, I began reading the magazine religiously. Lately, however, it has become more mainstream and less weird, but still, it is a great source of local information. We landed in Somerville because of its educational reputation and closeness to Piscataway, where I worked. I’ve always had a thing about being able to pronounce the name of the town in which I live (and I’ve even resided in Oconomowoc), so Piscataway was out. In any case, Somerville High School has an engineering program and the expected robotics team that goes along with such pedagogy. When my daughter joined the team, the whole family was drawn into four years of endless fundraising and promotion for an underfinanced club. So it was weird when I saw a story called “Rock em’ [sic] Sock ‘em Robot: Somerville N.J. vs. Mad Charles, the World’s First Singer Songwriter Karate Robot” in the latest Weird N.J. In my four years in the club, I’d never heard of Mad Charles.

Robots and religion are topics I’ve often related on this blog, so I read with amazement that about two decades before FIRST Robotics ever got its start, there was a somewhat famous robot in Somerville. Eugene Viscione was the inventor of Mad Charles, a robot that was built to help improve karate moves. The robot, as often happens in small towns, went on to other things, such as cutting records that, according to the article by J. A. Goins, are quite rare. In the 1970s, however, Mad Charles was a local sensation, now all but forgotten some four decades later. There were even Mad Charles T-shirts available. While we sat dreaming up new ways to get money out of the locals, and even set up a booth for the Somerville street fair not far from where Mad Charles at one time could have been found, nobody mentioned the karate robot. I doubt anyone had heard of it.

History is a fickle friend. Of course, being from a small town myself, I know it is very hard to get noticed, and even harder to be remembered. So on those sleepy, pre-dawn weekend bus rides to robotics competitions, it was sometimes easy to consider how one gets overlooked. This past November, FIRST robots opened the Macy’s Thanksgiving Day Parade, though NBC didn’t make a big deal of it and many hardly noticed. Somerville’s latest robot was not among the horde (we have always had a problem keeping enough charged batteries on hand), but as the robots rolled through Herald Square, I was thinking of Mad Charles and a legacy that has been forgotten. Come to think of it, I guess that is weird after all.

A Somerville robot (center)


Read Until Ragnarok

A WPA marionette theater poster for R.U.R.

“The play’s the thing. Wherein I’ll catch the conscience of the king,” quoth Hamlet. Shortly after the Velvet Revolution, my wife and I were shown about Prague by a friend who’d grown up in Communist-ruled Czechoslovakia. As we watched the changing of the guard, he told us how Václav Havel, the final president of Czechoslovakia, had been a playwright and appreciated the need for pageantry in such civil ceremonies. I remember being impressed with what this playwright had accomplished while America had just survived being ruled by a lackluster comedic actor whose major contribution had been the myth of trickle-down economics. Havel was at one point ranked among the world’s top hundred intellectuals. Somehow Bedtime for Bonzo just didn’t seem to be worth bragging over.

Within another year or two, Czechoslovakia would dissolve, but the world would remain impressed by the Czech playwright. Karel Čapek was another Czech author and playwright of considerable import. Čapek, “public enemy number two” of the invading Nazis, died before the National Socialists could reach him. His brother died in Bergen-Belsen. Čapek is the author of the play R.U.R., or Rossum’s Universal Robots. Indeed, he coined the term “robot,” around which this play revolves. Over the holidays I finally had a chance to read R.U.R., and I was immediately struck by how prominently the meme of God appeared, and also by how prescient Čapek was. Like his contemporary Franz Kafka, Čapek had an unsullied vision of human propensities. Not having seen a production of R.U.R., or knowing how it would play out, I was nearly buried under the layers of meaning that such a brief piece could convey. Harry Domin, the general manager of R.U.R., supplies the world with robots for the easement of human labor. These robots eventually acquire souls, through human tampering, but also rely on humans for their reproduction. All of humanity, save a sole survivor kept alive to make new robots, is destroyed. Alquist, the last man alive, realizes, when one robot will lay down its life for its mate, that they have become a new Adam and Eve, and that humanity’s existence is truly at an end.

Although I’ve read about robots since I was a child, I didn’t know about R.U.R. until my daughter joined her high school robotics club. Robots have, in many ways, dominated my life since. Although Čapek’s play is funny in parts, it is dystopian and profoundly troubling. Our robots have evolved since the period of World War I, just after which the play was written, but our moral sensibilities have not kept pace. Helena, the eventual wife of Domin, feels that robots should be given a soul. At first they feel no pain, mental or physical. Once they acquire these, however, they begin their inexorable march to the elimination of humankind. Reading of how technocrats believe that our true function is now to service the robots who do much of our work today, while unemployment just won’t release its grip on the flesh, my thoughts go back to Karel Čapek, Václav Havel, and William Shakespeare. The playwrights create, but the actors just ape.


Arduino Anything?

Before my daughter enrolled in college I’d never heard of an Arduino. Since her high school robotics team leadership has now passed into more able hands, I figured that I’d go back to my naive days of not thinking about automated mechanical devices, devoting my gray matter to grayer matters. Still, over the past several weeks robots keep seeking me out. A spread in Delta’s in-flight magazine for July featured robots, as did an alumni magazine for August. Now the issue of Time for September has a story about robots. When my daughter sent me the TED video about Arduino, I knew I’d better try to pay attention. Technology will change us, whether we want it to or not. It seems that from the first knapping of flint our destiny was set: manipulating our world and making it into something we create. Robots make us gods.

An Arduino Uno (R3)

The real issue, however, in the TED video is that Arduino is open-source. Open-source means that the designs, instructions, and software for the device are voluntarily released for anyone to use, modify, and share, rather than locked up under restrictive copyright. Academics throughout the world are increasingly favoring open-source material—not just software and hardware, but the knowledge behind them. In my work at a for-profit (i.e. “commercial”) publisher, I know that open-source is a huge concern. It used to be that open-source information, which is to say free information, was considered inferior. Like the early stages of recycled grocery bags. Arduino puts the lie to that supposition. An international team has made a device that is extremely flexible in application, and is giving it away. Many academic journals, traditional cash cows for the publishing industry, are now going open access, the publishing world’s version of open-source. Those of us who research and write don’t often do it for money—we just want our ideas shared. Commercial interests, however, are heavily vested in turning a profit from information. It is a clash of worldviews.
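To give a sense of how approachable the whole thing is, here is a minimal sketch of the kind that ships as a stock example with the open-source Arduino environment (the example is mine, not anything from the TED video); it does nothing more than blink the little LED built onto boards like the Uno.

```cpp
// A minimal Arduino "blink" sketch, of the sort bundled as a stock example
// with the open-source IDE. LED_BUILTIN is the constant Arduino defines for
// the on-board LED on boards such as the Uno R3.

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);     // configure the LED pin as an output
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH);  // turn the LED on
  delay(1000);                      // wait one second
  digitalWrite(LED_BUILTIN, LOW);   // turn the LED off
  delay(1000);                      // wait another second, then repeat
}
```

Everything around that little program, from the example code to the compiler to the board schematics, is published for anyone to copy, modify, and build upon, which is precisely the giving-it-away that has commercial publishers worried.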

Never one of the great capitalists, I find open-source an intriguing concept. The problem is that those who think need to find a way to make a living in a society over-awed by spending. Universities charge tuition because professors have to be paid. Publishers charge a week’s wages for textbooks because editors have to be paid. Knowledge—the most valuable commodity people possess—fits uneasily with entrepreneurial ideals. This blog is open-source. Maybe that is why it has never garnered much attention, like a first-generation recycled paper bag. These same ideas, however, when presented in the context of university classrooms were subject to fees of thousands of dollars. Registration filled up every semester. The source is the same, a guy with a Ph.D. from a major research university making observations about how religion impacts each and every one of us, often in unexpected ways. Some things you can’t even give away. Well, if trends continue I shouldn’t be surprised if someday even this is taken over by a robot. Right, Mr. Čapek?


Paging Dr. Asimov

Who remembers Rock ’em Sock ’em Robots? Plastic “robots” in the boxing ring trying to knock each other’s blocks off was a form of entertainment for kids of the ’60s before such things as humanoid robots actually existed. So when Boston University’s alumni magazine had an article about dancing robots, I had to see what was up. As regular readers will know, I’ve been exploring some of the problems with reductionism lately. This idea, that humans and animals are just fleshy machines, breaks down when we try to design robots that can do some of the most basic of human activities. Sometimes we dance and we don’t know why. Apart from WALL-E’s dance with EVE, robots have trouble getting the concept. Boston University’s John Baillieul notes that this isn’t about “some high school guy who had trouble getting a date, so you get a robot. The ultimate goal is to understand human reaction to gestures and how machines may react to gestures.” Having actually been a high school guy who never even got to the prom, I’m wondering how depressed our robots get when the fem-bots all look the other way.

Rock ’em Sock ’em Robots

The reductionistic outlook suggests that we can eventually program robots to respond as humans would, responding fluidly to situations, allowing them to override their “instinct,” which, the article implies, equals programming. We have no idea what instinct is. It is something all biological creatures have, from the heliotrope following the sun to the human dancing her heart out. Do we want machines to replicate our most intimate emotions? Even our most reliable chip-driven devices sometimes freeze up or rebel. My car has recently gotten the idea in its mechanistic brain that the right-hand side rearview mirror should be rotated as far to the right as possible. We bicker about this all the time when I get in to drive. Well, machines know best. They, after all, are the shape of the future.

So programming robots so that they can react in real time to non-verbal cues, like all sentient beings do, is a desideratum of our mechanistic Weltanschauung. Notes Rich Barlow, the article’s author, “bats, for example, camouflage their motions so that they can sneak up on insect prey, a fake-out familiar to anyone who’s tried to swat a pesky fly.” My question is who is the pesky fly in this robot-human scenario? Who acts irrationally and unpredictably? Isn’t our instinct to smash the fly a result of our annoyance at it landing, yet again, on our sandwich with its dirty feet? And what is that stupid dance that it does when it’s all over our food? Reductionism must, by definition, reduce instinct to the level of a kind of genetic programming. Even this aging blogger, however, knows what it is to dance without knowing why. He also knows what it feels like when your date goes home with somebody else, something to which he’s not convinced we want robots calculating an “instinctual” response.


They, Robots

Somehow I knew robots would continue to be part of my life. After all, they are a staple of science fiction and they are indeed also a staple of science fact. As my association with FIRST Robotics taught me, robots are everywhere. (And they can play frisbee better than I can.) So when I saw an article in the Chronicle of Higher Education entitled “Robots Aren’t the Problem: It’s Us,” I knew I was in for a scolding. It’s not so much the robots that worry me, it’s what they say about us. People thrive in environments of complexity. Even a simple robot has me standing next to a bunch of teenagers scratching my head. I don’t know what half the parts are and have no idea what the other half do. Even the components can be complex. A good case can be made that the natural world is equally, if not more, complex. I can imagine how, for instance, being confronted by a tiger in the wild would offer a bewildering variety of complex implications. And yet, robots are the world we’ve constructed for ourselves.

Not every job is immediately threatened by mechanical replacement, but we know that in the industrialized world some jobs have disappeared. Our choices of how to find meaningful vocation are being circumscribed by the machines we make. Like God, we make them in our own image. Unlike God, we make them more powerful than ourselves. Richard Florida, in his Chronicle article, notes that some claim the robots will free us to become more human. Only if the economic barons will allow it. Even today, at the very beginnings of the robotics revolution, it is awfully hard to find a satisfying job. Even with very extensive education. I know this from experience. At the end of the day you end up working to make money for somebody else. Robots didn’t steal my vocation; business did.

Al-Jazari’s robots

I appreciate Florida’s point, but I wonder if we can’t put a finer point on it. All people are greedy, to a point. In most of us a human, all too human, conscience starts to bother us when we realize that we are unfairly advantaged. Some people even actually do something about it. Those who run the business, however, didn’t get to the top by obeying the dictates of conscience. The apotheosizing of money demands that humans be treated like, well, robots. We are all servants to those who aren’t shackled with quibbles and moral qualms. Robots, on one end, are reducing the number of jobs. On the other end entrepreneurs are seeking whom they may devour. The mass of humanity is caught in the middle. When it is time to beg for mercy, from what I’ve seen, the far safer bet is with the robots.


RoboStop

Although few objects are as soulless and mechanistic as robots, I still feel strangely emotional about them. Had my daughter not been interested in them, I would never have become involved with FIRST Robotics, even serving for a year as an officer in the Team 102 foundation to help raise the thousands of dollars needed to run such a club. Like most people with a background deep in the humanities, I would not have pondered too deeply how much of ourselves we put into our machines. Right now I’m reevaluating that hypothesis.

No one doubts that an artist or musician puts her- or himself into her or his work. Those who do it best are most highly valued (usually only after the artist, having lived a difficult life, has died). We admire those who are able to catch the human spirit in such forms of expression while many scientists inform us that there really is no spirit at all—it is just chemical reactions and electrical circuits in the soft tissue of our brains. When we see the Mona Lisa, or hear Beethoven’s Seventh, however, we know they must be wrong. What we make becomes part of us. And I’m thinking that may apply to robots too.

I’ve just attended my last FIRST Robotics competition. It is difficult to convey, if you’ve never been, what such an event is like. Hundreds of screaming high school students excited about engineering and the thrill of competition. A playing field is constructed to exacting specifications, and six teams in two alliances face off their creations, which emulate human—sometimes superhuman—behavior. All the while the thumping rock beat of loud music and the play of colored lights give the event the emotional charge of a football game and homecoming dance rolled into one. Only you really don’t have to move very much at all.

I’m not a robot designer or builder. It is difficult to imagine anything further from my training (except perhaps accounting). Still, I’m a little let down after my last FIRST competition. Four years ago it seemed so novel, and there have been some difficult moments along the way. I’ve seen kids build robots that play soccer, hang inflatable tubes, play basketball, shoot frisbees, climb towers, climb poles, and do many other seemingly impossible tasks. I’ve been up before dawn to ride a chilly school bus across the state to compete, coming home in time to fall in bed to get up early for work the next morning. But most of all, I’ve seen kids putting themselves into more than machines. I’ve seen them putting themselves into a team. Although we didn’t win this year, and next year I probably will be consumed with other concerns, I am proud to have been, in my small way, a part of Team 102. Way to go, Gearheads! Maybe robots do have souls after all.

Team 102 in the final alliance


Next to Godliness

Catholics, secularists, and even a Pharaoh or two. Loud, pounding music. Dancing teenagers. It must be FIRST Robotics season again. Although I’m ambivalent about the implications of a world filled with robots, I can’t help but be impressed by what high school students can do when they are mentored so closely by adults eager to share the tricks of the trade. If you’re not familiar with FIRST Robotics, here are the basics: each January a new game objective is released. Participating high schools throughout the world have six weeks to plan, design, construct, and program a robot to perform the tasks spelled out. Since this is a busy time of year for many schools, dedicating extra hours to building a robot leads to complaints and loss of sleep—maybe a skipped supper or two. When they come together on the playing field, however, all that is forgotten and the wonder is that kids, who are often disparaged in our society, have managed to construct a working, complex machine capable of tasks impossible for many adults (for example, doing chin-ups).

Every year I can’t help but think how like a religious service these events are. The robots are like deities to be served and the technology flits about like mechanized angels. There is an increasingly complex hierarchy of officials telling you what you can’t do (now this is beginning to feel like work!). At the end of the day, however, the kids get to be the stars in a competition that puts brains over brawn. And the robots are treated with extreme deference, because we know that we wouldn’t stand a chance if they had a will anything like the deities of yore.

The religious imagery, however, is never absent. Technology represents humans doing things without divine intervention. These are empirically devised devices, performing according to the laws of physics. And yet, teams from Catholic high schools, bearing mythologically-laden names, join in the world where no gods need apply. Robots, as initially named by Karel Čapek, were human servants, the ultimate in godliness—making images in our likeness to do our bidding. And yet we can’t escape the language of religion when thinking about our own creations. The fascination applies to non-parochial schools as well, with some teams claiming names echoing themes from holy writ. Creating autonomous beings is next to godliness. We make our own future, and, god-like, we hope that nothing goes wrong.

Humans and machines


A Tale of Two Bees

We’re nearing the competition season for FIRST Robotics. The animated, mechanical creatures created from scratch since early January are now set to compete for a kind of ultimate, ultimate frisbee. Only you can’t call it “frisbee,” for trademark reasons. Ironically, drone bombers have been in the headlines this past week. Drones are robotic planes that fly their missions with human pilots sitting safely hundreds, or even thousands, of miles away from the action. People are beginning to wonder—is this ethical? I pull out the Scientific American I purchased at Bush Intercontinental in Houston last week. There’s an article about robo-bees. In a scare that seems like it could have come straight from the X-Files, I’ve been reading about the disappearance of bees. There are people seriously worried about this. It does seem that we failed to learn the lesson of Rachel Carson, and a land of milk and honey just doesn’t appeal without the honey.

The robo-bees are the size and roughly the shape of biological bees. They can be programmed to behave like bees and pollinate plants that our missing bees have been, well, missing. There may be hope for the flowers after all. But I wonder about the honey. No doubt, technology will come to the rescue. Those labs that gave us sucralose, aspartame, and stevia can surely invent a golden, viscous liquid sweetener that drips from a pipette. No cause for worry here. We can recreate the natural world in the laboratory. Honey has been reputed to have medicinal effects, but we can synthesize medicines in the lab as well. You might not want to dribble those on your biscuits, however.

Honey is made from nectar, the mythological food of the gods. Hinduism, Buddhism, and Judaism all recognize the religious significance of honey. Those of us who’ve been stung realize that a price has to be paid for such divine sweetness. The gods are like that. Roses have thorns for a reason. Not that I’m not impressed with the technology behind robo-bees. I am astounded that tiny robots can be built to fly and perform as we understand nature dictates for the genus Apis. They don’t, however, have the minds of bees. Mind is not the same as brain, as we’re beginning to learn. And minds are not limited to Homo sapiens. I recall when in our arrogance we thought we could improve the productivity of bees (capitalist bees) by breeding them with their African cousins, biologically separated by an ocean. Nightmares of the resulting killer bees haunted me. Yes, I had been stung as a child. Just by regular, garden-variety bees. From those painful events I learned a valuable lesson. We tinker with nature at our own cost. I, for one, am willing to deal with real-life stingers to taste the very food of the gods.

True bee or not true bee?


The Computer of Dr. Caligari

To be human is to be ethical. Not always in the best way, unfortunately. Nevertheless, our moral sensors are pretty much constantly running as we try our best to make the right moral decisions. This thought occurred to me while reading Jonathan Cohn’s article, “The Robot Will See You Now,” in this month’s The Atlantic. Having been a sideline watcher of FIRST Robotics for about four years now, I have heard countless stories of how robots perform some surgeries more efficiently than clumsy humans can. Cohn’s article starts off with the impressive potential of IBM’s Watson to sort through millions and millions of bits of data—far beyond any human capacity—and make more informed recommendations about medical treatments. After all, Watson won on Jeopardy!, so we know “he” is smart. But he isn’t really a he at all. Still, in our reductionist world, where humans are just “soft machines,” computers and robots should be quite capable of helping us heal. To survive longer.

I am a veteran of Saturday afternoon science fiction movies and weekday episodes of Star Trek and Battlestar Galactica (original series, both). The present is starting to feel like that impossible future I watched as a starry-eyed child. But what of Dr. McCoy? I remember literally cheering (something I haven’t done much in recent decades) when DeForest Kelley’s name appeared on the opening credits of Star Trek when season three began. Bones was always one of my favorite characters—the doctor who didn’t trust the machines upon which he relied so heavily. He was a down-to-earth country doctor, who seemed to feel out of touch with the human (and occasional alien) element with machines interposed between them. Medicine is, after all, a very personal thing. Our bodies are our souls. I know; scientists tell us we have no souls. Embodiment studies, however, suggest otherwise. That robot coming at me with needles and scalpels may know how to heal me, but does it have my best interests at heart? Where is its heart? Its soul?

Better health care is certainly much to be desired. But in a country where our lawmakers continually debate whether the poorest should have access to Watson and his ilk, I wonder where ethics has gone. Robot doctors, I’m sure, will not accept patients with no insurance. Does not compute. Having gone without health insurance myself for several years, despite holding advanced degrees, I know that if I’d had a health crisis I’d have been rightly ranked down there with the blue collar folk that I consider kin. You see, to be human is to be ethical. That doesn’t mean we’ll always make the right decisions. It’s a safe bet that Watson can play the odds mighty finely. And the soulless machine may be making the decisions about who lives and who does not. Now that I have insurance again, when I’m on that cold slab I may have a shot at seeing a robot doctor. If that ever happens, I’m going to hope that Dr. McCoy is at least standing in the corner, and that those waiting outside the comfortable walls of affluence will somehow enter Watson’s scientific calculations with me.


Robotics FIRST


I knew it! It was right there on the cover of Wired magazine. “The Robots Take Over.” And it is also the very day of the FIRST Robotics kickoff, the day when Dean Kamen and his team announce to thousands of high school kids, teachers, engineers, and interested parents what the 2013 FIRST Robotics competition will be, spurring us into six frenzied weeks of designing, planning, and building a robot to take to competitions. First Atlanta, then the world! It must’ve been their plan all along.

The article in Wired, by Kevin Kelly, does have hints of cheekiness throughout, but for the most part is on target. How many of us already use computers or some kind of robotic devices to complete our jobs? Kelly points to the inevitable: robots can do it better. The upside is that when robots take away jobs they create new ones, like Charlie Bucket’s dad getting a job repairing the robot arm that took his job away at the toothpaste factory. If you don’t want a tech job, too bad. That’s what the new definition of work is becoming, since labor is already being taken over by robots. Those who can look far enough ahead can see robots doing, as Kelly puts it, any job. What makes this sound apocalyptic to me is the fact that we, as a society, undervalue education. What will the undereducated do? Their jobs are the first to go. I feel the tremors of a revolution that hasn’t even started yet. People need something to do.

It is apparently without irony that Kelly suggests that any job people do, including in the service industry, can be done by robots. I am an editor. A robot may be able to find grammatical errors (Word and Pages already do this), but it can’t capture the soul of a writer. We write for the enjoyment of other people who experience being people in the same way that we do. There is an inherent arrogance in the Artificial Intelligence movement that believes (yes, it is a belief) that intelligence and mind are the same thing. There is no room for a soul in this machine. Many biologists would agree: we’ve looked, no soul. But even biologists know that they’ve got an identity, aspirations, contradictions, and emotions. It is the unique blend of these things that makes what we can, for convenience, call the soul. There are entire industries built around the care for that soul.

Many scientists are still betting on the end of religion, the ultimate repository of those who believe they have souls. Religion, however, is not going away. When we see robot psychiatrists, robot social workers, robot clergy, robot writers and artists, and robot Popes, we’ll know the apocalypse has truly transpired.


Kermit’s Secret

When I was a post-graduate student in that Gothic city of Edinburgh, I decided to spend some time reading Umberto Eco’s Foucault’s Pendulum. It was intended as harmless entertainment, but as anyone who has read it knows, the story soon unravels into an unbelievable world of dark religions that haunt a naive protagonist. While I was reading it, a packet, hand-addressed to me, with no return address, came to my student mailbox. The contents consisted of several tracts, in German, warning of the dangers of Satanism. No letter, no explanation. Foucault’s Pendulum had me paranoid already, and this strange package completely unnerved me. Well, I’m still here to tell the tale. While reading Victoria Nelson’s brilliant The Secret Life of Puppets, I learned that she had a strange episode while reading the same novel. It was an apt synchronicity.

Nelson is a scholar who should be more widely known. I found her because her recent Gothicka was prominently displayed in the Brown University bookstore in May. I saw it after taking a personal walking tour of H. P. Lovecraft sites. Synchronicity. I had read, in a completely unrelated selection just a couple of months ago, Jeffrey Kripal’s Authors of the Impossible. Synchronicity. For many years I have honed my Aristotelean sensibilities, following devotedly in the footsteps of science. Problem is, I have an open mind. It seems to me that to discount that which defies conventional explanation is dirty pool in the lounge of reality seekers. I have always been haunted by reality.

I’m not ready to give up on science. Not by a long shot. Like Nelson, however, I believe that there may be more than material in this vast universe we inhabit. Indeed, if the universe is infinite it is the ultimate unquantifiable. The Secret Life of Puppets is alive with possibility and anyone who has ever wondered how we’ve come to be such monolithic thinkers should indulge a little. For me it was a journey of discovery as aspects of my academic and personal interest, strictly compartmentalized, were brought together by an adept, literary mind. Religion and its development play key roles in the uncanny world of puppets. Those who wish to traverse the realms they inhabit would do well to take along a guide like Nelson who has spent some time getting into the puppets’ heads.


Battle Bots

Our local high school robotics meetings start up again this week. Actually, they’ve been going on all summer since robots do not require the rest and mental downtime that we mere creatures of flesh do. Glancing through the headlines of the Chronicle of Higher Education I saw a leading article on a topic I’ve been reading about: the military use of robots. On a college campus visit last semester I came across a robotics display and, since I’ve picked up some of the lingo, I engaged an engineering student sitting nearby. He told me that most of the funding for robotics at the collegiate level (there, anyway) came from the Department of Defense. Earlier this year I had read Wired for War, a book as stunning as it is frightening. In fact, P. W. Singer is cited in the article. What made this interesting, however, was the role of Ronald Arkin, a Georgia Tech professor of robotics and ethics. Dr. Arkin believes robots to be morally superior to humans at making battlefield decisions. He’s not alone in that assessment.

The more I pondered this the more troubled I became. Morality is not a scientific purview. Ethics that have been quantified always fail to satisfy because life is just a little too messy for that. Who is more morally culpable: the policeman who shot a thief dead, or the man who was only stealing bread because his family was starving? Hands down the most challenging undergraduate class I took was bio-medical ethics. It was thornier than falling into a greenhouse full of roses. Sick minds and reality cooperated to draw scenario after scenario of morally ambiguous situations. I left class with two more things than I’d taken in: a headache and a conviction that there are no easy answers. Having a robot vacuum your floor or assemble your car is one thing; having one decide who to kill is entirely another.

The article cites the rules of war. The main rule seems to be that no matter what, some people will always kill others. We try to sanitize it by making the inevitable death-dealing follow ethical conventions. While religion often gets a bad rap these days, one of the things that it is capable of doing pretty well is providing an ethical foundation. People may not always live up to the standards, but religions only in very rare situations give people an excuse to hurt others. Nearly all religions discourage it. The rules of a science-based morality would likely follow a logical algorithm. Unfortunately, there’s more gray than black or white in this equation. Algorithms, in my experience, are not so forgiving. So as I get ready for my first robotics meeting of the year I need to remind myself that the robots are capable of great good as well as great evil. As with humans, it all depends on who does the programming.