When Robots Come To Pray
Sean Lorenz, Crunch Network Contributor
Sean Lorenz is the founder of Senter, a startup that seeks to improve chronic care with Internet of Things and deep learning in the home.
A developer colleague of mine recently went on and on about Google Photos. He knew my background in computational neuroscience and thought I would be interested in what Google was doing with deep learning. That night I moved all my iPhone photos from an external hard drive to The Magical Cloud, then forgot about it for a week. Like every other tired Boston subway passenger, I checked my phone religiously and opened the app to find images of my wife, kids and friends as separate photo clusters.
Well done, Google. Later in the day I brought up a certain wine I liked in conversation but couldn’t remember the name. I did, however, take a photo of the label and typed “wine” into the Google Photos app search for shits and giggles. Of course it found the photo of my wine — and that’s the moment I began to realize just how powerful Google’s technology is becoming.
The more jaded of you out there might say, “It classified items in some pictures. Big deal.” Well, my jaded friend, it is a big deal. Figure-ground segregation, i.e., the ability to discriminate an object in the foreground from what’s behind it, is something computer vision researchers have been working on for decades.
Today we can throw massive amounts of images into a deep learning algorithm and fairly accurately pick out a cow from the field in which it's grazing. The thing is, deep learning has actually been around as backpropagation (with some recently added tricks by machine learning godfather Geoffrey Hinton) since the days of Cabbage Patch Kids and Bruce Willis singing R&B.
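If you've never seen what "throwing images at a deep learning algorithm" actually looks like, here's a minimal sketch using an off-the-shelf pretrained classifier in PyTorch/torchvision. The photo filename is made up, and this is obviously nothing like Google's production pipeline, but the basic idea is the same: pixels in, class probabilities out.

```python
# Minimal image-classification sketch with a pretrained network.
# Not Google Photos' pipeline -- just the same basic idea, off the shelf.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing: resize, crop, convert, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

image = Image.open("cow_in_field.jpg").convert("RGB")  # hypothetical photo
batch = preprocess(image).unsqueeze(0)                 # add a batch dimension

with torch.no_grad():
    logits = model(batch)
    probs = torch.softmax(logits, dim=1)

top_prob, top_class = probs.max(dim=1)
print(f"predicted ImageNet class {top_class.item()} "
      f"with probability {top_prob.item():.2f}")
```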
Now that we have a combination of massive compute power and obscene amounts of data thanks to tech titans like Google and Amazon, deep learning algorithms keep getting better, causing the likes of Elon Musk and Stephen Hawking to speak up about the possible future dangers of artificial intelligence.
A few words of warranted caution from intelligent minds are often translated as "Skynet is coming!!!" in the general press. Can you blame them? Just about every movie with robots and artificial intelligence involves some sort of dystopian future requiring Schwarzeneggerian brute force to overcome our future overlords.
Despite being called “neural networks,” deep learning in its current form is not even close to how biological brains process information. Yes, vaguely speaking we process an input (touch, taste, smell) and multiply that by a weight (a synapse somewhere in the brain) to send an output (move my hand). But that’s where the similarity ends.
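To make that caricature concrete, here's the "input times weight" step as a single toy artificial neuron in Python (the inputs, weights and bias are made up):

```python
# The "input times weight equals output" caricature as one artificial neuron.
import numpy as np

def neuron(inputs, weights, bias):
    """Weighted sum of inputs passed through a nonlinearity (a sigmoid)."""
    activation = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-activation))

# Three made-up sensory inputs (touch, taste, smell) and three learned weights.
inputs = np.array([0.9, 0.1, 0.4])
weights = np.array([1.5, -0.8, 0.3])
bias = -0.2

print(neuron(inputs, weights, bias))  # a single "move my hand" signal in (0, 1)
```

A deep network is just many layers of these units stacked together, with backpropagation nudging the weights to reduce error. That's it, which is exactly why the comparison to a brain only goes so far.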
Remember our figure-ground example? The brain doesn't require knowledge of all existing priors to solve the problem. Infants are born with far more neural wiring than they will ultimately keep, and they prune it down as they figure out what is important in the world around them. In the visual system, babies wire their wee brains by learning basics like line orientation, depth perception and motion. They then use quick eye movements, called saccades, to assess what's happening in a scene, combining those glimpses with what they've learned about shape and depth to know where a coffee cup ends and the table begins.
Companies like Neurala and Brain Corp. are forgoing the typical flavors of deep learning to build adaptive biological models that help robots learn about their environment. In other words, a camera lens could act as an eye, sending signals to AWS, where models replicate the human retina, thalamus and primary visual cortex on up through the middle temporal and inferior temporal cortices, for higher-level understanding of "cup" or "table."
Biologically inspired neural models require massively parallel computation and an understanding of how cortical and subcortical regions work together to elicit what we call consciousness. The real cause for concern will come when tech giants discover the limitations of their current deep learning models and turn to neuroscientists for coding functions like detecting your wife's face, driving around potholes or feeling empathy for someone who lost a loved one.
This is when things get interesting. This is when multisensory integration, cognitive control and neural synchrony combine to give rise to something new — qualitative experiences (or qualia) in non-biological systems. This is when embodied machines learn from their experiences in a physical world. The Internet of Things (IoT) is the precursor to this. Right now, IoT devices are mostly dumb telemetry devices connected to the Internet or other machines, but people are already starting to apply neural models to sensor data.
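To give a flavor of what "applying neural models to sensor data" can mean today, here's a deliberately tiny sketch in PyTorch. The sensor channels and activity labels are invented for illustration, and the telemetry is random noise standing in for real readings.

```python
# Toy illustration: a tiny feedforward network mapping a window of
# home-sensor readings to an activity label. Channels and labels are invented.
import torch
import torch.nn as nn

SENSOR_CHANNELS = 4   # e.g., temperature, humidity, motion, door contact
WINDOW = 16           # 16 consecutive readings per channel
ACTIVITIES = 3        # e.g., "asleep", "cooking", "away"

model = nn.Sequential(
    nn.Flatten(),                              # (batch, 4, 16) -> (batch, 64)
    nn.Linear(SENSOR_CHANNELS * WINDOW, 32),
    nn.ReLU(),
    nn.Linear(32, ACTIVITIES),
)

# One fake batch of telemetry: 8 windows of noise in place of real sensor data.
telemetry = torch.randn(8, SENSOR_CHANNELS, WINDOW)
scores = model(telemetry)
print(scores.argmax(dim=1))  # predicted activity index per window
```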
What we learn from processing sensors on IoT products will soon carry over to robots with touch, vestibular, heat, vision and other sensors. Just like human infants, robots with bio-inspired brains will make mistakes, motor babbling their way through the world while constantly updating from their sensors to learn deeper and deeper associations with what's around them.
There's a famous philosophy-of-mind thought experiment called Mary's Room, in which a scientist named Mary has spent her entire life in a black-and-white room but has read everything there is to know about color theory. One day Mary is allowed to leave the room and sees a bright red apple. Nothing she read about the color red could prepare her for the conscious experience of "redness" in that moment. Can robots have an experience of redness like Mary did? Or is it all just vapid linear number crunching?
I believe the only way for robots to become truly conscious and experience "redness" would be for them to be embodied. Simulations won't do. Why? Because it is the physical, electrical synchrony of all those different brain regions working together at the same time that elicits an "OH MY GLOB" moment in response to a novel, pleasurable stimulus. If you're interested in the details of the physical dependencies for robot consciousness, check out my post here.
So now we are living with conscious robots. Crazy. What does a mixed society of reasoning, empathetic non-biological machines and human beings look like? And, finally, getting to the topic at hand — what happens when a robot wants to join our church, synagogue or temple? Some critics see religion as a nefarious byproduct of human evolution, but a majority of scholars believe it serves evolutionarily advantageous purposes.
For example, Jewish tradition has numerous food and body restrictions centered on cleanliness. Avoiding "unclean" foods, or practicing circumcision, likely increased the Jewish population's evolutionary fitness in a time before hand sanitizer. There are, of course, other social and group-dynamic benefits as well. All this is to say, if we are able to replicate human brain function in a synthetic brain, there's a good chance something like religious and spiritual sentiments could arise in robots.
As a practicing Christian, I find this possibility gives me chills. Throughout Judeo-Christian history, humans are told that we are built in the image of God — the Imago Dei — but now there may be a robot that tells us it had a spiritual experience while worshipping in a church service on Sunday. Did it really? Was that a truly conscious experience? And is the soul separate from our conscious life or not? If robots are conscious, does that mean they have souls, or is that something different? I hope this is making both atheists and believers alike squirm.
I have no idea what the difference between the soul and consciousness might be. This gets at the very heart of who we are as humans, and whether or not some piece of us physically lives on after we die. Are there higher dimensions that house our soul, then send down insights via consciousness to our four-dimensional world? Or is this all we get?
As someone who, for better or worse, holds to a faith in something larger than myself, I truly want to believe the former. Either way, there is likely going to be a time when we have to address both scenarios as machines adapt and become more like us.
Amen.