It's Time to Start Talking About Robot Morals


About the Episode

Computer programmers are injecting machines with consciousness and the power of thought. It's time we stop and ask, "Which thoughts?"

In this episode we hear how robots can become self-aware and teach themselves new behaviors, much the way a baby learns to wiggle its toes and then to crawl. Though this is happening now, Hod Lipson, Cornell researcher, tells us that uttering the word consciousness to roboticists is like saying the "C" word. It could get you fired. We say it's time to start talking about robot morals.

However you look at it, Google's self-driving car is a robot and it will be entering our lives soon. So we talk with psychologist Adam Waytz of Northwestern University about his experiments measuring how people form bonds with robots, and how we naturally project human characteristics onto machines — for better or worse — including a friendly driver-less car named Iris.  

By the end of this episode, we raise a lot of questions and offer a few answers about the ethics of living in a robot world. Please consider this the start of a conversation and let us know what else you want us to ask, answer, cover or investigate, including who you want us to interview next. 

You can get in touch with us through Twitter, @NewTechCity or email us at newtechcity (at) wnyc.org. And if you like this episode, please subscribe on iTunes, or via RSS. It's easier than finding your toes. 

 

VIDEOS:

We mention a few videos in the podcast. Here they are in the order they appear in the show. 

 

Watch the full event with Hod Lipson demonstrating his thinking robots. He shows off his "Evil Starfish" starting around 14 minutes in. It "gimps along" best at 28 minutes in.

 

And here is Google's promotional video for its first fully driverless car.
