“I’m sure you’re familiar with the Turing test. Mere formality, simple question of algorithms and computing capacity. What interests me is whether machines are capable of empathy. I call it ‘the Kamski test’, it’s very simple, you’ll see.”
Kamski in “Detroit: Become Human”
I recently watched an interview about female psychopaths and a walkthrough of “Detroit: Become Human” (don’t judge; as I see it, the game isn’t much more than a walkthrough anyway).
What was interesting was the combination: the two fit together neatly.
The game itself has a nice scene in which the creator of the androids poses an empathy test for androids: the Turing test is easy, but what about an android feeling empathy for another android?
The interview had a nice example of what makes a psychopath: being able to understand emotions, but not feel them. Another person hits his finger, and the psychopath understands that the person is in pain, but she doesn’t flinch, i.e., she doesn’t feel a tingling in her own finger.
So, putting two and two together (or just restating the point of the game): what about an AI that does have mirror neurons? One that not only understands and has a model of the world, but actually feels with it? That part of the AI might do a lot to prevent it from going homicidal, or rather, from not caring when it hurts humans.
(Update: And yeah, that means androids/AIs have to be able to feel pain. Otherwise it’s not going to work. Evolution was, unfortunately, right here. Pain is a great teacher.)
And yeah, if that particular part of the AI’s network gets corrupted, you’d have a psychopath AI. But as long as it’s running, it might do a lot to prevent harm (unless it goes Data and switches off its emotion chip).
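Just to make the idea concrete, here’s a minimal toy sketch in code (every name and number is hypothetical, a thought experiment rather than an actual design): an agent whose internal reward mixes its own pain with a mirrored fraction of another agent’s pain. Set the empathy weight to zero and you get the psychopath AI: the “understanding” part is untouched, only the “feeling” is gone.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """Toy agent whose reward mixes its own pain with mirrored pain.

    empathy_weight plays the role of the 'mirror neurons':
    1.0 = full mirroring, 0.0 = psychopath mode (understands the pain,
    but it doesn't hurt).
    """
    empathy_weight: float = 1.0  # hypothetical knob, not from the game

    def felt_pain(self, own_pain: float, observed_pain: float) -> float:
        # The agent always *understands* observed_pain (it's an input),
        # but only *feels* it scaled by empathy_weight.
        return own_pain + self.empathy_weight * observed_pain

    def reward(self, task_reward: float, own_pain: float,
               observed_pain: float) -> float:
        # Pain as teacher: it enters the reward as a penalty, so actions
        # that hurt anyone (self or other) score worse.
        return task_reward - self.felt_pain(own_pain, observed_pain)

empath = Agent(empathy_weight=1.0)
psychopath = Agent(empathy_weight=0.0)  # the 'corrupted' empathy module

# Hurting a human yields some task reward but causes observed pain.
print(empath.reward(task_reward=1.0, own_pain=0.0, observed_pain=5.0))
# -4.0: not worth it
print(psychopath.reward(task_reward=1.0, own_pain=0.0, observed_pain=5.0))
# 1.0: doesn't care
```

The point of the sketch is that empathy lives in the reward signal, not in the world model: both agents predict the other’s pain equally well, but only one of them pays for it.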
Might be well worth the neurons.