Why not treat a learning AI like you would treat a dog?

«All major theme parks have delays. When they opened Disneyland in 1956, nothing worked.»
«Yeah, but John, if the Pirates of the Caribbean breaks down, the pirates don’t eat the tourists.»
John Hammond and Ian Malcolm in «Jurassic Park»

One presentation at the MinD-Akademie was about autonomous vehicles. A side issue was the question of responsibility when a system keeps learning while it is in use. Learning is great for adaptive systems, but it raises the question of accountability: what if the system was fine when it was bought, but then learns in a way that makes it a danger to others?

Let’s face it: humans aren’t the best teachers, and it’s hard to be a perfect system when you adapt to an imperfect world.

I wonder whether the problem couldn’t be solved by treating these kinds of domestic robots like dogs. In a literal, positive way, not a figurative one.

After all, what happens when you buy a dog? You likely get it from someone you trust, or who at least makes a trustworthy impression, so whatever the dog learned in its first few months hopefully didn’t screw it up. The same goes for a domestic robot. Once it’s your dog, you are responsible for it. You have to train it and make sure it doesn’t make a mess in the house (if it didn’t already come housebroken). You can teach it tricks, make it guard the house, or give it other duties, but again, you are responsible. You may (in fact, must) keep it on a leash or make sure it doesn’t run away, at least in the beginning, and that it doesn’t run into traffic or bite the mailman. And if the dog does become disturbed, you have to deal with that as well (e.g., consult an expert or go to dog school).

The same could apply to domestic robots. First comes a phase in which you make sure it learns the right things, mostly under close supervision (with the benefit that you can switch the robot off); later it can mostly run on its own. And when problems begin to manifest (and if the robot is well designed, there will be early warning indicators that it is learning the wrong things), you can ask an expert. Hell, you can even reset it, provided you can «forgive» it for its mistakes. And yeah, you might even go for the robot equivalent of an attack dog. But once you go that route (and the same goes for dogs trained to attack humans), you had better know what you are doing.
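Just to make the idea concrete: the life cycle sketched above (supervised learning, an early-warning indicator, a reset as last resort) could look something like the toy Python sketch below. Everything here, the class name, the drift measure, the threshold, is a made-up illustration, not a proposal for how a real robot would be built.

```python
# A toy sketch of the "dog-training" life cycle: the robot adapts its behavior
# from (possibly imperfect) owner feedback, a monitor raises an early warning
# when the learned behavior drifts too far from the factory baseline, and the
# owner can reset it to factory state. All names and numbers are hypothetical.

class DomesticRobot:
    DRIFT_THRESHOLD = 0.5  # hypothetical limit before a warning is raised

    def __init__(self):
        self.factory_weights = {"politeness": 1.0}
        self.weights = dict(self.factory_weights)

    def learn(self, trait, delta):
        """Adapt a behavior weight based on owner feedback."""
        self.weights[trait] = self.weights.get(trait, 0.0) + delta

    def drift(self):
        """How far learned behavior has moved from the factory baseline."""
        keys = set(self.weights) | set(self.factory_weights)
        return sum(abs(self.weights.get(k, 0.0) - self.factory_weights.get(k, 0.0))
                   for k in keys)

    def early_warning(self):
        """The 'dog growls at the mailman' indicator: drift beyond the limit."""
        return self.drift() > self.DRIFT_THRESHOLD

    def reset(self):
        """The last resort mentioned above: back to factory state."""
        self.weights = dict(self.factory_weights)


robot = DomesticRobot()
robot.learn("politeness", -0.8)   # bad habits picked up from an imperfect owner
assert robot.early_warning()      # time to consult an expert...
robot.reset()                     # ...or, if all else fails, reset
assert not robot.early_warning()
```

The point of the sketch is only that «responsibility» becomes checkable: the owner supervises the `learn` calls, the manufacturer supplies the `early_warning` indicator, and the reset is always available.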

I’m not sure, and it’s just a thought, but it might work.