In a town ten miles from my house, a GPS system directed a driver onto the railroad tracks. The driver escaped, but the car was demolished. The same thing happened to another driver a few weeks later. Do we trust computers? Yes. Even when we shouldn’t.
We are most likely to trust them with factual information. Watson buzzed in with correct Jeopardy answers faster than its human competitors. Siri gives people directions (and even tells a few jokes). I suspect our trust grows with use and with how natural the interaction feels.
If that’s true, we will see more people driving onto the railroad tracks of life as interfaces become more familiar and feel more natural. And as computers learn about us and adjust their styles to suit us, we’ll get more use out of them, but we’ll also be more vulnerable to becoming too trusting. (I remember a time, before Photoshop, when people believed pictures couldn’t lie.)
Things will get even more complicated as the interfaces offer opinions (sometimes being the face for crowdsourced answers) and advice. We will need to penetrate another level to assess the credibility of the responses. This is already a problem where people (and search programs) have turned the Internet into an echo chamber of sometimes radical perspectives and where review sites have gotten clogged with biased views (sometimes by providers themselves).
Imagine how powerful computers as advisors will become as they watch our facial expressions and respond with their own faces in ways that inspire confidence. Take that a step further to body language. In theory, computers can become wonderful liars.
Can we turn this around? Can we use independent computer agents to assess the answers and provide warnings or ratings of credibility?
Perhaps we could go even further and create agents that will ask US questions and challenge our answers. Just as I make it a point to have people in my circle of friends and associates who hold divergent points of view, I would love to have an online Devil’s Advocate challenging my opinions and “facts.” This seems doable (perhaps first as systems that emulate pundits), and it might help us avoid some disasters.
And a more tragic case happened recently in Spain: a driver drowned after following GPS directions.
http://www.tomsguide.com/us/Spain-Drowning-Drive-Lake-La-Serena,news-8202.html
As Pattie Maes stated, delegating decisions to software agents poses two main challenges: competence and trust. I like the way you showed how an asymmetric relation between the two can lead to disaster. Do you know of examples in the other direction? I think predictive and recommender systems face an important challenge in gaining people’s trust even when they are very competent. For instance, Nate Silver’s algorithm-based approach to prediction has raised furious controversies.
By the way, your Socratic software agent proposal seems very appealing to me.