Compassion, AI and the uncanny valley
24 June 2016
In his second column in AdExchanger's "Data-Driven Thinking" series, our very own Al Boyle, Global Client Partner and NA Head of Strategy, argues that artificial intelligence can help advertisers be more compassionate and connect more meaningfully with their audiences. But the approach is not without pitfalls:
“One of the potential pitfalls of using AI to communicate is the “uncanny valley” effect. Originating in the field of robotics, this hypothesis holds that human emotional response to robots becomes increasingly positive as robots appear more human, but dips sharply when robots look almost-but-not-quite human, bearing an “uncanny” resemblance. The same hypothesis can also explain human responses to AI behavior, as well as our reaction to badly targeted ads.
When an ad is completely irrelevant, we are unlikely to notice it, never mind respond to it. As ads become more relevant to us and our situation, we are more likely to respond positively. However, when we reach a point where ads are highly targeted in the wrong way, we may respond negatively.
We’ve all seen unfortunate pairings of advertising messages and website content, such as ads for knives next to articles about a stabbing. We’ve all been stalked around the internet for days by creepy ads for that pair of shoes we viewed online but bought in-store, or that hotel we considered but decided not to book. The systems behind these ads perceive their environment and attempt to maximize their chance of success by showing a relevant message. But they’re getting it wrong, and our reaction is often worse than if the ads weren’t targeted at all.”
Read the full article on AdExchanger.