When Viv co-founder Dag Kittlaus unveiled his startup’s new personal assistant earlier this week, he showed off how easy it was to send his mother flowers, order a ride and book a hotel room through voice commands. “Have you ever seen a hotel booking that was that simple before?” he asked.
It was simple not only because Viv’s software could parse complex questions, but because Viv served Mr. Kittlaus just one vendor for each request (ProFlowers.com, Uber and Hotels.com, respectively) instead of a slew of search results.
A growing number of companies are trying to build new twists on mobile search engines by giving users “personal assistants” that offer simple recommendations for products and services. But a key question is whether the services are transparent about how they come up with those recommendations.
That very simplicity touches on a key question facing Viv and other personal assistants under development: How will they explain how they came up with product recommendations, including whether those recommendations are based on data collected about consumer habits or are the result of arrangements with companies? Without an explanation, consumers might not trust the recommendations and may therefore be less willing to use the services.
But being transparent about the reason for recommendations will be tricky, especially on a smartphone user interface that may be no bigger than a chat box, or on a mobile device with no screen at all.
Facebook faces a similar issue with Messenger’s M, which is now being tested and is run mostly by a few dozen humans training a machine-learning algorithm. In the near future, M will link to business bots to handle transactions, Stan Chudnovsky, head of product for Messenger, told The Information last month. But how M determines which business bot to contact is uncertain. If someone wants a hotel, for instance, will it recommend a booking through an Expedia-owned or a Priceline-owned merchant? Facebook officials didn’t return requests for comment.
Data on consumer preferences collected by the personal assistants is likely to be a major factor influencing recommendations. Mr. Kittlaus, for instance, noted that Viv suggested a hotel he’d stayed at before, implying it had learned that from other interactions. But that’s likely to reinforce longstanding privacy concerns about how companies like Facebook and Google rely on user data to sell advertising.
And the privacy and consumer protection challenges involved in personal assistants may be even greater because the goal of these interfaces is presenting consumers with an easier way to tap “buy.” The idea is that consumers won’t have to think as much about what company is selling a product or why it’s being recommended.
“These services are based on developing rich profiles about your preferences,” said Georgetown law professor David Vladeck, former director of the Federal Trade Commission’s Bureau of Consumer Protection. “The question is who will have access to them, what will they be used for and will your life be curated for you in a way you don’t notice or have control over?”
Some of these questions are starting to get worked out. Adam Cheyer, co-founder of Viv, told The Information earlier this year that companies building these assistants will face a “tradeoff” between protecting users’ privacy and building more robust, intelligent assistants. He said he thought Siri, which Viv’s founders created before selling it to Apple, hadn’t improved much since its launch in part because Apple chose not to leverage large amounts of user data. Viv wants to harness data to give people better results, but the company’s approach will be “to have a lot of transparency about what Viv knows about a user.” (Mr. Cheyer and Mr. Kittlaus didn’t return requests for comment this week.)
How M and other personal assistants decide to share user data with third parties is also worth watching. Messenger opened up its chat platform to some businesses last month, letting companies not only handle customer service interactions but also sell goods in the app via automated “bot” responses. Facebook so far has been careful not to give much data to businesses. Partners plugged into the Messenger API only get to see users’ profile pictures and names, and not even email addresses, one partner said.
“With the first-generation bots, we’re starting with minimal access. When consumers develop trust, we’ll get more access, which is the right way to do it,” said Puneet Mehta, CEO of the startup msg.ai, which partners with Facebook and runs artificial intelligence services for brands.