Chatbots, UX, and Privacy
Chatbots, or conversational programs that simulate interactive human speech patterns, are a hot topic in UX right now. Microsoft CEO Satya Nadella recently claimed that “bots are the new apps” and that they are the interface of the future for tasks like ordering food and booking transportation. In San Francisco, tech elites already use a multitude of oft-parodied services like Wag to find dog walkers and Rinse to have their laundry done. Parody aside, the appeal of a single integrated interface to this multitude of services is obvious from a UX point of view, even as the social implications of so much “efficiency” are still being debated.
Back to the Command Line
Prompt bills itself as the “command line for the real world”. It uses text to integrate with over 1,000 services – including commerce (e.g. Domino’s Pizza), productivity (e.g. Evernote), and home automation (e.g. Nest). With Prompt, it’s possible to get directions from Google Maps or order an Uber to drive you there simply by sending text commands.
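To make the model concrete, here is a minimal sketch of how a text-command service in the spirit of Prompt might route incoming messages. The command names, handlers, and reply strings are hypothetical; Prompt's actual integrations and syntax are not documented here.

```python
# Hypothetical sketch of a text-command dispatcher, in the spirit of
# "command line for the real world" services. Nothing here reflects
# Prompt's real implementation.

def get_directions(args):
    # Placeholder for a call to a maps service.
    return f"Directions to {' '.join(args)} (via a maps service)"

def order_ride(args):
    # Placeholder for a call to a ride-hailing service.
    return f"Ride requested to {' '.join(args)} (via a ride service)"

COMMANDS = {
    "directions": get_directions,
    "ride": order_ride,
}

def handle_message(text):
    """Parse an incoming SMS-style message and route it to a handler."""
    parts = text.strip().split()
    if not parts:
        return "Try: directions <place> or ride <place>"
    cmd, args = parts[0].lower(), parts[1:]
    handler = COMMANDS.get(cmd)
    if handler is None:
        return f"Unknown command '{cmd}'. Try: {', '.join(COMMANDS)}"
    return handler(args)

print(handle_message("directions Golden Gate Park"))
```

The design choice worth noticing is that all the complexity lives behind a flat, memorized vocabulary of commands, which is exactly the usability trade-off discussed below.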
Screenshot from Prompt.io.
Driving everyone from the interactive world of apps to the visually impoverished world of the command line feels like a step backwards to many designers, including me. But we can interpret this shift as a response to the usability challenges of working across multiple apps on a mobile OS.
Chatbots versus Better Apps
Dan Grover’s excellent post Bots Won’t Replace Apps, Better Apps Will Replace Apps clearly illustrates the UX implications of what he describes as “Silicon Valley phone OS makers’ growing failure to fully serve users’ needs, particularly in other parts of the world.” I recommend reading the whole article, but the screenshots alone tell a compelling story.
Dan is a product manager at Chinese mobile messaging platform WeChat, which works to embed services in its core interface graphically rather than textually. His examples offer a view into the world of the Chinese-language mobile experience and serve as a counterpoint to the hype around chatbot interfaces. For example, he contrasts a pizza ordered via 73 taps in a conversational UI with the 16 taps of the graphical WeChat equivalent. Even though click/tap counts are an imperfect way to evaluate usability, they are one illustration that advocates of the so-called efficiency of chatbots might not have the whole story. Textual interfaces work well for some users in some contexts (system administrators and programmers have embraced them for decades!), but that doesn’t mean that they will work everywhere for everything.
Example transaction from Microsoft Bot Framework showing 73 taps to order pizza. Image from Microsoft.
WeChat interface for ordering Pizza Hut in-app, showing the 16 taps needed to complete the transaction. Image from Dan Grover.
These chatbot-versus-graphical interactions show different relationships between messaging apps and other special-purpose apps. For example, ride-sharing service Lyft uses the phone’s native text-messaging app to notify passengers that their ride has arrived, but passengers can’t order a ride from within the native messaging app. WeChat started as a messaging app and has expanded to take on activities done by special-purpose apps in other contexts.
Security Implications of Chatbots
Telegram, which tries to position itself as a platform that keeps users’ data secure in a credible way (despite significant challenges on that front), gives developers tools for building bots. It even offers prize money to developers using the Telegram Bot API. But how do privacy and security fit into this landscape? Should we be advocating for the equivalent of end-to-end encryption in this kind of chatbot universe?
From a human-centered point of view, communicating with a bot sets the end-user expectation that messages are being read by machines. It’s an easy inference that those messages are saved and archived by the bot owner and used as training material to improve the program over time. Just as people who call a customer hotline are informed that “This call may be monitored or recorded for training purposes,” people expect that some unseen entity is eventually reading the message. Otherwise how would it know what kind of pizza to send to which house?
The expectation that “secure” chats are read by unknown parties has the potential to change users’ mental models of privacy and confuse their understanding of what “secure messaging” means in other contexts. Further research is needed to understand the implications and how to communicate security properties of different platforms.
Chatbots as Security Coaches?
Chatbots are an intriguing output format for explaining security concepts. In this example from Slack, a bot messages me to let me know that a file’s sharing permissions have changed.
Screenshot from Slack.
This is an effective message because it’s actionable. The proactive information (which appeared to me in a private channel, with accompanying notification) gives a sense of immediacy. I know who shared what file with whom, and it’s easy to check the contents of the file. I am one click away from being able to ask Scout about the action she has just taken.
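A notice like this is straightforward to generate programmatically. The sketch below builds a permissions-change message and shows how it could be delivered through a Slack-style incoming webhook (which accepts a JSON payload via HTTP POST); the webhook URL, file name, and wording are placeholder examples, not Slack's actual notification internals.

```python
# Sketch of generating a proactive sharing-permissions alert.
# The delivery function targets a Slack-style incoming webhook;
# the URL and message details are placeholders.
import json
from urllib import request

def sharing_notice(actor, filename, audience):
    """Build the payload for a proactive, actionable permissions alert."""
    return {
        "text": (
            f"{actor} shared the file '{filename}' with {audience}. "
            "Reply here if this change was unexpected."
        )
    }

def post_notice(webhook_url, payload):
    """POST the notice to an incoming webhook (not invoked in this sketch)."""
    req = request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)

payload = sharing_notice("Scout", "budget.xlsx", "everyone in #general")
print(payload["text"])
```

The interesting part is not the plumbing but the content: the message names the actor, the object, and the audience, which is what makes it actionable.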
This approach could be adapted to a number of contexts. Many large service providers send notifications by email when a user’s password has been modified or other important account details have changed. A conversational UI could not only be a prompt and friendly way to share this information with users, but could offer users an opportunity to take immediate action if the change was unwanted. Thinking more aspirationally to connected homes, smart cities, and IoT applications, chatbots could help people understand the chain of custody of their data. For example, they could notify people that their image has been captured on video and shared with a third party – or offer them an opportunity to opt out of such a recording. The details of such systems would be complex, but new interfaces could help make the exchange of complicated information easier and more accessible.
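As a sketch of what "notify plus immediate action" might look like, the snippet below builds a generic password-change alert that pairs the notification with response buttons. The message structure is hypothetical and deliberately platform-neutral rather than any particular chat service's API.

```python
# Hypothetical, platform-neutral structure for an actionable
# account-change alert: the notification and the remedy travel together.
from datetime import datetime, timezone

def password_change_alert(username, changed_at):
    """Build a chat message that both informs and offers an immediate action."""
    return {
        "text": (
            f"The password for {username} was changed at "
            f"{changed_at.isoformat(timespec='minutes')}."
        ),
        "actions": [
            {"label": "This was me", "value": "confirm"},
            # The unwanted-change path is one tap away, not buried in settings.
            {"label": "Lock my account", "value": "lock"},
        ],
    }

alert = password_change_alert(
    "alice", datetime(2016, 5, 1, 12, 0, tzinfo=timezone.utc)
)
print(alert["text"])
```

The point of the shape is that the remedy rides along with the alert, instead of the usual email that tells you something changed and leaves you to hunt for the recovery flow.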
I’m optimistic that chatbots can help people understand how their data is being used. I’m especially excited by the potential to use chatbots not just to control commerce, but to empower us to manage our personal data. Privacy-minded people should look for opportunities to make chatbots more than just glorified mechanisms of corporate surveillance. We should strive to instead create tools that will help people understand their data and their capacity to control it in an actionable, friendly way.