Catching Issues in Evolving Interfaces

You may remember this summer’s media frenzy surrounding adultery-matchmaking site Ashley Madison. In brief, the company was hacked, and its user database was stolen and posted online with great fanfare. Amidst the stories focusing on noteworthy individuals and the demographics of the membership as a whole, some people have been investigating other aspects of the site’s operations, from their “Affair Guarantee” package to their practice of charging to delete a user’s account from their servers.

"Leaky" Interfaces

One researcher uncovered a quirk of the site’s password-recovery form that allows anyone to check whether an email address is associated with an account. In security, we often refer to such a flaw as “leaking” sensitive information.

Usually, leaks in sign-in or password-recovery forms involve the text of the interface – e.g., a sign-in form that responds “The password entered does not match the one on file for this email address” as opposed to the broader “The email address and/or password entered do not match our records.”

The Ashley Madison password-recovery form uses the same text whether or not the email address entered is in their database. However, in one case the text-input field and the button stay present on the screen, and in the other case they disappear.

Image: Screenshots of Ashley Madison’s password-recovery form when the email address is not (left) and is (right) part of their database.
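A non-leaky version of such a form can be sketched in code. This is a minimal illustration, not from any real codebase – the function and parameter names are hypothetical – but it shows the key property: the message and the UI state returned are identical whether or not the address is on file, so an outside observer learns nothing.

```python
# Hypothetical password-recovery handler (all names are illustrative).
# The response is byte-for-byte identical in both branches; only the
# side effect (an email) differs, and only the real account holder
# can observe that.

def handle_password_recovery(email, user_db, send_reset_email):
    """Return the same message and UI state regardless of membership."""
    if email in user_db:
        send_reset_email(email)  # visible only to the account holder
    # Same text and same visible form elements in both cases:
    return {
        "message": "If that address is on file, we've sent a reset link.",
        "show_input": True,
        "show_button": True,
    }
```

The point is structural: both code paths converge on one response object, so a later refactor can’t accidentally make the two cases look different without touching this single return.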

Supporting graceful product evolution

Given the site’s focus on discretion – and the carefully-worded textual content of the form – it’s unlikely that someone sat down and intentionally designed it to behave this way. It’s more likely that this interaction snuck in as parts of the site’s architecture were reworked over time. Since the folks working on the site likely don’t reset their passwords on a regular basis (much less compare the results when the email address is and is not in the database), it’s easy to see how the team missed this behavior when it was introduced.

This is an example of why it's important to think of designing not just the product, but also processes to support the product's graceful evolution over time.

Here are some ideas to help catch interface problems that sneak in:

  • Create UX reviewers. Just as teams conduct code reviews before a set of code changes is committed, it can be useful to have UX reviews as well. These can be performed by a designer – advisable when an interface is being implemented against a set of mockups that the designer created – or by another engineer when the change is small. The goal is to make sure that at least one other person takes a solid, critical look at the user-facing implications of the changes, just as the code implications are examined.
  • Create an adversary persona. Many teams craft user personas to help them design interaction patterns that will meet the needs of their diverse user population. Why not also create one or more personas representing attackers? (Thanks to @gretared for her take on @jorm's idea of creating a troll persona – "because you can't design for good without understanding the evil"). This adversary persona can help UX reviewers identify ways that the interface might inadvertently leak information.
  • Regularly audit against known best practices. Armed with your attacker persona and other approaches for threat modeling, try to identify a set of principles or clear protection goals that you can then use to evaluate the user experience on a regular basis. For example, many websites require users to reauthenticate before accessing sensitive parts of their account; this is a best practice that protects against both accidental and some intentional forms of data compromise. Keep the list of best practices as short as you can, to make it feasible to schedule a regular review that ensures your interface hasn't evolved too far from its original privacy-driven design.
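The reauthentication practice mentioned in the last point can be sketched briefly. This is an illustrative example under assumed names (there is no standard `requires_recent_auth` API); the idea is simply that sensitive actions check how recently the user authenticated, and send stale sessions back through login.

```python
import time

# Hypothetical reauthentication check (names are illustrative).
# Sensitive actions require that the user authenticated within the
# last REAUTH_WINDOW seconds; otherwise they must log in again.

REAUTH_WINDOW = 300  # five minutes

def requires_recent_auth(last_auth_time, now=None):
    """True if the session's last authentication is fresh enough."""
    now = time.time() if now is None else now
    return (now - last_auth_time) <= REAUTH_WINDOW

def change_email(session, new_email):
    """A sensitive account change gated behind a freshness check."""
    if not requires_recent_auth(session["last_auth_time"]):
        return {"action": "redirect_to_login"}
    session["email"] = new_email
    return {"action": "ok"}
```

Centralizing the check in one helper is what makes it auditable: a periodic review only needs to confirm that every sensitive endpoint calls it, rather than re-deriving the policy per screen.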

Screenshots of Ashley Madison password-recovery forms captured by Troy Hunt and used on his blog, which is published under a CC BY-SA 3.0 license.


Video Roundup

It’s always great to attend security and privacy conferences in person. But in cases where you have to miss an event, online videos of the talks can be a great way to stay current with the ongoing conversation.

Art, Design, and The Future of Privacy

As I promised back in September, the videos of the event we co-hosted with DIS Magazine at Pioneer Works are available online. The DIS blog had a great writeup with summaries of the different panels, and you can find transcripts over at Open Transcripts.

Learning from Drones

Last week, I encountered discussions of drones in two wildly different contexts: in an academic presentation at USENIX Security 2016 and on the TV comedy Portlandia. Despite the distance between the genres, both offered perspectives with important UX implications for privacy preservation. In the opening keynote of USENIX Security, Dr. Jeannette Wing examined the trustworthiness of cyber-physical systems, which are engineered systems with tight coordination between the computational and physical worlds.

Notes from the Internet Freedom Festival

I really enjoyed my time at the Internet Freedom Festival in Valencia, Spain. I was inspired and humbled to meet so many talented people as part of a global event about internet freedom. From powerful conversations about privilege to UX design jam sessions, it was a great week. With more than 600 people registered and 160+ sessions, there was more terrific discussion than I could be part of, but here are some themes that stuck with me.