Latest news & updates

  • Security is a “Design in Tech” Trend

    We’re honored to see Simply Secure included in the Design in Tech Report 2017. I’ve learned a lot from previous versions of the reports, which describe design trends from a business ecosystem perspective. I wrote up highlights from the 2016 Report, and it’s exciting to see how far the industry has come in a year.

    John Maeda presented this year’s report on trends in design and business at the SXSW festival. Simply Secure is mentioned at 56:00 of the video.

    John Maeda presenting the Design In Tech Report 2017 at SXSW.

    Making Progress

    We still have lots of hard work ahead of us to reach the vision I described last year: a Silicon Valley that treats user privacy as a core value proposition rather than customer data as a commodity. Even so, I’m glad to see this attention on security. Designers are urgently needed to help build products and services people trust. And there are clear connections between security and several of the other trends highlighted in the 2017 Design in Tech Report, including inclusive design and voice- and chat-based interfaces.

    Start Here

    If you’re new to security and want to improve your digital footprint, begin by reading our Four Concrete Security Tips, or Securing Your Digital Life Like a Normal Person by Martin Shelton. Designers getting started with security challenges can also check out How to Fight Phishing: Security for Designers.

    Let’s Keep This Going

    Simply Secure has a public Slack channel for discussing issues related to user experience, security, privacy, and design. Email us for an invitation.


  • Professional knowledge and IoT

    This year's Consumer Electronics Show (CES ‘17) showcased numerous internet of things (IoT) devices but was found wanting when it came to security concerns. In his UX of IoT report from CES, Scott Jenson assesses that “companies really, really, REALLY want to make home automation systems,” but how can we begin to consider the ethics when developers don’t even consider security risks?

    IoT systems pose two security challenges. First, they can be manipulated as surveillance infrastructure to target vulnerable people. Second, insecure devices can be conscripted into remotely controlled networks that cause harm, such as taking websites offline. In fall 2016, malicious actors harnessed about 500,000 hacked devices, such as CCTV cameras, to form the Mirai botnet. The botnet used distributed denial-of-service (DDoS) attacks to take parts of the internet offline, including Twitter and websites across Liberia’s national top-level domain.

    Understanding risks

    When people put IoT devices in their homes, they open themselves up to the risk that those devices could be used by a malicious actor to learn more about them. As former U.S. Intelligence Chief James Clapper has said, “in the future, intelligence services might use the [Internet of Things] for identification, surveillance, monitoring, location-tracking, and targeting.”

    Some people may not be concerned about governments spying on them, but if they enjoy using the internet, they should consider how their devices can be conscripted as bots that harm others. Botnets are a direct threat to the open internet: the cost to secure a site against such an attack is estimated at $150,000 or more per year. Since such protection is unaffordable for individuals and most organizations alike, DDoS attacks are effectively a form of censorship. It’s not a stretch to see how hacked devices could be commandeered to silence journalists.

    An insights toolkit for people who build connected things

    The upsides of IoT are self-evident, but at Simply Secure, we want IoT systems to be built with a bias toward protecting people’s privacy. As a first step, we have been working with Mozilla’s Open IoT Studio to identify gaps in professional knowledge. Our shared goal is to assemble tools that developers can use to protect people’s privacy and the open internet.

    At MozFest in London and ThingsCon in Amsterdam, Simply Secure conducted research to understand participants’ priorities around IoT and privacy. We created what we call the Insights Toolkit for Building a Trustworthy IoT. This toolkit is meant to guide discussions that will help developers and technologists identify technologies that inspire and worry them and to learn where they can go for more information. The toolkit is on GitHub and includes questions, worksheets, and other assets. It can be forked, translated into other languages, adapted, localized, and continually improved.

    Photos of study materials, such as colored bits of paper and user responses on a visual survey.
    A workshop we facilitated at MozFest in London, where we used the Insights Toolkit to understand developers’ priorities for IoT security. (October 2016)

    Top insights: Resources and fighting botnets

    Based on discussions with about 30 developers at MozFest, the greatest challenges in getting IoT developers to adopt better security practices are 1) identifying reliable sources of security information (e.g., blogs) that can answer their skepticism and 2) equipping developers who are already on board with stories that can convince their colleagues of the benefits of good security practices.

    “Stackoverflow isn't going to cut it for security.”

    Developers are curious and eager to get more information, but there is no consensus about reliable sources of security information. MozFest and ThingsCon were consistently mentioned as places where professionals expect to get information about privacy-preserving technology, but there is still a need for more specific, actionable recommendations that developers can deploy immediately. Stack Overflow, a knowledge-sharing website that is popular among developers, was mentioned as a useful resource for some questions but viewed as unreliable for security information.

    “What’s the problem with using a Raspberry Pi to get a cat to come to a web cam?”

    One of the most evocative stories from our MozFest workshop came from a developer whose colleague had programmed a Raspberry Pi to move a mechanical arm and make a rattling noise that would summon his cat to a home video camera. The colleague scoffed at the possibility that the lack of security in his device left his home vulnerable. To him, the Raspberry Pi hack was a bit of harmless fun. It didn’t occur to him that his program could be manipulated by a third party, either to gather private data about his home devices or to become part of a botnet that would harm other parts of the internet. When challenged about these security risks, he retorted, “it’s just scaremongering by people who want your money.”

    “It’s just scaremongering by people who want your money.”

    The colleague made a fair point. Both consumers and developers question the value of privacy because they suspect that companies have manufactured concerns in order to sell security products. For years, marketing teams told consumers that antivirus products were essential for their computers. However, research reports such as Google’s comparison of experts’ and novices’ online-safety behavior show that experts don’t consider antivirus protection an important element of security. Those kinds of mixed messages can make people distrustful.

    Conclusion: Building professional knowledge

    There is a lot of commercial hype about IoT applications (brilliantly skewered by the Internet of Shit Twitter account). Developers who want to build their professional reputations intensify the hype in their eagerness to master new frameworks and APIs. From their vantage point, privacy advocates bring an unwelcome message that quashes good-natured enthusiasm for building new things. However, no one doubts that IoT devices can benefit society in many ways (e.g., saving energy and conserving natural resources). The challenge for people who care about an ethical IoT is to support developers in building the things they want to build while incorporating security practices into their workflows.

    Simply Secure’s initial research indicates that developers can be skeptical of security claims and unsure where to go for accurate technical advice. At Simply Secure, we want to distribute open resources to help developers do their best work while preserving privacy for IoT applications and beyond. If you’re working on an IoT project, use our Trustworthy IoT toolkit to build awareness with your colleagues about security.

  • Four concrete security tips for the new year

    In November, I had the opportunity to speak at the O’Reilly Security Conference in New York City. I shared a number of insights that we have discussed here on the Simply Secure blog, including findings from Ame’s New York City study on privacy for mobile messaging.

    I also sat down with Mac Slocum to talk about the importance of human factors in security (you can watch the interview here). Our conversation focused on security from a software developer’s perspective, advice on how to make tools more human-centric (e.g., talk to your actual users), and perspectives on big challenges in security (e.g., the Internet of Things).

    However, most people aren’t security-focused software developers and aren’t intimately familiar with the technology that can help protect their online data. If you’re a UX expert and interested in learning more about security and privacy, your own digital footprint is a great place to start. Here are some concrete steps that you and your loved ones can take in the new year.

    Steps for better privacy & security

    As I have said before, security isn’t a binary property. Unlike Santa’s clear-cut naughty-vs.-nice list, the analysis of whether a system or app is secure has many shades of gray. Different people worry about different threats to their data; to use security terminology, everyone has their own threat model. To help you thoroughly consider the risks facing your data, each suggestion below spells out the kinds of threats it is meant to protect against.

    Illustration showing Google product icons behind both a gated fence and a high-tech locked forcefield bubble.
    Two-factor authentication adds an extra layer of security to your account by requiring both your password and a second bit of information to sign in. Image from Google’s 2-step verification page.

    1. Use two-factor authentication

    Whether it’s called two-step verification, login approvals, or login verification, two-factor authentication requires you to have both your password and a “second factor” before you can sign in to your account. In most cases, this second factor is a code generated on your mobile phone or sent to you as an SMS message, but some systems (including Google, Dropbox, and Github) support special USB devices or voice calls.
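
For the curious: the codes most authenticator apps generate follow the time-based one-time password (TOTP) standard, RFC 6238. The app and the server share a secret and each derive a short code from the current time, so the code can be checked without ever being transmitted in advance. A minimal sketch in Python (the secret below is the RFC’s published test value, not a real credential):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Time-based one-time password (RFC 6238) built on HMAC-SHA1 (RFC 4226)."""
    counter = int(for_time if for_time is not None else time.time()) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # "dynamic truncation": pick 4 bytes of the digest
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC test vector: ASCII secret "12345678901234567890" at t=59 seconds
print(totp(b"12345678901234567890", for_time=59))  # → 287082
```

Because both sides compute the same function of the shared secret and the clock, an attacker who has only your password still can’t produce a valid code.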

    Protects against: Two-factor authentication prevents an attacker who has your password from gaining access to your account. You don’t have to be a famous person or have especially sensitive data to be a target of this kind of attack; most victims are everyday people. The vast majority of the online public would benefit from using two-factor authentication, especially for their email account, which can serve as a gateway to all their other accounts. If you are an activist or someone facing dedicated attackers, it is even more essential that you enable two-factor authentication on as many accounts as possible – and that you generate codes on your phone rather than receive them via SMS.

    Words of caution: If you lose your phone or other second factor, be prepared: it may take a week or more to regain access to some accounts. Many services provide a set of single-use backup codes that you can print out and save for emergencies. Tucking them into your sock drawer is generally sufficient, but if you’re worried about targeted threats, you might consider writing them down by hand in a place that is harder for others to find.

    Screenshots of a password-generation interface, one where strength meter is low and one where the strength meter is high.
    1Password is one of the more reputable password managers. In addition to following best practices around security (the company has collaborated in a number of third-party audits and documented the application’s security architecture publicly), it features handy details like a password generator with an engaging strength meter. Screenshot from 1Password 6.5.3.

    2. Use a password manager

    I was a late convert to using a password manager. I thought that my own schemes for remembering passwords (and, in some cases, reusing them for low-value sites) were sufficient. But at a certain point – after a big data breach became public, and after I almost fell for a phishing attempt – I realized that I needed to admit my limitations. I now recommend 1Password to my friends and family. The default is their cloud-based version, which lets you easily sync passwords across devices. As of this writing, they also offer versions for teams and families, as well as a non-synced edition for a flat license fee.

    Protects against: Password managers make it easy for you to use long, complex passwords that are hard for attackers to crack. They also let you use a unique password for every site or application, which prevents an attacker from using a data breach on one platform to compromise your data on another. Finally, they make it less painful to change your password if a breach happens. They thus protect you from malicious attackers seeking to gain access to your personal information through a compromised password. Again, most attacks against passwords aren’t targeted at famous people but at ordinary members of the public. Even your “low-value” accounts, such as old social media profiles or email addresses, are worth protecting. Attackers often use them to try to extort money from your friends or to harvest personal information that they can then use to attack a higher-value target.
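
A password manager’s generator is, at heart, a cryptographically secure random sampler over a large character set; the resulting strength comes purely from length and alphabet size. As an illustration (this is a generic sketch, not how 1Password specifically works), Python’s secrets module is enough:

```python
import math
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation  # 94 symbols

def generate_password(length=20, alphabet=ALPHABET):
    """Draw each character independently using a cryptographically secure RNG."""
    return "".join(secrets.choice(alphabet) for _ in range(length))

def entropy_bits(length, alphabet_size):
    """Theoretical strength of a uniformly random password: log2(alphabet_size^length)."""
    return length * math.log2(alphabet_size)

password = generate_password()
print(password)  # a fresh random password each run
print(round(entropy_bits(20, len(ALPHABET))))  # ≈ 131 bits, far beyond practical cracking
```

Note the use of secrets rather than the random module: the latter is predictable and unsuitable for credentials.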

    Words of caution: Don’t use the first free password manager you come across. If you use a cloud-based password manager, try to avoid accessing it from a public computer that may be infected with spyware, such as one in an internet café or a hotel business center. If you are in a pinch and have to do so, change your vault password (the password that you use to unlock your password manager) as soon as you can get to your own computing device. If you are an activist who is worried that your government might be able to compel the password management company to share your data, consider using a version that isn’t based in the cloud. For example, since KeePass is an open source password manager built by a decentralized, borderless community of developers, it would be hard to legally compel them to share your data.

    3. Lock and encrypt your devices

    When you turn on or wake up your computer or phone, do you use a password, PIN, pattern, or fingerprint to sign in? If not, even a casual attacker left alone with your device for just a few moments can get access to your data. While having to unlock your screen multiple times a day can be annoying, it is an important protective step for many people. Similarly, encrypting your device’s long-term storage (e.g., its hard drive) prevents slightly more dedicated attackers from stealing your data by plugging the device into their computer and scanning it – a method that can otherwise expose data even on devices with a screen lock.

    Protects against: Screen locks prevent would-be attackers from getting into your device when they have physical access to it. This includes cases where you forgot your phone in a taxi or where your computer was stolen from your home. It may also include cases where your device is in a locked environment that others have access to, such as a locked hotel room, or during targeted searches such as at international border crossings. Encryption also protects against more dedicated attacks by people who have physical access to your device for an extended period of time. With the important caveat noted below, both device encryption and screen locks pose very little downside and significant upside.

    Words of caution: If you encrypt your device, it’s important to write down the password, PIN, or pattern you use to unlock it in a safe place. Losing access to your unlock code means that you permanently lose access to the data on the device. If you are worried about targeted in-person threats, screen locks that employ codes or passphrases may be a better choice than fingerprints, as there’s evidence that biometric readers can be fooled.

    Screenshot of a dialog box prompting the user to install a Windows update.
    It’s easy to ignore update notifications when you’re busy. Don’t postpone them for too long! Screenshot by mynetx, CC BY-SA 2.0.

    4. Install updates

    Writing software is a human process, which means that all software can have flaws. Software updates decrease the likelihood of your data falling prey to known vulnerabilities. Just as a homeowner regularly changes the batteries in a smoke detector and cleans out the gutters, it’s important to make sure the devices and software you rely on have up-to-date software and firmware. This includes your phone and computer operating systems, your web browser, and your apps. It also includes smart TVs, game consoles, and routers. If you’re not sure how to tell whether something is up to date, a web search for “check whether [product name] is up to date” often yields good instructions.

    Protects against: Software updates protect against digital attacks that target known vulnerabilities. These can be manual, such as a malicious hacker trying to steal your data, or automated attacks, including worms that jump from device to device. Once again, these attacks aren’t designed just for famous or powerful people but anyone who has devices on the internet.

    Words of caution: Make sure that you’re using the official channel to get authentic software updates. Don’t download updates from links that you see in emails, which may be phishing attacks, or in online advertisements, which may be vectors for malware infection.

    More resources

    The beginning of a new year is always a great time to pick up new habits and new knowledge. For some additional guides on getting started with privacy- and security-conscious practices, check out Martin Shelton’s Medium stream. To learn more about the big picture of protecting your data from surveillance, which can be especially important if you’re an activist or someone who might be subject to particular scrutiny by your government, check out EFF’s site on surveillance self-defense. Many of its resources are useful for everyday people, too, such as its overview animation on password managers or its tutorial on encrypting your iPhone.

    Are you a designer or UX researcher interested in learning more and integrating a greater awareness of privacy and security into your practice? Join our Slack community!

  • Essential non-technical skills for working in security

    There’s a misconception that highly technical skills like cryptography are required to work in security. That’s not true. With critical threats to internet freedom and individual privacy, there is an urgent need for designers to get involved with security projects.

    UX designers are an important part of reframing the conversation about security. Instead of assuming that security’s UX needs to start with ways to discourage undesirable behavior, we should start from a positive mindset of elucidating its benefits to users. Here are three ways that non-technical UX skills can improve security.

    Privacy as part of a brand promise

    Copywriters and brand strategists have a role to play in protecting privacy, so think of their skills as part of the UX toolbox. Canadian VPN TunnelBear uses gentle humor and illustrations to convey the benefits of security to customers. Instead of the militaristic language of defensive cyber-security, TunnelBear communicates a warm but firm commitment to privacy.

    Screenshot of the TunnelBear website, which describes its security and privacy benefits in detail.
    Description from the website

    Other places where skilled writing can communicate how your service protects privacy include the product description in app stores and during new feature updates.

    Service design for two-factor authentication

    Composing messages and notifications is hard, but instead of being generic and detached, copywriters can cultivate a more personable presence.

    Screenshots of a smart oven companion app. There are lots of notifications, many of which are contradictory or hard to understand.
    Image from Mark Wilson’s review of the June oven in Fast Company
    These notifications from an IoT oven (which have since been rewritten) show just how difficult it is to write copy for an alert.

    Coming at security from a service-design lens would improve the UX of two-factor authentication (2FA), which adds an extra layer of protection beyond a password. For example, Google accounts with 2FA enabled will send a code to users’ phones. But these messages are a missed opportunity to build a relationship with users who are taking positive steps to protect their privacy.

    Screenshot showing a series of messages containing authentication codes from Google.
    Google’s periodic text messages for 2FA codes

    Interrupting people’s workflow when they are trying to access their accounts isn’t ideal, but 2FA messages are an overlooked touch point. Considering the rise of SMS chatbots, these messages could be opportunities to applaud people for secure behavior or to act as a concierge for security practices. For example, such a chatbot might answer questions like “How do I make sure my router has the latest security patches?” or “Are other people getting a warning when they try to log in to the Bank of America website?”

    Interaction design to communicate system behavior

    Visual design and animation are powerful mechanisms for communicating how systems work. Giving users insight into messaging systems could increase demand for privacy-preserving systems. For example, Facebook’s messaging interface uses an animated graphic to let users know that someone is typing a comment, thereby making an invisible action visible. What kinds of new graphics could convey that a message has been encrypted? Or delivered? Or show where the data are stored?

    Screenshot of a notification from Facebook that reads: A friend is writing a comment...
    Facebook uses an animated graphic to indicate when someone else is typing a comment

    Many apps, including WhatsApp and Apple’s iMessage, use read receipts to reveal that someone has seen a message. These simple visual vocabularies are powerful in communicating how the system works. iMessage displays the words “Read” and “Delivered” to indicate status, while WhatsApp uses check marks to show that a message has been delivered and read.

    Screenshot of the WhatsApp chat interface
    WhatsApp’s two check marks indicate that the message has reached the recipient’s device

    Read receipts demonstrate how UX can change and normalize new behaviors. In the past, users who didn’t have time to respond to text messages could still read them. Now, users who don’t know that they can disable read receipts may avoid opening WhatsApp until they have the time to respond. This way, they can maintain plausible deniability for a slow response time. Designers can help users develop behaviors that protect their security and privacy in a similar fashion.

    For example, the Google Chrome team takes UX seriously and has done research on how the visual design of browser warnings impacts user behavior. The browser currently displays a neutral “information” icon for sites that don’t support encryption, or that have problems with their encryption. The team has announced that this will change to a scarier, more prominent warning icon next month; they are using this as an opportunity to engage users in understanding risks to their security.

    Screenshot of the Chrome browser showing an icon in the URL bar which consists of an i with a circle around it. A menu extends from the icon and contains a variety of information about the webpage.
    Chrome currently warns users that they are visiting an insecure site with an information icon

    Security and positive design

    Security teams often have a reputation for saying “no” because they think that the best way to protect users is to limit their behavior. But a good UX can also impress the benefits of security upon users through an affirmative approach. Non-technical UX skills such as brand strategy, copywriting, and visual design all have pivotal roles to play in helping people protect their privacy.

  • Fighting phishing in the browser: Security for designers

    My previous posts (part one and part two) explored what phishing attacks are and ways that designers can help prevent their products from becoming a target. In this post, I’d like to examine some more technical countermeasures. If you’re a designer interested in fighting phishing, this can be useful background information, and it can help prepare you for discussions with your more technical teammates. I also hope this post will highlight that current technical solutions alone are not enough to help users fight phishing.

    How browser companies fight phishing

    Web-browser companies work hard to fight phishing. Services such as Google’s Safe Browsing initiative provide a continually updated catalog of probable phishing sites and help users of Chrome, Firefox, and Safari avoid them. These browsers pop up a warning message when users navigate to a site in the catalog. Some anti-virus companies provide software that performs a similar function. These services work best against phishing sites that have been around for a few hours or days but are less effective against ones that just launched or that target a limited number of high-value users (such as the spear-phishing attacks I described in my first post).
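
As a privacy-preserving detail, Safe Browsing’s update-based protocol doesn’t ship browsers a list of plain URLs. Clients keep a local set of short hash prefixes of known-bad URLs and only ask the server for full hashes when a visited URL’s hash matches a prefix, so the server never learns most of what you browse. A simplified sketch of the local check (the URLs and prefix set here are invented for illustration, and real clients canonicalize URLs first):

```python
import hashlib

def url_hash_prefix(url, prefix_len=4):
    """First bytes of the SHA-256 digest of a (pre-canonicalized) URL."""
    return hashlib.sha256(url.encode()).digest()[:prefix_len]

def needs_full_check(url, local_prefixes):
    """True if the URL's hash prefix appears in the local blocklist snapshot,
    meaning the client should fetch the full hashes to confirm a match."""
    return url_hash_prefix(url) in local_prefixes

# Hypothetical local snapshot containing one suspicious URL's prefix
bad_url = "http://phish.example/login"
local_prefixes = {url_hash_prefix(bad_url)}

print(needs_full_check(bad_url, local_prefixes))                  # True
print(needs_full_check("https://example.com/", local_prefixes))   # False
```

A prefix match is only probable cause, not proof: many URLs share a 4-byte prefix, which is exactly why the follow-up request for full hashes exists.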

    Screenshot of an eye-catching browser warning.
    An example of Firefox’s phishing warning for a site that was registered on the Safe Browsing blacklist. Adapted from this image by Paul Jacobson, which was released under a CC BY-NC-SA 2.0 license.

    In considering the browser’s efforts to protect users, one common misconception is that the lock icon in the URL bar communicates the authenticity of a website. For example, some people might think that a lock next to a URL containing the word “Amazon” means that you’re viewing a page legitimately owned by Amazon. In fact, the lock symbol is meant to convey whether the connection between your computer and the web server is encrypted. It’s entirely possible for the creator of a phishing site to set up encryption on a bogus site, so relying on the presence of a lock icon alone can’t keep you from falling for an attack.

    Image of a green lock and https prompt.
    While reassuring, a lock icon in the URL bar of a browser does not necessarily mean that the site in question is legitimate.

    Although the lock itself isn’t necessarily meaningful in the fight against phishing, the information you get when you click on it can be – if you know what to look for. In most modern web browsers, clicking on the lock will show you security details about the site, including information about its SSL certificate. This certificate includes the organization’s name, its location, and the website(s) affiliated with it. In theory, these certificates are issued to an organization only after a certification authority such as Symantec or Entrust verifies these aspects of its identity. When the identity-verification process works well, it means that someone pretending to be Amazon.com, Inc. and located in Seattle, WA will be prevented from getting an SSL certificate tying their website to that company’s name and location.
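
The same certificate fields your browser shows behind the lock icon can be read programmatically. This sketch uses Python’s standard ssl module, which validates the certificate chain against your system’s trusted roots before handing back the peer certificate (the host name is just an example):

```python
import socket
import ssl

def peer_certificate(host, port=443):
    """Connect over TLS and return the validated certificate as a dict of fields."""
    context = ssl.create_default_context()  # verifies chain + host name
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

cert = peer_certificate("example.com")
print(cert["subject"])   # who the certificate was issued to
print(cert["issuer"])    # which certification authority issued it
print(cert["notAfter"])  # expiry date
```

If validation fails (say, a self-signed or mismatched certificate), wrap_socket raises ssl.SSLCertVerificationError instead of returning, which is the programmatic analogue of the browser’s warning page.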

    Screenshot of a SSL certificate, composed of data fields with values indicating that the certificate belongs to the Bank of America.
    The SSL certificate I received when viewing Bank of America’s website. It is issued by the Symantec Corporation’s “certification authority” and has a specific assurance level.

    In practice, the process can be messy and subject to corruption or subversion. This was the case in 2011, when the webmail of up to 300,000 Iranians was compromised after a certification authority was hacked. By issuing fraudulent SSL certificates, the attackers were able to convincingly impersonate domains such as google.com, compromise a number of Iranian users’ credentials, and spy on them. Even sophisticated users were fooled.

    When the classic certificate-based system fails, there are newer lines of defense such as key pinning and certificate transparency. Key pinning is a browser feature to verify that the SSL certificates for a company's sites are actually issued by the certification authorities that the company uses.
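
The core idea behind pinning can be sketched in a few lines: record a fingerprint of the certificate you expect a host to present, and refuse the connection when the presented certificate doesn’t match, regardless of which authority signed it. (Real key pinning fingerprints the certificate’s public key rather than the whole certificate; the bytes below are dummies purely for illustration.)

```python
import base64
import hashlib

def fingerprint(cert_der):
    """Base64-encoded SHA-256 digest of a DER-encoded certificate."""
    return base64.b64encode(hashlib.sha256(cert_der).digest()).decode()

def pin_ok(cert_der, pinned_fingerprints):
    """Accept the connection only if the presented certificate matches a stored pin."""
    return fingerprint(cert_der) in pinned_fingerprints

expected = b"-- DER bytes of the certificate pinned at build time --"
fraudulent = b"-- DER bytes of a certificate from a compromised authority --"

pins = {fingerprint(expected)}
print(pin_ok(expected, pins))    # True
print(pin_ok(fraudulent, pins))  # False: rejected even if a trusted CA signed it
```

This is why pinning catches the kind of attack described above: a fraudulently issued certificate chains to a trusted authority, but its fingerprint still fails the pin check.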

    In the case of the 2011 webmail compromise, there is evidence that Google was able to detect the attack because it monitored error messages generated by Chrome's key pinning feature. Certificate transparency is another approach to protecting against malicious or compromised certification authorities. It creates a public, auditable record of the certificates that are issued. Since it’s an independent service that doesn’t depend on any particular web browser, it makes it possible for anyone with the technical know-how to monitor the certificate infrastructure and detect when something fishy is going on.

    Although the certificate system was designed to help users verify the authenticity of a website, it is not very accessible to the average person. As the above figure of a certificate shows, it’s difficult to communicate a digital certificate’s contents in a way that is meaningful to non-experts.

    Instead of relying on users to manually check digital certificates, many web-browser teams are now trying to surface certificate information more proactively. One recent trend is putting certificate information directly in the URL bar. While its presence can confirm that a site is legitimate, its absence does not alert users when a site is fake, in part because users don’t understand what the information is trying to convey in the first place.

    Screenshots from three different web browsers.
    Screenshots of the URL bar in Firefox, Chrome, and Safari (in descending order). Each browser has a slightly different way of signifying the presence of a high-assurance SSL certificate.

    Phishing is not an issue of “stupid users”

    Although browsers work hard to help users protect themselves from phishing attacks, many of the mechanisms in place are not useful for non-expert users. As my first post discussed, phishing attacks are growing ever-more sophisticated. They target victims with carefully crafted messages that reference specific cultural touchpoints to put them at ease. Thus, it’s not surprising when even savvy people fall prey to a well-designed attack, especially if it takes advantage of a particularly stressful moment or situation.

    That’s why I find it so frustrating when so much anti-phishing advice is focused entirely on the behavior of the would-be victims. For example, given how today’s dynamic email content works, tidbits like these are often not practical:

    “Never use links in an email to connect to a Web site. Instead, open a new browser window and type the URL directly into the address bar.” – Advice from Norton

    I recently received a message from NPR and wanted to give them feedback on their NPR One app via the link they provided. If I followed the above advice literally, I would have to type the following URL into my browser before I could contact NPR!

    Image of a long and complicated URL.

    This example highlights that the “type the URL into the web browser” method is outdated and impractical, especially as our society moves toward mobile form factors. Unfortunately, most people attempting to follow this one piece of advice are likely to get frustrated, feel overwhelmed, and end up altogether ignoring information about fighting phishing. This highlights a hard problem: It’s difficult to give advice to non-expert users that is both accessible and useful.

    The UX-research community needs to do more to understand what kind of anti-phishing advice is actually helpful to end users and what mechanisms for conveying it are the most effective. This, combined with continued work on the technical and design side, will help keep the threat at bay.

  • Blink and you’ll miss it: Notifications in an AI world

    I’ve been enjoying the videos from AI Now, an exploration of artificial intelligence and ethics hosted by the U.S. White House and NYU’s Information Law Institute. Co-chairs Kate Crawford and Simply Secure co-founder Meredith Whittaker put together a program focused on issues of social inequality, labor, and ethics in artificial intelligence.

    AI inspiration

    Looking at the program through a UX design lens, there were abundant design opportunities to make AI systems more effective, transparent, and fair. For example, Human-Computer Interaction pioneer Lucy Suchman called for the demystification of artificial intelligence in the video below.

    Lucy Suchman’s presentation from the AI Now plenary begins at 27:30.

    Suchman also showed two photos side-by-side, one of an elderly woman with her caregiver and the other of an elderly man with a robot caregiver. She described the ways in which the human caregiver dynamically orients herself to the elderly woman whereas the robot cannot do the same. To me, her example illuminated how a human-centered design approach could offer improvements in the way robot caregivers interact with their human patients. One way to put Suchman’s observations to work is to design the robot with longer arms so that the man doesn’t need to lean so far forward to reach it.

    Notifications as a UX challenge

    Beyond hardware, there are many design opportunities for software as well, one of which is the ethical design of on-screen interfaces for AI interactions. I encourage you to read the entire AI Now summary report. In particular, the following recommendation stood out as a clear opportunity for design.

    Support research to develop the means of measuring and assessing AI systems’ accuracy and fairness during the design and deployment stage. Similarly, support research to develop means of measuring and addressing AI errors and harms once in ­use, including accountability mechanisms involving notification, rectification, and redress for those subject to AI systems’ automated decision making.
    – from The AI Now Report: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term (emphasis mine)

    Notification is critical to helping people understand how their data are accessed and used, but the UX of notification is a hard design problem. On mobile phones, managing notifications is complex. Let’s begin with a few relatively inconsequential examples from everyday apps.

    iOS notifications from food delivery service Foodora and language learning app Duolingo
    iOS notifications from food delivery service Foodora and language learning app Duolingo

    These notifications sit on the border between being useful and being spam. My phone is set up to get very few notifications, but when I was experimenting with location-based services, I was flooded with spam notifications. Frequent notifications that [someone I barely know] is going to [an event I don’t care about] at [a time when I’m preoccupied] were frustrating.

    Even without sophisticated context detection, these apps seem to be using the intersection of my past behaviors and the time to make inferences. For example, I might flinch when Foodora thinks that I’m at home and craving pizza at 8 P.M. on a Saturday night, but I can’t dispute that my order history makes it a reasonable assumption.

    A better example of accounting for past behavior when designing notifications is Duolingo’s reminder feature. You can set reminder notifications in Duolingo, but if you don’t log in after receiving alerts over several days, Duolingo will cease to send them. Duolingo’s designers predicted that I might ignore the alerts and then designed for it. Many other notification senders could learn from Duolingo’s approach.

    Alert from Duolingo, saying: These reminders don't seem to be working. We'll stop sending them for now.
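    Duolingo’s actual logic isn’t public, but the back-off behavior described above can be sketched in a few lines. This is a hypothetical illustration; the threshold of 7 is a made-up value.

```python
# Hypothetical reminder back-off rule in the spirit of Duolingo's behavior:
# stop sending alerts once the user has ignored several in a row.
IGNORED_THRESHOLD = 7  # illustrative value, not Duolingo's real number

def should_send_reminder(consecutive_ignored):
    """Return True while the user still seems receptive to reminders."""
    return consecutive_ignored < IGNORED_THRESHOLD

def record_outcome(consecutive_ignored, opened_app):
    """Update the ignore streak after a reminder goes out."""
    return 0 if opened_app else consecutive_ignored + 1
```

    The key design choice: a user who opens the app even once resets the streak, while a user who never responds eventually stops getting alerts instead of being nagged indefinitely.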

    UX for user control

    Notifications are invitations to interact with systems. They suggest that an app or process needs the user’s attention. Consider the iPhone’s settings for managing alerts. On a per-app basis, users can control which kinds of notifications they want to see and how they want to see them. At first glance, this may seem like fine-grained control, but this style of per-app notification management is already reaching the limits of its utility.

    iOS notifications settings. Left: list of applications that generate notifications. Right: controls for an individual application.
    iOS notifications settings. Left: list of applications that generate notifications. Right: controls for an individual application.

    Instagram notifications for popular accounts expose how the current UX can’t scale. Notifications work fine as long as an Instagrammer has a bounded number of followers, but the below video shows how overwhelming notifications can become when an account has 8 million followers. The account holder behind @433, a popular soccer Instagram account, captured what it’s like to receive so many notifications. Since posting the video, @433 has accrued another 4 million followers.

    Overwhelming Instagram alerts for accounts with many followers demonstrate challenges in scaling notifications.

    Towards UX for AI

    In my examples, getting alerts to order food, use a language app, or interact with Instagram followers may be annoying, but the consequences of ignoring them are relatively minor. Skeptics might suggest management strategies such as not using annoying apps or having fewer followers, but people will have no choice but to use some AI systems. For some applications such as credit scoring or medical care, the consequences of ignoring a notification may be drastic, even life-or-death. As part of a larger ecosystem, poor app notification practices by less “consequential” apps threaten the utility of notifications overall. If your notifications feed is crowded with spam about your Instagram followers, you’re bound to miss a critical health alert or two.

    Privacy, ethics, and consent come into play when considering scenarios such as how to alert people to automatic facial recognition in public places. Just as we have “video surveillance in use” signs, a design suggestion I have encountered is for cameras to broadcast their presence to anyone within their recording radius. The alert could also invite people to interact with the recorded data. There are already numerous challenges across law, economics, and hardware manufacturing that make this unrealistic. Even if we set those aside, when drones with cameras are as small as bugs, a person could be on tens of thousands of cameras at once. It would trigger a notification nightmare akin to Instagrammer @433’s video.

    Mobile alert mechanisms aren’t designed for users to interact comfortably with a high volume of notifications. Without better UX, people can’t be effectively notified of how their faces are being recognized and the different ways their data are being used. What if they are misrecognized once in a thousand times? How would they wade through all the messages to identify and rectify the error? A poor UX will limit people’s ability to hold systems accountable.

    Without better notifications, people will not be able to identify misjudgments by automated systems and redress them, and this has potentially devastating personal consequences.

    Challenges for tomorrow’s designers

    The social, economic, and ethical implications of AI systems are critically important, and as designers, we must push the frontiers of UX to accommodate these new and proliferating systems. UX for AI is an emerging field, and we need to reimagine training and support for designers who are building their professional practice in this area. What projects are doing a good job at UX for AI or notifications in general? Let us know by email or tweet us @simplysecureorg.

  • How to fight phishing: security for designers

    My last post examined the concept of phishing, a type of social-engineering attack that cons people into divulging private information like passwords or credit card numbers.

    When you look for advice on how to protect against phishing, most of what you’ll find is tired wisdom such as “check the email carefully” or “never click on links in emails.” This type of advice assumes that the burden is entirely on would-be victims to protect themselves. While there are important steps that everyone should take, design and security professionals must do more than simply blame users.

    In this post we’ll examine some things that you can do to fight phishing and help your users develop healthy security habits. In a future post, we will explore some of the technical ways web browsers do the same.

    How designers and front-end devs can help

    Phishing isn’t just a technical problem, it’s a human problem. Here are some concrete human-centered tips for fighting phishing.

    1. Consistently polish your designs. One simple action is to make sure that the UX of your organization’s emails and websites is consistently polished. While this is not foolproof (it’s easy for a phisher to hire an unethical designer to replicate what you do), polished designs raise the bar for phishers and make it easier for users to recognize poorly crafted impersonations. Make sure that not just the welcome email and the homepage are refined and on-brand but also lesser-used designs such as those for password resets. Defining and adhering to a formal style guide can help you maintain consistent polish.

    Mockup of two password-reset pages, one that has a polished design and one that does not.
    It’s harder for users to fall for a phishing attempt if they’re used to seeing refined designs everywhere (left) than if they are used to seeing the occasional page whose design has been neglected (right).

    2. Consider the habits you encourage. Many social-network companies use email updates to increase user engagement. These emails can unintentionally condition users to click on links that prompt them to sign in. For example, Facebook sends me an email whenever someone tags me in a post or photo (it’s possible that newer users of the service have to opt in to these notifications), and this has habituated me to seeing notifications in my inbox. It would be easy for a phisher to reel me in by crafting an attack based on this message. If the linked page mimicked the real Facebook site, I might end up falling for the attack.

    An example email notification from Facebook.
    Email messages like these train users to click on links and sign in to their account on the web page that pops up. Phishers can easily craft a message that takes advantage of this habit.

    3. Tell users what you don’t do. Emails from financial institutions frequently append anti-phishing boilerplate to their emails. While these additions communicate important information about things that the bank will never do (in particular, ask for sensitive information over email), few users probably ever read or learn from them. Work with your design team to brainstorm ways to communicate this information more clearly. For example, would it make sense to send a brief and engaging email to your users on an annual basis? Are there opportunities to share similar information at key points in your users’ workflows, such as after they’ve reset their password? Or are there ways to make the email footer more eye-catching?

    Sample boilerplate text from TD Bank and Bank of America emails.
    Boilerplate anti-phishing text from TD Bank and Bank of America, followed by a potentially more eye-catching alternative.

    4. Make user-friendly URLs. For people equipped to assess the authenticity of your site, URLs are a valuable signal. Try to identify the URLs that people are likely to link to, and make them short and reasonably easy to decode. People have a harder time evaluating long URLs when they wrap to multiple lines or scroll off the screen in a browser window. For example, if you’re trying to track click-throughs on your most recent email campaign, consider a system that lets you use a short, readable link rather than a long string of tracking parameters.

    5. Don’t use shady alternative domains. If your organization’s main website lives at its primary domain, don’t send your users to unfamiliar third-party domains. Using third-party vendor domains can train users to expect unfamiliar websites as part of your communications and make phishing attacks with shady domains seem more credible. The cost savings associated with using these domains may be outweighed by the frustration that your users experience from getting phished.

    6. Support two-factor authentication. Allowing users to configure two-factor authentication using their mobile phones is the single strongest protection that you can offer. Two-factor authentication requires users to sign in with both their password and a one-time code either generated with a smartphone app or received by SMS. (Note: members of the security community consider app-generated codes to be much more secure than those received over SMS, especially when users live in countries where the cellular network may be surveilled.) Even if users fall prey to a phishing attack, the attacker will not be able to use their password to access their account at a later date. It can take a significant investment to support two-factor authentication, but the protection it offers is unparalleled.
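    As a rough illustration of how app-generated codes work (the approach is standardized as TOTP in RFC 6238), here is a minimal sketch using only the Python standard library. The secret below is the RFC’s published test value, not something to use in production.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute a time-based one-time password (RFC 6238, SHA-1)."""
    key = base64.b32decode(secret_b32)
    t = time.time() if for_time is None else for_time
    counter = int(t // step)          # number of 30-second windows elapsed
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" encoded in base32)
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret))  # the 6-digit code a user would type with their password
```

    Because both the server and the user’s phone derive the code from a shared secret and the current time, a password captured by a phisher is useless on its own a few minutes later.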

    These are just some ideas of how you might protect your users from phishing. Happily, you aren’t alone in the anti-phishing fight; there’s a lot going on in users’ web browsers, too. I’ll explore some of this work in my next post.

    In the meantime, tell us about your experiences. If you’re a designer, what have you been doing to fight phishing? Do you have a particularly egregious example of a third-party vendor site? Share it with us on Twitter or in our Slack channel.

  • What ‘90s London raves can teach us about infosec

    One of the highlights of HybridConf 2016 was hearing writer Stevyn Colgan talk about his time as a police officer at London's Scotland Yard. He entertained the audience of UX designers and front-end developers with stories from his book, Why Did the Policeman Cross the Road?. As someone who is concerned about the state of policing (in line with recent protests in the United States), I did not expect to be impressed, but Colgan's design-thinking approach to crime prevention took me by surprise.

    Design Thinking + Policing

    Colgan was a founding member of the Problem Solving Unit, which operated differently from the rest of Scotland Yard. Instead of solving crimes, they made it their duty to prevent them. Colgan didn't use dystopic tools to identify future criminals. Rather, his team borrowed techniques from cognitive science, marketing, urban planning, and other fields to consider the influence of environmental factors. It is this holistic approach - contemplating physical, technical, and social systems - that makes him a design thinker.

    Colgan shared many stories about his 30 years with the police force, and a few of them were particularly relevant to the security crowd. Information security is about keeping unauthorized people from accessing sensitive content, so in a sense, infosec overlaps with law enforcement in its commitment to crime prevention. Instead of only taking a classic defensive-security stance, borrow from Colgan's Problem Solving Unit and find inspiration by thinking like a creative cop. Here are a few pieces of advice from Colgan's stories:

    • Make your stuff less attractive - Something as simple as covering a motorcycle decreases the likelihood that it will be stolen. The added friction of needing to uncover the motorcycle will redirect thieves to other more-accessible targets nearby.

    A covered motorcycle is less attractive to thieves than uncovered motorcycles nearby (from Stevyn Colgan's Hybrid Conf talk)
    A covered motorcycle is less attractive to thieves than uncovered motorcycles nearby (from Stevyn Colgan’s Hybrid Conf talk)

    • Identify the weakness - In many enclaves, trash collection happens on a set day. Residents wheel their garbage bins to the curb and bring them back after they've been emptied. In Colgan's city, the only distinguishing factor across these bins is the owner's house number scrawled on the side. Uncollected bins signal that people aren't home; with one glance, thieves can deduce which houses would make the best targets for daytime break-ins.

      After uncovering the garbage bin problem, the Problem Solving Unit settled on a social engineering solution. Colgan's team organized neighborhood meet-and-greets so that residents could come up with a plan to wheel one another's bins in if their neighbors couldn't wheel theirs in right away. The result was a dramatic decrease in daytime break-ins.

    Garbage bins are identical except for the house number written on the side (from Stevyn Colgan's Hybrid Conf talk)
    Garbage bins are identical except for the house number written on the side (from Stevyn Colgan’s Hybrid Conf talk)

    • Constantly adapt your techniques - In the 1990s, London was a center for raves. While these gatherings were a mainstay of cultural life for many people during that era, the police considered them to be dangerous because of illegal drug use, sexual assaults, and overcrowding in the case of fire. Before the internet, people relied on posters to learn when and where raves would be held. The Problem Solving Unit made it difficult for promoters to attach posters by adding diagonal braces to walls, which meant that fewer people learned of the raves. Inclement weather played a role, too. The posters were easily damaged when it was wet or windy because they were posted on uneven surfaces.

      In response, determined promoters hung angled posters specifically designed to fit between the diagonal braces. The police came back with an inexpensive solution: They covered the time and place on the posters with "cancelled" stickers, and attendance continued to go down.

    Diagonal bracing made it more difficult for promoters to attach posters to the wall (from Stevyn Colgan's Hybrid Conf talk)
    Diagonal bracing made it more difficult for promoters to attach posters to the wall (from Stevyn Colgan’s Hybrid Conf talk)

    Implications for infosec

    Colgan's stories of social engineering drew on observations of human behavior and environmental signals, and the Problem Solving Unit's successes can be applied to infosec UX. Key takeaways include:

    • Basic precautions are good enough for most people
      Withstanding a targeted attack by a powerful adversary is difficult, but deflecting crime is easier. Just as covering your motorcycle redirects attention, simple deterrents can save your data from harm.
    • Look with fresh eyes
      Identical garbage bins are unremarkable features in many landscapes because they're so common. Thinking like a designer means looking past the surface and seeing what can be tweaked. Removing bin numbers - the superficial solution - would have been a complex and impractical response, but nudging people to change their behavior worked just as well.
    • Consider the entire user journey
      Rather than focusing only on undesirable behavior at raves, Colgan mapped the entire user journey from the very moment that people learn of a rave. By looking for the starting point, Colgan's team came up with the clever solution to use "cancelled" stickers.

    I was surprised to find a police officer at a design conference, but Colgan's stories demonstrate that a design mindset always has a place, and technical problems don't always need technical solutions. Colgan's solutions may have been in plain sight, but they were elegant. Instead of signaling a lack of originality, tactics like the "cancelled" stickers are markers of success.

    Sometimes, the best adjustments are so trivial that we overlook or discount them. When crafting new technologies, what simple solutions have been sitting in front of you, waiting to be discovered?

  • One Phish, Two Phish: Security for designers

    Most people who spend time online have a general idea of what "phishing" is, but it can be hard for folks outside of the security community to pin down an exact definition. Understanding the threat that phishing attacks pose can help designers and other UX experts become effective advocates for experiences that protect users. In this post, we will explore the basics of how phishing attacks work, and in a follow-up post, we will examine some of the mechanisms that protect users against them.

    Phishing is social engineering

    As of this writing, Wikipedia defines phishing as "the attempt to obtain sensitive information such as usernames, passwords, and credit card details (and sometimes, indirectly, money), often for malicious reasons, by masquerading as a trustworthy entity in an electronic communication."

    Image defining phishing as an attempt to obtain sensitive information such as usernames, passwords, or credit card details by masquerading as a trustworthy entity in an electronic communication.
    A definition of the term “phishing”, adapted from Wikipedia.

    What does this really mean? Implicit in this definition is the idea that phishing attacks target people; they are an example of what security experts call a social engineering attack. This is in contrast to many of the other digital threats we hear about, such as exploits that take advantage of flaws in particular software programs (e.g.: buffer overflows, SQL injection points, or cross-site scripting opportunities) or assaults that take aim at the limitations of a computer system (e.g.: denial-of-service attacks).

    Social engineering attacks are just a modern take on the classic confidence trick, which derives its name from the attacker's methodology of building false confidence – or trust – with the target before attempting to defraud them.

    Image with text defining the term confidence trick as an attempt to defraud a person or group after first gaining their confidence.
    A definition of the term “confidence trick”, adapted from Wikipedia.

    Other examples of social-engineering attacks include advance-fee scams (beware of people claiming to be Nigerian royalty!) and the elaborate scheme that Mary McDonnell's character uses to steal a key card, thus allowing Robert Redford's character to access a locked building in the movie Sneakers.

    A sample attack

    Again, the aim of a phishing attack is to harvest confidential information from users. Let's walk through a hypothetical example to see what this looks like in practice.

    1. Juanita gets an email that looks like it's from Bank of America, saying her password needs to be reset, and it offers a link that allows her to take this action.
    2. Juanita clicks on the link and sees a webpage that looks similar to the one she is used to using.
    3. She enters her username and her password.
    4. The page returns an error, saying that the password she entered is incorrect. Like many people, Juanita has a small number of passwords that she reuses across many sites. She tries a few different passwords, trying to find one that works.
    5. Juanita eventually gives up, clicks the "Forgot Passcode" link, and sees that the site returns an error message asking her to sign in again later.

    At this point, the attackers have probably gotten:

    • Juanita's bank username
    • Juanita's bank password
    • The passwords of several other services Juanita uses, potentially including the one she uses on her email account

    They accomplished this because:

    • They sent a message pretending to be from Bank of America to an actual Bank of America customer
    • They spoofed the "from" address in the email, so it looked like it was really coming from Juanita's bank
    • They created an email that looked and felt similar to the emails Juanita regularly gets from her bank
    • They created a webpage that looked and felt similar to the one she's accustomed to
    • They took advantage of Juanita's uncomfortable relationship with passwords; she wasn't sure that she was typing the right one, so inadvertently shared several others as well

    Image of a mocked-up phishing email and sign-in page.
    A mocked-up phishing email and sign-in page.
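    The “spoofed from address” step deserves emphasis, because it is trivially easy. As a sketch using Python’s standard library (all addresses here are made up), nothing stops a sender from putting any text at all in the From header:

```python
from email.message import EmailMessage

# The From: header is ordinary text chosen by the sender; nothing in the
# message itself proves it is genuine. All addresses below are fictitious.
msg = EmailMessage()
msg["From"] = "Bank of America <security@bankofamerica.example>"
msg["To"] = "juanita@example.com"
msg["Subject"] = "Action required: reset your password"
msg.set_content("We detected a problem with your account. Reset here: https://...")

print(msg["From"])  # displays whatever the sender wrote, legitimate or not
```

    Receiving mail servers can check SPF, DKIM, and DMARC records to catch some spoofing, but plenty of forged messages still reach inboxes, which is why the habits and designs discussed here matter.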

    The practical threats of phishing

    For their attacks to be successful, phishers must create an environment where people feel comfortable sharing confidential details. Attackers harvesting credit card numbers might create a fake version of a popular online retailer or a government website to collect social security numbers.

    Email credentials – the username and password you enter when you sign in to your email account – are a particularly juicy target because most sites use email as a password-reset mechanism (for example, when you click the "Forgot your password?" link on a typical sign-in page, the site sends a code to your email account as the first step in resetting your password). Thus, attacks against your email account are about more than getting access to your email messages; they're about using your email account as a jumping-off point to get access to the rest of your digital life, too. In this way, a phishing attack may be just the first step of a multi-stage attack.

    Similarly, if you reuse passwords from one account to another, a successful phishing attack against one account can easily end up escalating into something more serious. Where possible, try to use unique passwords for your high-value accounts and consider using a reputable password manager that isn't based in the cloud, like 1Password (stay tuned for my next post on phishing, where I will explore this and other defensive mechanisms in more detail).

    Advanced attacks

    Early phishers focused on compromising a large number of random accounts, but their attacks quickly evolved to become more targeted. Rather than send emails to a million accounts, attackers will sometimes do meticulous research and craft a message specifically designed to appeal to the staff of a particular organization. This message might be designed to look like the sign-in page for the organization's internal web portal or for its health insurance provider. If the attacker is an employee or knows someone who works at the organization, they may reference information that only insiders would know.

    For example, imagine that Hamidou works for Collective Insurance of Brooklyn, a large company that conducts much of its business online. He recently started working there and is still learning how to navigate the company's employee benefits website, which he thinks looks very outdated. This benefits site is managed by a firm called BenefitsDigital, which specializes in benefits management but hasn't updated the styling on its site for a long time. While accessing the site during new-employee orientation, Hamidou noticed that it has a strange third-party URL, but his HR representative explained that this is to be expected because the site is hosted by the benefits administration company.

    An attacker saw Collective Insurance of Brooklyn listed on BenefitsDigital's "Our Clients" page, and used a LinkedIn search to discover that Hamidou and a few other people started working there recently. Further sleuthing revealed that most employees at the company have email addresses of the form firstname.lastname@. From there, it was easy for the attackers to send this small group of employees a customized message that looked like it came from BenefitsDigital. The message tells them that they need to sign in to the website and perform their quarterly benefits review to prevent a discontinuation of their 401(k) matching, and helpfully reminds them that they are eligible for a $200/month commuting reimbursement.

    Hamidou clicks on the link in the email, which brings him to a sign-in page. His eyes skim over the long URL, and he doesn't notice that the site is hosted on a lookalike domain the attackers set up to mimic the legitimate benefits administrator, not on BenefitsDigital's real domain. The website is just as weirdly outdated and buggy as ever, so nothing seems out of place to Hamidou. When he tries to sign in, he gets an error message saying that the site is down for maintenance. He decides he will try again a few days later.

    Image defining spear phishing as a personalized phishing attack against a high-value target, either an individual or an organization.
    A definition of the term “spear phishing.”

    Attacks like this example are not just believable but increasingly common. As you might imagine, a customized message that references an organization's cultural touchstones is less likely to set off alarm bells for its victims and has a higher probability of success. If the phisher's goal is to gain access to the organization's internal systems, a customized attack can be successful if just a single employee bites.

    Even more sophisticated attacks combine inside knowledge with a sense of social pressure by personalizing emails to individual targets and making it seem like the message comes from a senior member of the organization, such as its CEO (apparently, this kind of attack is now called "whaling"). If you're a lowly payroll processor and you get an urgent email from someone six levels above you in the corporate hierarchy – on the day when the rest of the department is at a retreat! – it can be hard to keep your cool and tune in to the possibility that the inquiry may not be legitimate. And even if you do get a sense that the request may not be legitimate, how do you verify your hunch without simultaneously insulting a bigwig and torpedoing your career?

    Phishing is about confidence

    As I remarked before, phishing is just one modern take on the idea of a confidence game. A successful attack depends on the user developing confidence that the request for their information is legitimate. Social pressures, such as in the whaling example, can make it hard for users to see an attack for what it is. So can general discomfort with computers or a lack of experience dealing with sensitive information.

    Still image from the movie Catch Me If You Can, picturing Leonardo DiCaprio dressed as an airline pilot surrounded by eight young women dressed as stewardesses.
    The 2002 movie Catch Me If You Can publicized Frank Abagnale Jr.’s adventures as a young confidence trickster. Frank (played by Leonardo DiCaprio) is the epitome of the term “confidence man,” or “con man” for short.

    It's important to note that falling for a phishing attack does not indicate any kind of failing in a person's intelligence. The skills we've evolved over millennia for developing trust in other human beings – evaluating appearance, behavior, and familiar patterns – do not serve us well in a digital context. The channels we have for receiving trust information from computers, such as the visual design of a website or the "from" header of an email, are simply too easily spoofed. When push comes to shove, if you create a faithful replica of a well-known website, you will find people who will trust it based on its visual design alone.

    In a future post, I will review some of the mechanisms that exist to help users and organizations protect against phishing attacks and explore ways that designers can contribute safeguards through their products. In the meantime, do you have a favorite story about phishing or other forms of social engineering? Connect with us on Twitter and tell us all about it!

  • Learning from Drones

    Last week, I encountered discussions of drones in two wildly different contexts: an academic presentation at USENIX Security 2016 and the TV comedy Portlandia. Though distant in genre, the two offer different perspectives with equally important UX implications for privacy preservation.

    In the opening keynote of USENIX Security, Dr. Jeannette Wing examined the trustworthiness of cyber-physical systems: engineered systems with tight coordination between the computational and physical worlds. Her examples included the Nest thermostat and the Apple Watch, products whose user experiences are well known to UX designers. To a designer like me, her concrete, design-based examples set an inviting tone for such a technical conference.

    Drones from a technical security perspective

    Dr. Wing spent time exploring points of vulnerability where the integrity of systems such as drones could be compromised. This uncertainty needs to be managed at multiple levels. For example, drones need to handle unexpected atmospheric conditions and sensor malfunctions while still operating safely. However, when flying and data collection already draw on limited battery life, there's barely power left for anything else. Dr. Wing called for securing the full set of software running on IoT devices, from low-level device identity in hardware through secure boot and storage up to encrypted communications and secure configuration.

    Slide from Jeannette Wing's keynote at USENIX Security 2016.
    Slide from Jeannette Wing’s “Crashing Drones” keynote at USENIX Security 2016.

    But preserving privacy is more than just a technical stack; the challenges that users have when operating drones contain UX lessons for privacy preservation. For that, we turn to Portlandia, which picks up where Dr. Wing left off.

    Drones in Pop Culture

    In the "Pickathon" episode of Portlandia, the two protagonists use drones to experience the Pickathon music festival virtually. By flying their drones to the front row of the concert, the two operators can see and hear everything from their couch. Although they manage to avoid long lines and smelly port-a-potties, their impact on the physical world is still felt: They injure other concert-goers—intentionally or not—as they navigate the festival.

    Fake advertisement in the Portlandia Pickathon episode for drone rentals.
    Fake advertisement in the Portlandia “Pickathon” episode for drone rentals.

    The plot points of the sketch unpack UX for privacy in a humorous way:

    • Bystanders don't know who's operating the drones or why
      It's unclear at the beginning that remote concert-goers are in control of the drones and that they have nothing to do with the official music festival.
    • Bystanders don't know what data is being collected
      In the sketch, concert-goers assume that the drones contain cameras but are uncertain whether audio and other data are also being captured.
    • Subjects under observation are unsure of how to interact with remote operators
      The operators can broadcast their voices through the drones, and one annoyed concert-goer is surprised to discover that "this thing can talk." The operator taunts the concert-goer into a fist fight, during which the concert-goer curses the absent operator while getting beaten up by the drone.
    • Bystanders and subjects don't know where the drone operator is located
      When the freshly bruised concert-goer breaks open the drone, he sees a label with the operators' address. He shows up at their house to beat up the operator who goaded him during the fight.
    • When operating a third-party drone, an external carrier might be able to both monitor and override control of the drone from afar
      At the end of the episode, the bruised concert-goer makes peace with the operators and joins them on their couch. While all three enjoy the concert via drone operation, the entrepreneur who rented the drone to the operators appears in the background. In his quest to make music festivals appealing to music fans over age 40, he had snuck into their home to watch them and gather data points.

    The concert-goer who was cut up by the operator's drone in Portlandia's Pickathon episode.
    The concert-goer who was cut up by the operator’s drone in Portlandia’s “Pickathon” episode.

    How drones challenge us to design for privacy

    Drones are a useful example when considering privacy because they make every actor in the system visible. In contrast to contexts such as mobile messaging on a phone, drones are less abstract because you can see the operator, owner, and subjects under observation.

    Here are two key UX design challenges for people working on IoT applications more broadly:

    • How can embedded sensors disclose the data they are collecting, who is collecting it, and for what purposes?
      Without a service design component, policies such as a hypothetical drone registration service won't help people understand the who, what, where, and why behind a drone's operation. As an analogy, many commercial trucks don't just have a license plate to identify them. They also display decals that ask, "How's my driving? Call 1-800-XXX-XXXX for complaints about vehicle number N." Because of the decal, other drivers know of another channel to learn more and hold truck drivers accountable. Nothing similar exists for drones.
    • What best practices for zero-UI or design-beyond-the-screen can be used to help bystanders interact with drones?
      Dismantling a drone to find a physical street address like in Portlandia may be comedic, but it is neither common nor scalable. With weight and power at a premium on drones, there is no space to add instructions on how bystanders should interact with the drone. One solution is to use bystanders' cell-phone displays, but that approach is also problematic. It is highly unlikely that we can scale the use of mobile notifications to inform bystanders of their rights and the drones' purpose. Even if notifications were mandatory, thousands of IoT devices alerting bystanders through their smartphones would be an unnavigable user experience. We need to explore these issues more deeply.

    Drones lead the way

    Drones are complex cyber-physical systems with poor ability to disclose how they work to bystanders. In contrast to the relatively well-understood domains of email and mobile messaging, privacy-preserving measures for drones are significantly more complicated. Because the challenges for drones are new to both the public and the technical community, they provide an opportunity to engage a mass audience in critical thinking about how we want to interact with these systems.

    Through drones, designers can explore particular challenges such as alert management. By thinking about how multiple parties (owner, operator, subjects under observation, privacy advocates, etc.) want to interact with drones, thoughtful UX design can empower people to manage their privacy in a variety of contexts.

  • Your software can help at-risk people, too

    Web browsers are utility software; they are designed to work for all people. Not only must their features meet the needs of average members of a population, they must also work for people with special needs. As Firefox says on its mobile accessibility features page, the browser has been "designed to meet the needs of the broadest population possible," but "sometimes that is not enough." In particular, software that is built for everyone can too often leave people with specific security or privacy needs at risk.

    As a counterexample, I recently noticed that Chrome's Android app has a small but tremendously valuable feature for people at risk of in-person surveillance. At-risk users might include someone whose phone is subject to regular inspection at government checkpoints, which has been reported to be the case in Syria. It could also be someone whose boss requires them to hand over their phone at the start of each retail-job shift, as Ame learned in her New York City study.

    The feature is part of Chrome's incognito mode, which lets users browse the web without worrying that their device will record the history or cookies from their session. The browser automatically deletes this information when all incognito tabs are closed. Now, on phones running recent versions of the Android operating system, Chrome can push a message to the notification shade and to the lock screen that allows you to easily close all incognito tabs in a single action.

    Images depicting the Chrome notification.
    Chrome’s “Close all incognito tabs” notification, as an icon in the notification bar (left), expanded in the notification shade (center), and on the lock screen (right).

    This hits the sweet spot for features that are incredibly useful for a subset of an app's user population:

    • It's unobtrusive. If you don't use incognito mode, you won't ever encounter or be confused by this feature.
    • It's automatic. If you do use incognito mode, you don't have to take extra steps to enable it; it's on by default.
    • It's simple and elegant. Since it's a notification, it's easily available from anywhere in the operating system. Its functionality is immediately apparent and immediately effective.

    Of course, not everyone is a fan of this feature; a quick web search reveals that some users who keep incognito tabs open for extended periods find this notification annoying. In creating this feature, the designers had to choose whether to build in more protection (having the notification on by default) or less, and they erred on the side of more. Given the goals of incognito mode, I think this is appropriate. That said, Google might address critics' concerns by offering a setting to opt out of the notifications.

    Learning from this example

    While we often cheer the loudest when apps integrate features like end-to-end encryption (e.g., WhatsApp), smaller features can make a big impact, too. This Chrome feature shows that apps designed for a general population can directly help people concerned about their data security in simple, elegant ways.

    Right now, members of the Western technology community often perceive at-risk users as a niche population. It's easy to envision opposition fighters in Syria, activists like Ai Weiwei, or journalists like Laura Poitras, who worked with Edward Snowden to publish his documents. But as you learn more about the prevalence of surveillance and the concerns that people have about intrusions on their private data, you will discover that this group is larger than you think – and it's growing. At-risk users are people who worry about their security, either physical or digital. As online data plays an ever-increasing role in life around the globe, its potential for exploitation by corrupt officials, domestic abusers, and organized crime escalates as well. It's important to get a jump start on thinking about these users now.

    What small features can your project add to help at-risk people? If you need help brainstorming or have released a successful project in this vein, let us know.

  • Respecting participants in privacy-related user studies

    I was in Darmstadt for Privacy and Security Week last week to present Simply Secure's work on ethics in user research at HotPETS. You can check out the paper and slides on GitHub.

    Resources for ethical research

    In 2015, we did a field study that we named Straight Talk: New Yorkers on Mobile Messaging and Implications for Privacy. We have since used it as a case study to demonstrate how to work with study participants. Here is a list of resources for user studies that draws from the case.

    Additionally, our previous blog posts explain how to use participant recruiting screeners, model releases for photography, and the participant's bill of rights.

    Participating in research shouldn't harm people

    The Internet Freedom community is keenly aware of the need to protect sources who speak with journalists, but we tend to overlook the need to protect research participants as well. Here are some questions to ask yourself next time you do a study:

    • Do you need to record audio and video — are they necessary for publishing results — or will handwritten notes suffice?
    • Are the research participants representative of your target user population? Think not only demographics but also attitudes. If your participants are willing to install logging software and be recorded, are they reasonable approximations of your intended users?
    • Site analytics can encroach on privacy. Given this, what alternatives can you consider that will also help you quantify usability?
    • What's the threat model? What could happen to your participants if others find out that they participated in your research?
    • If your organization requires standard language on an informed consent document, are there options to include a second page that's written in human-centered, accessible language?

    Photo of Ame looking at papers with a research participant.
    In this interview, the consent forms and model release forms were visibly laid out on the table.

    Safeguarding participants is important

    It was encouraging at EuroUSEC, PETS, and HotPETS to see so many presenters who shared the results of their user studies. At Simply Secure, we want to assist researchers in approaching their studies mindfully. Please adapt our resources for your use. Our goal is to provide tools that match the needs of a global audience, so let us know where they can use improvement.

  • Don't Let Color Drown Out Your Message

    Last month, I wrote about the importance of visual design for creating compelling software and shared resources for learning about color and choosing a good color scheme.

    I also cautioned readers that using color in moderation "can go a long way toward making your project look professional and credible." Today's post will dissect that advice in greater detail.

    Use color as an accent

    Color is great for logos and accents — it makes a statement, it conveys personality — but it can also overwhelm users when it dominates the main body of the interface. Designers use color to focus people's attention on the most important content, and a little bit can go a long way. Whites, grays, and blacks may seem visually boring when you have a rainbow of options to choose from, but they provide a neutral basis that allows the colorful elements of your UI to stand out.

    Two mockups of a weather newsletter, one with a dark blue background and the other with a white background.
    Newsletters and other information-heavy designs are particularly well-served by light backgrounds. When choosing a color for a graphic element, don’t just select an attractive hue; first examine the importance of the information it conveys and whether it deserves to be highlighted.

    Two mockups of an email sign-in page, one with a bright purple background and one with a white background.
    A white background and gray text in the text fields make the sign-in flow on the right more approachable and professional. This color combination also helps the actionable elements – the button and the “Learn more” link – stand out more. Photograph used under CC BY-SA 2.5.

    Neither too bright nor too dark

    In order to present your user with an engaging and warm experience, try to use light, neutral tones as the background for the majority of your interface. Unlike software developers, who may find light text on dark backgrounds easier to read, members of the general public can interpret dark colors as unfriendly — sometimes threatening — which just makes things harder if you're trying to build software that inspires confidence and trust with new users. Avoid using black or dark gray unless you are confident that your target audience will interpret it as a cultural reference (as is the case when you're catering to hackers) or there is a functional reason for a dark background (e.g. when it's a tool for working with color or when it's an optional customization in a text editor).

    Some developers working with cryptography have told me that they prefer dark backgrounds because they feel it reflects the consequential nature of using the software. However, the general public tends to associate dark color palettes with video games and entertainment, so they are likely to perceive your tool as unprofessional, playful, or game-like. If you're concerned that a bright palette would compromise the gravity of your work, using darker shades as accent colors can communicate a more business-like attitude.

    Two mockups of an email sign-in page, one with a dark background and bright colors, the other with a white background and darker colors.
    Avoid dark backgrounds as a general rule. If you want to convey a serious feeling, consider using darker shades for your accent colors.

    Don't over-saturate

    Highly saturated colors should be used carefully. Saturated colors can suggest liveliness and cheer but can easily go too far and make it painful to look at the interface.

    Two mockups of an infographic with a red background; one uses a very saturated red, the other uses a less-saturated red.
    The graphic on the left uses #FF0000 for its red, which has a saturation of 100%. The graphic on the right uses #FF4040, a tone with the same hue and brightness but a saturation of 75%. The difference is subtle, but it makes the contrast between the colors and the repeated pattern of cat outlines a little less jarring.

    If you don't know how saturated a color is, look at the "S" value on an HSB (hue, saturation, brightness) color picker in an image-editing program. The more saturated a color is, the closer its value is to 100. The Simply Secure color palette is bright and cheerful with a variety of hues but balances this brightness by dampening the saturation.
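    If you'd rather compute saturation than eyeball a color picker, the conversion is easy to script. A quick sketch using Python's standard colorsys module (HSV in colorsys terms is the same model as HSB; the `hex_to_hsb` helper is mine, written for this example):

```python
import colorsys

def hex_to_hsb(hex_color):
    """Convert a #RRGGBB string to (hue, saturation, brightness) as degrees/percentages."""
    # Parse the two-digit hex channels and normalize to 0..1.
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (1, 3, 5))
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return round(h * 360), round(s * 100), round(v * 100)

print(hex_to_hsb("#FF0000"))  # (0, 100, 100): fully saturated red
print(hex_to_hsb("#FF4040"))  # (0, 75, 100): same hue and brightness, 75% saturation
```

    Running it on the two reds from the infographic above confirms that they differ only in saturation.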

    An image of a color picker and an image of the five shades of Simply Secure's color palette.
    To the left, the Sketch color picker displays the values for hue (H), saturation (S), and brightness (B). To the right, Simply Secure’s color palette has been labeled with each color’s saturation value.

    Think expansively and find a friend

    Even people who have spent years studying the subject have a hard time getting color right. When you're working on a new palette or layout, we recommend that you ask a color-savvy friend who is removed from the work to give you feedback. Instead of a single design that they have to give a thumbs-up or thumbs-down to, present multiple design directions and have a conversation about the advantages and disadvantages of each. When possible, repeat this process several times, ideally with different critics.

    Privacy-preserving technology is important, and color can make it approachable to a broad audience. As with many aspects of software, aiming for simplicity and elegance in color is usually a safe bet. If you don't know where to start, try finding inspiration from other technologies that are designed to appeal to a broad audience. Otherwise stick to the basics: use a clean and light background, don't over-saturate, and focus on using color to accent the most important information. This will help you create a design that is approachable, useful, and conveys trustworthiness.

    If you need a friendly pair of eyes to take a look at your latest creation, please consider joining the #design channel of our Slack community! You can request an invitation by emailing us at

  • Illustrated quick-start intro to wireframing

    If you're new to UX design, wireframing is a powerful tool to understand how users experience your software. People with technical backgrounds benefit from wireframing because it forces them to take a step back from their coding mentality. Rather than focusing on the technical architecture, wireframing exposes the user-experience structure: how the user moves from one screen to another.

    Image showing one wireframe with a set of buttons representing options, and another with a drop-down menu of options.
    Example wireframes. Both show the same content organized with two different structures, but the left wireframe is better because it discloses choices rather than keeping them hidden.

    Wireframes are useful because they offer a stripped down, visual experience made up of plain boxes and lines—no formatting, no styling, no graphic design. That's right; improving your UX skills starts with abandoning colors and styles. If you can draw mostly straight lines, you've got what it takes.

    Why wireframe?

    Teams use wireframes at many points during the design process: to conceptualize a design, to critique it internally, and to get reactions from testers. Here are three advantages of wireframing:

    1. Start off right.
      If you're implementing a new feature, start by analyzing how a different app handled a similar challenge. The act of sketching wireframes can trigger different thought processes and impart a deeper understanding than what you would get by just looking at an interface on screen.
    2. Get user feedback faster than with code.
      It only takes an afternoon to complete a set of wireframes. If you're not sure which features need to be implemented first, experimenting with wireframes and seeing how testers react is a smart investment.
    3. Find the problem.
      Is something about your app confusing to people, but you can't figure out how to fix it? Support tickets and pull-requests can only go so far. Sitting down to wireframe some alternatives can help your team untangle its thinking.

    The 90-minute-no-drawing-skills-needed wireframe bootcamp

    Wireframing is a quick way to validate your design. It saves on wasted development time, and anyone can do it. When you are ready to test out a new interface, wireframing will help you drill down to the best UI and help you organize your development priorities.

    You can start wireframing with little more than an hour and some basic office supplies. There are numerous apps such as Balsamiq and Sketch for making wireframes, but doing it by hand is more effective when you're new to the process. Don't waste time mastering wireframe software; spend that time building your UX acumen.

    Materials needed:

    • Paper
    • Black Sharpie marker
    • Grey Sharpie (could be something like a highlighter, too)
    • Different color marker for callouts (traditionally red)
    • Optional: window to the outside for easy tracing

    First, pick the website or app you want to analyze. I picked Twitter's iOS app as the basis for my wireframes in this exercise. Even though I know the app well, taking the time to wireframe it helped me see the app more critically, and what I learned from wireframing will inform my design choices for similar apps.

    Make Paper Printouts

    This tutorial works by tracing an existing app or website. To do that, you need paper printouts. If you're analyzing a website, just print every page. If you picked an iOS app like me, here's one way to print from iOS:

    1. Capture screenshots of the screens you want to work on. I did this on my iPhone by pressing the power and home buttons at the same time. I sent the screenshots to my laptop, which is connected to a printer.
    2. On a laptop, open every screenshot image and merge them into one file. You can do this on a Mac by opening the images in the Preview application, choosing the View -> Thumbnail menu option, and dragging thumbnail images from different files into one thumbnail tray. Now you have a multipage file.
    3. Decide what scale or size works best for your needs.
    4. Below, you can see what the screens look like when printed in the "4-up" setting from the Preview application. This is a good layout for mobile apps because it mimics a small screen and is easy to cut down to size.
      Image of four screenshots printed in greyscale on a single page.
      Image of screenshots in the "4-up" setting.
      Image of several screenshots pinned up on a board.
      Image of pinned-up architecture.
      Cut out each screen to lay out the architecture. The typical way to lay out architecture is to make every item in the top row a global navigation choice. In the top row of my cutouts, I drew red squares around specific global navigation buttons to indicate the current state of the app. Each item underneath the top row represents a substate. In this example, only the "Me" column to the far right has other substates beneath it.
    5. Sometimes it's easier to work with enlarged wireframes so there's plenty of room to write out button names. In this example, I use words instead of icons to indicate what the buttons do.
      If you want to work with an enlarged version of the screen, you can sketch based on what you see on screen, but if you're not used to sketching, you can start by tracing your printouts. You don't need any equipment other than a window to do this. It works best if it's brighter outside than inside. You can see the result of my tracing below.
      Image of the author working with screenshots held up against a window.
      Tracing at a window.
      Image of a traced wireframe.
      Tracing result.

    Wireframing at the right level of abstraction

    Since navigation is the focus of my wireframe, I only use color (gray shading in this case) to indicate what state the UI is in. Use abstractions such as wavy lines or lorem ipsum copy to represent user-generated content. Abstract representations keep testers focused on the navigation.

    Other elements can also cause confusion, even if users are generally familiar with them. When getting user feedback on a Twitter wireframe, things like "Name @handle" can be distracting. To avoid this, use something made up but recognizable, like "Marie Curie @mcurie," if you plan to show that element to testers. If the audience for your wireframe is people who are comfortable with placeholder text, using "Name @handle" probably won't throw them off, and in those cases, that level of abstraction is fine.

    These images show a progression from screen printout at left to the most abstract zones of activity.

    Three wireframes at varying levels of abstraction.
    Levels of abstraction.

    A close-up image of one of the more abstract wireframes.
    A more abstract wireframe.

    The red annotations are callouts that describe what activity happens in each zone, with an emphasis on navigation.

    The next image is even more abstract, and it raises the question: Why are there four different button blocks spread across multiple zones?

    Image of a wireframe that highlights zones of activity.
    Zones of activity.

    This wireframe of the Twitter iOS app really spotlights the trade-offs in navigation. There are four different areas of navigation (why?), and instead of using hamburger, sidebar, or other types of menus, the designers settled on buttons. There are trade-offs here. On the one hand, the buttons are self-disclosing, and it is immediately clear what sort of operations are possible. On the other hand, there's a lot of clutter, and users have to look in multiple places to perform the operation they want.

    To practice wireframing, I suggest:

    • Draw whatever you've been working on, regardless of its state. Much like handwritten notes, even an hour of drawing helps me understand the logic behind the system.
    • Look for a widely admired version of the feature you've been working on. Sketch out a wireframe of what you find and then sketch out your version. Tools that have mass adoption have generally achieved high usability. Wireframing Amazon's shopping cart gives a good understanding of what a successful cart needs. In this case, Twitter is a reasonable place to start if you want to understand the logic behind a complex communication app.
    • Analyze the similarities and differences between the other version and your version of the feature. What considerations underlie those design choices?
    • To set a benchmark for user profiles, you can make a series of wireframes by looking at popular services that have user profiles and using them as points of comparison.
    • Take a look at guides like this one on 10 best practices for wireframing.
    Wireframes Trigger Insights

    Have you ever given a slide presentation only to see a typo that you didn't catch beforehand? Even though you rehearsed many times, the typo didn't leap out at you until now. When it comes to UI, wireframing guards against this. Like the exercise in this post, wireframing an app that you're building will quickly reveal problems that you didn't expect. Wireframing takes you out of the code and offers another context to critically examine engineering and design choices.

    Whether you're working solo or on a team, wireframing is a quick and powerful way to think about the structure of your app and to prioritize improvements.

    Going Deeper

  • Talking Across The Divide: Designing For More Than "It's Secure"

    If you're coming to the study of security and privacy from another field, it can sometimes be tough to get a clear answer to what seems like a simple question: Is this app secure? However, if you're working on the user experience for that software, it's critical that you understand the assumptions that security experts are making about your users and their behavior – and not just take the experts' word that all is well.

    I've written before that neither security nor usability is a binary property. In a nutshell, this means that there's a lot of gray area when it comes to deciding whether something is secure or insecure. Security enthusiasts often come at the problem by answering one question: Is this the most secure solution available? Meanwhile, non-experts are actually asking a different one: Is this secure enough for what I want to do? Like usability, security is defined as a function of a particular set of users and their needs – more specifically, as a function of the threats they face.

    Image of two buckets, labeled "secure" and "insecure," with a marble labeled "app" and a question mark.
    It’s often hard to classify an app as being secure or insecure without additional context about the user and their goals.

    If you're a designer, usability researcher, or other UX professional collaborating with a security engineer for the first time, it may be uncomfortable to push back and ask for more information about the app's security features, especially if the answers are full of unfamiliar acronyms and buzzwords. And it's tempting as a security expert to say something along the lines of, "trust me; this is secure." But digging in to this kind of cross-discipline dialogue can be essential for identifying places that users will get tripped up. Talking across the divide can prevent an app that is secure in theory from becoming seriously flawed in practice.

    Here are some questions that teams of security experts and UX professionals should be able to answer about the software they are building together. This kind of conversation is best held in front of a whiteboard, where team members can communicate visually as well as verbally. Security experts should challenge themselves to offer complete explanations in accessible terms. Their partners should be vocal when a concept is not clear.

    1. What is the threat model (probable set of attacks) that the software protects against? Who are the "adversaries" that are likely to try and subvert the software, and what features are in place to protect against them? Include both attackers interested in your infrastructure (either for monetary gain or for the lulz) and in your users' data (whether they're harvesting large quantities or carrying out a personal vendetta).
    2. What threats is the software currently vulnerable to? Don't forget to consider users who are at risk of targeted attacks, such as domestic abuse survivors, investigative journalists, or LGBTQ activists around the world (one resource to help you get started is our former fellow Gus Andrews' set of user personas for privacy and security). Remember in particular that some people are targeted by their telecom providers and even their governments.
    3. How does the UX help users understand both the protections the software offers and the protections it does not offer? Given the people who use the software (or are likely to use it in the future), does the UX do an accurate and reasonable job at conveying how it aims to keep its users secure? For example, if the software advertises encryption, is it truly to inform users or simply to smooth over their fears (pro tip: the phrase "military-grade encryption" is never a good sign for users)? If your chat app's messages automatically disappear, do people understand that there are ways those messages may still be retained?
    4. Are there things that users can do – or fail to do – that will expose them to additional risk? Does the UX help shape their behavior appropriately to counter this eventuality? For example, does the app periodically encourage users to protect their accounts with basic features like two-factor authentication? Does it just send two-factor codes by SMS or also support alternatives like app-generated codes or hardware tokens?
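As an aside on that last question: the "app-generated codes" mentioned above are typically time-based one-time passwords (TOTP, RFC 6238). As a minimal sketch of how they work – using only Python's standard library; the function name and parameters here are our own, not any particular app's API:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, now=None):
    """Generate an RFC 6238 time-based one-time password.

    secret_b32: the shared secret, base32-encoded (as in QR-code setup URIs).
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of whole intervals since the Unix epoch.
    counter = int((now if now is not None else time.time()) // interval)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): the low nibble of the last byte
    # selects a 4-byte window of the HMAC digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)
```

Because both sides derive the code from a shared secret and the current time, no network message (and no interceptable SMS) is needed, which is why app-generated codes and hardware tokens are generally considered stronger than SMS delivery.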

    This is just a starting point in what we hope will become an ongoing conversation between UX professionals and their security-minded colleagues. Great software is built around excellent user experiences, and keeping users' data safe is a core (if often implicit) user need. For this reason, we believe designers and UX researchers can be important advocates for user security. Conversations across the divide are an important first step to making this a reality.

    Would your team like help talking through these types of issues? We can lend a hand. Drop us a line at

  • Safeguarding Research Participants With A Bill Of Rights

    In this installment of our series on resources for field research, we discuss the participant's bill of rights. Additional resources include screeners and model releases for photography.

    Why Consent Matters

    Field research such as interviews and observations is an important part of Human-Centered Design. As important as learning about first-person, lived experiences is to the design process, the act of participating in an interview can feel awkward. There is an inherent power dynamic that puts researchers in a dominant position; for all that participants know, once they share a personal story, researchers are free to use it as they please.

    While an interview may seem innocuous, it can put participants in harm's way when handled improperly. Modern consent practices stem from a collection of unethical scientific studies such as the Tuskegee experiment, where Black men who had syphilis were not told that they were infected, nor that their condition was the primary reason for the study. This and other experiments from earlier eras caused lifelong trauma for their participants.

    While research on technology projects may seem less dangerous than medical experiments, it can also harm participants. For example, people can suffer serious consequences from online behavior, including lowered credit scores, loss of livelihood, and threats to their safety or the safety of their loved ones. When a researcher shares participants' stories, they are vulnerable to similar consequences if those stories can be traced back to participants through personally-identifiable details.

    Ethical researchers generally use a consent form that both informs and assures participants that they have rights. This is called "informed consent", and by signing the consent form, participants acknowledge that they understand what kinds of data are being gathered and how the data will be used.

    Photo of the author looking at papers with a study participant.
    Papers used during an interview, including the participant’s bill of rights.

    Balancing the power dynamic

    We use the phrase "participant's bill of rights" for our informed consent document to convey that the participant has control over aspects of the process. There are three main areas in the bill of rights:

    • The power to ask questions: Participants are empowered to ask questions and give feedback about the interviewer, including speaking to a superior. The form should include contact information for the interviewer's superior.
    • Compensation just for showing up: Participants can refuse to answer individual questions, refuse to participate, or leave, but still be compensated.
    • Controlling what's captured and how it's shared: Participants understand that researchers will not record the interview, take their photo, or quote them unless participants give their consent. Researchers will take appropriate measures to contact participants about any media they want to use publicly, and participants can review the media content before it is shared. They can also request the removal of any materials that have already been shared publicly.

    Example bill of rights

    Below is an example of a participant's bill of rights.

    We respect you and appreciate your time. Everyone participating in an interview has the following rights:

    1. I can ask questions about the interview, the organization, or the interviewer at any time
    2. I do not have to answer any question that I do not want to
    3. I can refuse for the interview to be recorded by video or audio, and I will still be compensated
    4. I can leave at any time, and I will still be compensated
    5. I can provide confidential feedback about my interview experience to the interviewer's manager
    6. I must approve the use of any photos, audio, video, or anonymized quotes that are used publicly, whether on a website, on a blog, or in the press
    7. Even after a photo, video, or quote has been published, I have the right to request its removal at any point in the future

    You can reach us anytime by calling or texting to [x] or by email at [x].

    Tips to Consider

    • Select an approachable name for the document: In medical or psychological contexts, these consent documents are reviewed by Committees for the Protection of Human Subjects. As a term, "human subjects" may be technically correct, but it is distant and off-putting. At Simply Secure, we intentionally selected "participant's bill of rights" to provide clear language and to empower participants.
    • Print the bill of rights on letterhead: Use your organization's name and logo to make this look official.
    • Be accessible: Bring two copies of the consent form so that participants can take one with them. This should include contact information for someone in your organization who is not the interviewer in case the participant wants to discuss a negative experience.
    • Take only what you need: Be mindful of appropriately secure channels for communication. Safeguard necessary physical documentation in a locked cabinet, and shred anything that you have no reason to keep.

    We invite you to adapt this bill of rights to suit your needs.

  • Model Release: Respectfully Sharing Stories

    This post is part of a series explaining our publicly available resources for user research. The previous installment covered how to write screeners to recruit participants. This week, we discuss how to get model releases to share photos from user studies.

    One approach among many

    At Simply Secure, we strive to balance study participants' privacy with building empathy in an audience of developers, policymakers, and designers by sharing study photos and stories. Meanwhile, many startups have used exploitative and borderline unethical user testing methods that make extensive use of behavior tracking. Privacy-preserving technology is evolving. Now is the time to develop best practices that both enable meaningful data collection on user experience and respect users' privacy. There are many ethical ways to handle data collection, and the following is an approach we used in one of our studies.

    We recently conducted a user study about participants' experience with surveillance and the strategies they use to preserve their privacy. There's an inherent paradox in recording a conversation about surveillance, and to acknowledge this, we decided that handwritten notes were sufficient to capture attitudes towards surveillance and personal strategies to get around it. Instead of recording audio or video during the interviews, our researchers only took notes on paper. In situations where anonymity is critical to our participants' safety, we asked them to tell us what they'd like to be called rather than collecting their names and contact details.

    Why take photographs?

    As a designer, I'm a visual person and find photos to be a powerful note-taking tool and memory aid. When I look at a photo from a user study from 10 years ago, I can remember specific details of the conversation and insights from the study even though I haven't interacted with the participant since then. Not only are photos useful to designers as memory aids, they are powerful in inviting an audience into a user's story. They help the audience internalize learnings from a study.

    Photography is always an opt-in process

    Despite photography's benefits as a mnemonic, participants should never feel pressured to have their photo taken. At the beginning of Simply Secure interviews, participants are given a Participant's Bill of Rights that outlines consent procedures, including the ability to refuse any questions, quit and still be compensated, and refuse photography.

    In a winter 2015 study, we helped participants understand how the photos could be used by showing them photos from the Humans of New York (HONY) Instagram feed. HONY combines photos of people in public spaces with descriptions of what they are doing or thinking. These powerful stories are akin to bite-sized doses of user research. We used HONY's feed to demonstrate how we might share their stories and personally identifiable photos on the public web.

    In the final 5-10 minutes of our session, we asked if people were comfortable posing for photos. Doing this at the end is beneficial because participants have a clearer idea of who we are and are conscious of the stories they've shared. Participants could choose from three types of photographic participation:

    • No photography
    • Being photographed in a personally meaningful but non-identifiable way (such as their shoes or purses)
    • Being photographed in an identifiable way

    What a model release means

    Our default is to avoid collecting personally-identifiable information, but in instances where we do, we ask study participants to sign our model release. This step is central to how Simply Secure balances building audience empathy and protecting participants' privacy.

    There are clear concerns that modeling is exploitative and harmful to people, and the choice of using a model release—and calling it a "model release"—is an intentional one.

    An important ethical consideration of model releases is that models lose control of their images and cannot choose how their narratives will be paired with them. When using HONY as an example, we showed a photo of a man who talked about the aftermath of his domestic violence conviction. By talking through this provocative example, we were confident that our participants appreciated how hard it can be to get a nuanced picture of people's stories.

    We explained that although we had no plans to put the images on Instagram, we would like to share them on our blog and in presentations. That's a much smaller audience than popular Instagram feeds, but there are implications of sharing photos anywhere on the open web.

    Some examples

    Of the 12 participants in our study, three signed model releases, five agreed to non-personally-identifiable photos, and four did not want to be photographed. We took photos with an iPhone, and everyone was offered a chance to review them. Rather than candid shots, we took staged photos to portray the participants as powerful, positive people. After some minor photo filtering and cropping, we sent copies of the photos to participants who signed model releases to get their approval for public use.

    Photos of participants, showing their faces.
    Study participants who signed model releases for sharing photos.

    Photos of participants, not showing their faces.
    Non-personally identifiable photos, taken with verbal consent, but not a signed model release.

    We're still learning

    Since our study was about surveillance and personal strategies to preserve privacy, we asked participants to show us examples of how they use their phones. With their permission, we photographed their phones. We blurred out their names and personal information before sharing any photos publicly, but photos showing home screens, personal images, and wallpaper required extra sensitivity.

    Consider the photos below. One of the participants had a photo of herself as the wallpaper. She was comfortable sharing the photo and signed a model release. But what should we do if her wallpaper were a photo of another private individual? How should we approach photography when the wallpaper is a photo of a child? Complicating matters, does it change things if the participant is the parent versus a relative of the child?

    Photos of several mobile phones, showing the home screens.
    Participants’ phones. Photos shared by consent with identifiable text blurred.

    We are still learning at Simply Secure, and we believe that a human-centered approach should be a best practice in developing privacy-preserving software. We welcome discussion about best practices for ethical research.

    Tell us what you think of how a background image of other private individuals fits into personally identifiable information. If you are a researcher and want to join the research dialogue on our Slack channel, email

  • Event: UX in a High-Risk World

    As you know, building great software depends on a deep knowledge of users. If you're working on a project targeted at people who operate in high-risk situations, such as activists and journalists, it can be hard to get the quality insight you need to design features and experiences that will work for them.

    If you're based in the San Francisco Bay area, there's an exciting event happening in July that focuses especially on user experiences for this population. UX in a High-Risk World, hosted by Internews on Thursday July 14th, will bring together "visionary leaders who are piloting and developing solutions for activists facing censorship, hacking, surveillance, and suppression in some of the world's most challenging environments".

    Graphic reading Thu Jul 14 at 6:30 PM San Francisco, CA – UX in a High-Risk World – By: Internews

    This is a great opportunity to talk not just about building secure, privacy-preserving software, but about the lived experiences of users who have an urgent need for it. We think that this is an important conversation, and we're offering support by sharing it with our community of partners and volunteers.

    Internews will also announce the UXFund Call for Proposals at the event. As the event description notes, "UXFund is a small-grants program to support usability and accessibility improvements for open-source digital security tools."

    Attendance is free but admission is limited, so if you want to go, register before the July 6th deadline!

  • Compelling Color

    Great user experiences are born through the hard work of professionals with a variety of skills. As illustrated by the UX unicorn we've seen before, there's a lot that goes into what we call "design" or "usability".

    Image of a unicorn composed of different elements of the user-experience process
    The skills and responsibilities of an effective UX team. Originally published in Building an enterprise UX team by Rachel Daniel (also on LinkedIn), UX Director at MaxPoint. Used by permission.

    Looking at this unicorn illustration, it may be tempting to dismiss visual design as a "nice to have" skill. After all, it's possible to make a basically functional piece of software without paying any particular attention to the visual design (just as a horse can get by without a horn). And unlike interaction design, which is possible to evaluate empirically through user studies, it can be hard to pin down exactly what makes a visual design "good". This subjectivity can be compounded by visual design’s tendency to evolve and trend over time.

    Screenshots of iOS 6 and iOS 7
    Observe the difference in the visual design of the icons in iOS 6 (left) and iOS 7 (right). This change appears to be driven largely by a desire to make the apps look fresh, new, and modern.

    But visual design is about more than just making something pretty or trendy. Visual design includes typography, iconography, and color, all of which can contribute to or detract from usability. These basic elements of visual design usually follow well-understood rules.

    Two versions of the phrase visual design is key, one of which is easily legible and the other of which is not.
    Typography and color selection can make or break a user’s experience.

    Let's focus for now on some basic ideas around color. A consistent and tasteful palette can be a great way to make your software, website, newsletter, and print materials more usable and attractive to your users. Simply Secure worked with a professional visual designer to come up with our colors (as documented in a post about our style guide), but it's possible for even people new to design to select a set of tasteful hues. Here are some tips to get you started.

    Read up on the basics

    Are you confused by terms like hue, saturation, and brightness, and what bearing they have on making a color scheme attractive? Ria Carmin's developer-targeted post addresses exactly that. (See "Rule 3: Color".) She does a great job of breaking down different types of color palettes, such as those that follow a monochromatic, analogous, complementary, or triad scheme.
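To make those scheme names concrete: each one is a rule for rotating the base color's hue around the color wheel. As a rough sketch, assuming Python for illustration (the function name and hex-string convention are our own), the standard rotations look like this:

```python
import colorsys

def palette(base_hex, scheme="analogous"):
    """Derive a small palette from one "#rrggbb" color by rotating its hue."""
    r, g, b = (int(base_hex[i:i + 2], 16) / 255 for i in (1, 3, 5))
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    # Hue offsets (as fractions of a full turn) for common palette schemes.
    offsets = {
        "monochromatic": [0.0],              # one hue; vary lightness instead
        "analogous": [-1 / 12, 0.0, 1 / 12],  # neighbors +/- 30 degrees away
        "complementary": [0.0, 0.5],         # the opposite side of the wheel
        "triad": [0.0, 1 / 3, 2 / 3],        # three evenly spaced hues
    }[scheme]
    out = []
    for off in offsets:
        rr, gg, bb = colorsys.hls_to_rgb((h + off) % 1.0, l, s)
        out.append("#%02x%02x%02x" % (round(rr * 255), round(gg * 255), round(bb * 255)))
    return out
```

For example, the complement of pure red comes out as cyan, and a triad of red yields red, green, and blue. Real palette tools layer taste on top of this math, but the hue-wheel relationships are the foundation.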

    Screenshots of three color picker interfaces.
    Traditional color pickers can be limiting, imprecise, or hard for color novices to understand.

    Choose your tool carefully

    When making a color palette, don't work with individual colors in isolation. Many traditional color pickers in graphics software can be limiting, imprecise, or hard to understand. Free online tools such as Adobe Color, Paletton, and Coolors can help you identify a collection of colors that work well together. To play it safe, use the default settings and carefully modify one color at a time to watch the rest of the palette adjust.

    Don't go to the extreme

    While it's tempting to try and send a strong message about your software by using lots of intense colors, remember that moderation can go a long way toward making your project look professional and credible. Developers in the security community seem particularly drawn toward dark shades, which can feel unapproachable to some users. Another tendency is to pick the brightest, most saturated colors available, which can also be off-putting. Finally, remember that color is often used best as a highlight, not as the main body. Whites and greys with a small, coherent selection of accent colors allow the user's eyes to quickly identify what belongs in the background and what needs their attention. For example, Simply Secure uses lots of bright colors on our webpage, but the main body of each page is a neutral white or off-white.

    Has your team been struggling with color? Did you recently go through a redesign that you're proud of? We'd like to hear from you; please get in touch!

  • Selecting Research Participants for Privacy and Beyond

    A screener is a questionnaire that helps researchers recruit the most appropriate participants for their user study research.

    Here is an example we used for our mobile messaging study in NYC. Blue Ridge Labs handled the recruiting. Most of this screener's questions are a standard part of how they work with potential participants. Our questions, in red, focus on messaging and attitudes towards privacy. Additional questions about VPN use, email, and getting online were for our Fellow Gus Andrews's research.

    This example question sorts candidates into how frequently they message:

    About how many messages would you say you send via your phone in an average week? This includes text, Facebook messages, WhatsApp, etc.

    • 10 or fewer
    • 11-30
    • 31-50
    • More than 50

    Questions like this can allow researchers to select a balanced mix of participants – for example, a few who message infrequently and a few who message heavily. In our case, we wanted frequent messagers, and used this question to meet with people sending 31 or more messages per week. (In practice most participants sent more than 50.) In recruiting terms, we screened out people sending 30 or fewer messages.
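In code terms, the screen-out rule described above is just a threshold filter over responses. A toy sketch, with invented names and message counts:

```python
# Hypothetical screener responses: reported messages sent per week.
candidates = {"Ana": 8, "Ben": 24, "Cam": 45, "Dee": 120}

# Screen in frequent messagers: 31 or more messages per week.
# Everyone below the threshold is screened out.
screened_in = sorted(name for name, count in candidates.items() if count >= 31)
```

The value of the screener isn't the filter itself but deciding, before recruiting starts, exactly where that threshold sits and why.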

    Screeners are a regular part of user research for studies on everything from breakfast-cereal selection to the usability of enterprise software. Professional recruiters rely on screeners to match participants with projects. Even if a team handles their own recruiting, such as by posting a listing in a café or on a message board, screeners can still be helpful. The process of creating one helps a team be very specific about who they want to talk to, and forces them to clarify who they think will use (or not use) their product.

    Tips for Effective Screeners

    Here are some considerations for writing effective screeners for privacy-preserving technologies and beyond.

    Don't ask for information you don't need.
    When people complete a screener, you gather personally-identifiable information about them and are responsible for storing it securely. Is gender identity relevant? Income? Zip code? Nationality? There are good reasons for collecting answers to those questions, but if they aren't directly relevant to your study, don't ask. Potential participants are more likely to complete a shorter, more-targeted questionnaire. The faster you get enough completed questionnaires, the sooner you can recruit enough candidates to start your study. Our example screener included questions from our partner Blue Ridge Labs that weren't immediately relevant to our work. The disadvantage of having extra questions was offset by the advantage of having our partner manage participants' information. Simply Secure has no way to identify or contact the participants since we never had access to the data.

    Instead of yes/no questions, ask multiple-choice or open-ended questions.
    Yes/no questions tend to be leading and encourage potential participants to give the answer they think is expected rather than a truthful one. For example, ask "How many text messages do you send in a week?" instead of "Do you send more than X messages?"

    People are more than their demographics.
    Think of ways to ask questions about behaviors or attitudes, not just descriptive demographics like age. "So easy your mother could use it" assumes that mothers have more trouble using technology than other groups. Don't assume that people in a particular demographic will all be the same or have the behaviors you're looking for. Tech enthusiasts and tech-avoidant people come in all ages, sizes, colors, and genders.

    If the research is in person, rather than online, screen for people who want to share their stories.
    Screeners are used for both qualitative, in-person research and quantitative, online research like surveys. If you're recruiting for in-person research, you'll want to talk to potential candidates first to see how comfortable they are responding to questions. Having a thick accent, making grammar mistakes, or being shy shouldn't be barriers to participating in an interview. But someone who is ambivalent about the topic or uninterested in answering questions won't be as good at inspiring a design team to empathize with them. Our screener with Blue Ridge Labs used "must talk in more than one-word answers" as a criterion.

    Availability questions can shorten time between screener and study.
    Ask which of several times participants are available for a feedback session. For example, consider this question:

    Select all times you are available to meet at [address]:

    • Monday, June 3 at 1:00pm
    • Monday, June 3 at 2:30pm
    • Tuesday, June 4 at 6:00pm
    • Tuesday, June 4 at 7:30pm
    • None of the above

    Questions like this remove an extra scheduling step between the screener and the study. Be sure to ask how people want to be contacted to confirm the times they selected. If you plan to meet someone Monday, don't send email to a work address they don't check over the weekend.

    Make room for non-users.
    Teams can learn a lot from carefully listening to people who aren't already interested in their projects. Making room in the screener to capture people who, for example, haven't heard of two-factor authentication can lead to important insights about designing broadly-appealing, generally-accessible software.

    Get the Right People

    Screeners are helpful for making sure studies get the right mix of participants. They are also part of the foundation of a process for getting critical user feedback early and often. Taking the time to ask a few basic questions up front can set up the whole study – and project – for success.

    Ame and a participant sitting at a table.
    Learning from a Blue Ridge Labs participant recruited using our screener.


    Simply Secure's NYC Mobile Messaging Screener

    Further Reading:

    GV's How to Find Great Participants (includes screener worksheet)
    Spring UX's Managing the Recruit

  • Meeting Users' Needs: The Necessary Is Not Sufficient

    Building great software requires understanding what users want and need. If you’re building privacy-preserving software, this includes understanding the privacy threats that your users face.

    Photo of a woman standing on a city sidewalk.
    One of the participants in Ame’s NYC study.

    When Ame set out to talk to people in the New York City neighborhoods of Brownsville and Harlem about their experiences with mobile messaging, she wanted to amplify voices that are frequently underrepresented in the software community. (Many thanks again to Blue Ridge Labs for helping her connect with study participants.)

    Big tech companies usually end up focusing their user research on the affluent Silicon-Valley dwellers that resemble their employees. Many funders of internet-freedom software are interested in the needs of activists and journalists. As a result, the software needs of other people – folks who aren't activists, but who have modest financial means – go unheard by the developers, product managers, and other decision-makers who shape the features and presentation of software today.

    Important nuances

    Ame began sharing findings of the study with researchers, developers, and members of our Slack community to seek feedback. We were excited and gratified to see it referenced in a recent blog post by a Google security engineer who contributed to Allo, the new instant-messaging app that the company announced during I/O. (Note: the engineer wrote the post as a personal opinion, not in his official capacity as a Google employee. He subsequently made several changes to the wording of the post, we assume at his employer’s request.)

    The blog post highlights that study participants expressed concern around physical threats like shoulder surfing, and that they see disappearing messaging (where messages are automatically deleted after a certain amount of time) as a key protective feature.

    We were pleased, because the post signified that this research was already reaching software decision-makers. It validated our belief that this kind of study, which amplifies the voices of underrepresented users, holds real potential to influence the features and priorities of software teams.

    However, we were less pleased by this part of the post: “Most people focus on end-to-end encryption, but I think the best privacy feature of Allo is disappearing messaging. This is what users actually need when it comes to privacy.”

    It’s true that our writeups of the study thus far have talked about the mismatch between privacy enthusiasts’ priorities (e.g. end-to-end encryption) and participants’ requested security features (e.g. disappearing messaging). However, we have never argued that disappearing messages should come at the expense of end-to-end encryption.

    Participants in the study saw disappearing messaging as an important feature because it combats a set of threats that they feel they have some control over. That doesn’t mean that those are the only threats that they care about. Indeed, participants also expressed concern about government surveillance, while simultaneously conveying a sense of inevitability. If you believe the government will always have the power to spy on you, why would you waste time trying to find software that prevents that spying?

    False dichotomies

    The Allo team has faced significant criticism from members of the security community because they plan to make its end-to-end encryption opt-in rather than on by default. They argue that this allows users to upgrade their security if they want, but otherwise have immediate access to chatbot-style, AI-powered features. Until we can actually use the product, it’s hard to know whether this dichotomy – privacy vs. chatbot goodness – is really a necessary one. Is it truly impossible to both provide end-to-end encryption for interpersonal channels and offer an advanced bot interface on another?

    If it is the case that users have to choose between a feature that offers chatbot functionality and one that works to preserve their privacy, let’s all be honest about the decision that’s being made. Don’t imply that disappearing messaging is sufficient because it’s what users are already asking for. Meeting user demands is a necessary part of building software, just as protecting against the threats they’re familiar with is a necessary part of ensuring their privacy. But that doesn’t make either sufficient. Software teams need to use their expert knowledge to offer users features that they demonstrably need, even if they don’t know to ask for them. Software that truly meets users’ privacy needs will protect them against the spectrum of threats they genuinely face, not just the ones they know to talk about.

    Connect & share your thoughts

    Seeing this study interpreted by one software engineer has already taught us a lot. We now know that the way we’ve been presenting these findings has not gone far enough to contextualize how they should be interpreted. This is something we will work to improve.

    In the meantime, we are eager to get your thoughts and opinions on this work as well. Please take a look at a draft technical report describing it – available <a href="">here</a> and in our Github repo – and let us know what you think.

    Finally, if you live in NYC and you’re interested in connecting with some great people doing outreach around security and cryptography, we encourage you to check out @cryptoharlem. It is one of many groups around the world working with their local communities to improve access to privacy-preserving tools. If you are part of such a group, we’d love to hear about your experiences, and talk about the possibility of working with you to amplify the needs of people in your community.

  • How to Name Your App

    Naming software is hard because the name needs to convey a lot of meaning about what the program does to an unfamiliar audience, and do it all using only a word or short phrase. You want something memorable and easy to say – which becomes more complex when designing with a global audience in mind.

    Android's recently-announced competition to name the latest operating system has been met with skepticism. The accompanying parody video pokes fun at naming as an unskilled and silly exercise. The name for something like the latest version of an operating system doesn't really matter from an end-user point of view. Only super-technical people will notice or care. Call it Nutella McNutella face, as TechCrunch suggested, or just a version number, and a general audience will be satisfied that it's the latest release.

    Why Names Matter

    In contrast to an established operating system, an app's name is an important way to differentiate what the program does and to encourage end-users to try it. Names are particularly important for internet-freedom projects where people may have limited bandwidth, be searching for information in a language they don't speak well, or be trying to make a quick choice between similarly-named apps in an app store. A strong name can help build trust and drive adoption among users who need the project most, and get more people communicating securely.

    Even open-source software developers need to consider their "brand" – that is, the way they express their project’s benefits and values. Coming up with a memorable, compelling, and differentiated name for an app is a skill practiced by brand strategists. Most brand strategists work in the highly-commercial world of advertising – i.e., encouraging people to buy things. However, that same skill can help developers of all kinds reach a general audience and enable them to better protect their privacy. Here is a set of practices from commercial naming that can be adapted to an open-source context.

    Photo of a McDonald's restaurant
    McDonald’s is one of the most-recognized name brands in the world. Imagine a world where privacy-preserving software names are as identifiable. Image by Mike Mozart, CC-BY 2.0.

    Clarify Your Purpose

    A successful name conveys something about the app, either functional benefits or an attitude. Clarifying the values the app represents is an important first step. Naming conversations can be difficult because they can expose differences in opinion among contributors about the most important benefits of the app.

    To get everyone on the same page, try filling in the following sentence, leaving the name blank for now.

    For [type of user], [name] is a [frame of reference] that [key benefit] because [reasons to believe].

    E.g., "For teenagers, Snapchat is a photo messaging app that hides messages from your parents because the interface is too confusing for adults."

    Here are some thought-starter questions adapted from Brand Strategy Insider to help you complete the above sentence.


    Function and Benefits:

    • What does the app do?
    • When the app does a better job than competitors, how is it different? (e.g. Faster? Cheaper? More fun? More reliable?)

    Culture and Purpose:

    • What will the development team never compromise on?
    • What are the team's core beliefs?
    • What larger goal or cause does this app serve?
    • What does this app want to change in people's lives?
    • What are the ideas that the customer and development team agree are important?


    Personality:

    • Is the product serious or playful?
    • If the app were a drink, what would it be? (E.g. home-brewed oolong tea versus a Starbucks mocha?)
    • If this project were a person (or celebrity), who would it be?

    Aspirational Self-Image:

    • What does using the app tell others about the customer?
    • How do customers want to be seen?

    Generating Ideas

    After you've answered the questions above, work with your team to identify a list of three to five adjectives that describe your app’s brand. It may take some time to reach agreement on the adjectives.

    Once you have the adjectives, brainstorm a list of at least 20 name options that reflect those adjectives. Let the list sit for a few days, then review your choices with fresh eyes. Select your top choices and move on to the exercises below in Choosing a Name.

    If you have time and can find willing participants, it's a good idea to go through this brainstorming process again, starting over from your adjective list, and including different people. A second, independent round of brainstorming before moving on to choosing a final candidate is a good way to get comprehensive coverage.

    To help seed ideas for brainstorming, look at example naming resources, such as Wolff Olins' Naming Handbook, for inspiration. Here are some categories of possible names, with U.S.-centric examples:

    • Acronyms (UPS, IBM)
    • Descriptive (Whole Foods, Airbus)
    • Alliteration and Rhyme (Reese's Pieces, Dunkin' Donuts)
    • Evocative (Amazon, Crest)
    • Neologisms (Wii, Kodak)
    • Foreign words (Volvo, Samsung)
    • Founders' names (Hewlett-Packard, Disney)
    • Geography (Cisco, Fuji Film)
    • Personification (Nike and other mythological figures)

    Take a critical look at the apps on your phone. What kind of names do they have?

    My home screen has:

    • Translations from analogue: 7 apps. (e.g. Clock, Notes)
    • Descriptive: 3 apps. (e.g. Headspace, Lyft)
    • Evocative: 3 apps. (e.g. Kindle, Signal)
    • Neologisms: 6 apps. (e.g. Instagram, Trello)

    Choosing a Name

    Once you have a list of candidate names, start vetting them.

    Open the door and shout out the name.
    Saying the name out loud, such as "Hey, come look at this thing on XYZ" is a way to bring your project to life. If it's hard to say or you feel embarrassed saying it, then it's probably not the right name.

    Try using the name in a sentence.
    Is it hard to spell or type? How does it fare with speakers of other languages?

    What's the verb that means to use the app?
    Right now in San Francisco, WhatsApp is starting to become a verb, but I hear "message me in WhatsApp" or "text me in WhatsApp" as well as "WhatsApp me." Consider whether you want your name to work as a verb. For example, Google Drive confuses many people, in part because no one knows what verb to use. Even habitual users struggle to describe using it. "Link it to me on Google Drive?" or "Share it with me on Google Drive?" "Drive" works as an analogue for a disk drive, but the verb phrase probably isn't "Drive it to me." The poor name choice makes it difficult to talk about the program and to form a mental model of how its sharing features work.

    Is the name already in use?
    Is the domain name available? What search results come up on websites and app stores? Are there legal conflicts? (A rigorous review may require the help of an attorney.)

    Is it good enough?
    Keep in mind, naming is often anticlimactic. It's fine to settle on an option that elicits a neutral reaction rather than love-at-first-sight enthusiasm. The best names are often straightforward or obvious. "Dropbox" is one example of an app with a clear, straightforward name.

    Does it make sense to potential users?
    Get user feedback on your choices. This can be as simple as saying that you're working on an app called X, and if someone asks you what it does, turn the question around and ask them what they would expect an app called X to do. Making side-by-side comparisons of app store descriptions or websites introducing the app can help clarify how end-users perceive different names.

    What's in a Name?

    The name and the values your “brand” expresses are how people find your app in a sea of similar alternatives. It helps them distinguish trustworthy options from snake oil. Sharing the name is how they encourage their friends to use it, too. If your goal is to get a privacy-preserving app into the hands of as many people as possible, a strong, memorable, and evocative name is essential.

  • Developers Are People, Too: Supporting Cryptographic Agility

    On Monday I had the pleasure of speaking at a Workshop on Cryptographic Agility and Interoperability held at the National Academies by the Forum on Cyber Resilience.

    The assembled group of academics, policy-makers, and practitioners touched on a variety of problems around the practical application of cryptography in production software. The main focus was on the challenges and benefits associated with cryptosystems that can be updated or swapped out over time (and thus exhibit “agility”). The organizers asked us to consider questions such as the following.

    • Why is cryptographic agility useful and what are its potential risks and impacts?
    • What approaches have been attempted for improving cryptographic agility, and how successful have they been?
    • How might privacy and human rights be affected by cryptographic agility?
    • What are the consequences of cryptographic agility for the interoperability and usability of communications systems?
    • What are the key opportunities for standards bodies, governments, researchers, systems developers, and other stakeholders with regard to cryptographic agility?

    The Forum will issue an official report of what was said in due course; for now, here are some of the thoughts I shared with the group.

    Who are the users?

    Whenever I encounter a group of security experts talking about designing user-facing systems, I like to remind them that their users are almost certainly less experienced with security than they themselves are. This doesn’t mean that their users are stupid or ill-informed, and nine times out of ten it doesn’t mean that the experts should go about trying to educate their users to achieve a shared worldview, either. But it does mean that the experts need to put effort into building empathy with their users, and into setting them up for success.

    Photo of hands on a keyboard
    Developers are users, too – of APIs, standards, and libraries. Image CC-BY 2.0 WOCInTechChat.

    In the case of cryptographic agility, “users” aren’t just the consumers buying and using mass-market software. They are also the software developers, architects, and decision-makers who are trying to decide whether and how to integrate cryptography into their systems. These developers are the ones who must benefit first from policies, standards, and practices if we are to use cryptographic agility to achieve resilience against software vulnerabilities.

    How to help end users

    Developers want to do the right thing for their users. Users want to do the right thing to protect their data, too, but are often even less experienced with security than developers. One big risk of cryptographically-agile systems is that developers force decisions onto users who are ill-equipped to make them. (“Hmm, we support three different encryption schemes because we’re agile. Which one should we use? Let’s ask the user!”) What can developers do to help users?

    • Good defaults: Developers should choose default settings for users that are secure, and that strike a balance with performance. This goes against a custom of security-expert culture: without knowing the user’s threat model, it may feel wiser to set no default and let the user choose. However, many users find such decisions daunting; asking users to choose among unfamiliar options can frustrate them into giving up on the program, or guessing at an answer. At the other extreme, some developers may be tempted to set the default to the most conservative, cryptographically-strong setting. This can be problematic in cases where there is a significant performance impact.
    • Choices come with recommendations: In cases where the user must make a choice – or may be inclined to alter the default setting – the developer should offer guidance to help them. In some cases this may involve simply stack-ranking the options (“Most secure” through “least secure”). In cases where there is not a clear ordering, another approach may be scenario-based menus that highlight the relative pros and cons of each option (“Strong data protection, with a 10% slowdown on uploads”).
    • Transparency: Developers should provide a mechanism by which curious users can identify exactly which cryptographic library is being used in a program. This will help ease users’ minds when a vulnerability is discovered – “Ah, this is running OpenSSL X.X, so I’m safe!” – and can help the community more easily hold developers accountable for updates. It can also be useful in increasing the visibility of closed-source, country-mandated cryptographic suites, which many security experts worry may contain backdoors.
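    To make the "good defaults" and "transparency" points concrete, here is a minimal sketch of what agile password hashing might look like, using only Python's standard library. The scheme names, parameters, and storage format are illustrative choices for this example, not a vetted design: each stored hash is tagged with the scheme that produced it, callers get a secure default without having to choose, and the tag gives curious users a way to see exactly what protects their data.

```python
import hashlib
import hmac
import secrets

# Registry of supported schemes. Parameters are illustrative, not a
# production recommendation.
def _scrypt(pw: bytes, salt: bytes) -> bytes:
    return hashlib.scrypt(pw, salt=salt, n=2**14, r=8, p=1)

def _pbkdf2(pw: bytes, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", pw, salt, 200_000)

SCHEMES = {"scrypt": _scrypt, "pbkdf2": _pbkdf2}
DEFAULT = "scrypt"  # the "good default": callers never have to choose

def hash_password(password: str, scheme: str = DEFAULT) -> str:
    salt = secrets.token_hex(16)
    digest = SCHEMES[scheme](password.encode(), bytes.fromhex(salt))
    # Self-describing format "scheme$salt$digest": the scheme tag is
    # the "transparency" hook, and lets old hashes be upgraded later.
    return f"{scheme}${salt}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    scheme, salt, digest = stored.split("$")
    candidate = SCHEMES[scheme](password.encode(), bytes.fromhex(salt))
    return hmac.compare_digest(candidate.hex(), digest)
```

    Because every stored value records which scheme produced it, the team can later add a stronger scheme, change `DEFAULT`, and transparently re-hash each account at its next successful login – agility without ever asking the end user to pick an algorithm.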

    Developers, we <3 you

    There’s more that the security-expert community can do to help developers. Here are some broader-reaching ideas.

    • Algorithm guidance: It’s not enough to simply say “these are the algorithms available”, or “these are the algorithms approved for use”. Authoritative entities – be they government agencies like NIST, standards bodies like ISO, or educational materials like textbooks – should try whenever possible to offer unambiguous guidance as to the relative benefits and drawbacks of algorithms. There is broad consensus in the security community on which algorithms are reaching their end-of-life and which ones are still fresh, but average developers don’t have easy access to this information.
    • Programming education: It is a time-honored tradition for practitioners to complain that academic institutions aren’t preparing students well for “the real world”. There are many critical areas of programming practice that receive no attention in many undergraduate programs, such as writing automated tests for code. For what it’s worth, I would like to add the cryptography lifecycle to this list. In addition to offering guidance around the pros and cons of different algorithms, security courses should require students to spend time thinking about how a program’s architecture impacts its resilience in the face of cryptographic vulnerabilities over time. It’s not enough to design a system that uses a cryptographic library well; students must also learn to plan for a library’s obsolescence.
    • Study developers: In the user-experience community we understand that studying our users is an essential part of building systems that work for them. If we are to understand the current practice of cryptographic agility – what’s really working for developers, what challenges they face, and why they make the decisions they do – we can’t just convene experts to talk about the problem. We must use social-science qualitative-research methods to actually talk to developers in the context of their work, probe their practices, and uncover their lived experiences. 
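    Planning for an algorithm's obsolescence can be sketched in a few lines of Python (a stdlib-only illustration under simplifying assumptions, not a production design): tag every signed token with the algorithm that produced it, verify against a list of still-trusted algorithms, and always sign with the current one. Retiring an aging algorithm then becomes a one-line change rather than an architectural overhaul.

```python
import hashlib
import hmac

# Newest first; deleting "sha1" here is the entire retirement plan.
TRUSTED = ("sha256", "sha1")
CURRENT = TRUSTED[0]

def sign(key: bytes, payload: bytes) -> str:
    # Tokens are self-describing: "alg:mac:payload_hex".
    mac = hmac.new(key, payload, CURRENT).hexdigest()
    return f"{CURRENT}:{mac}:{payload.hex()}"

def verify(key: bytes, token: str):
    alg, mac, payload_hex = token.split(":")
    if alg not in TRUSTED:
        return None  # algorithm already retired; reject outright
    payload = bytes.fromhex(payload_hex)
    expected = hmac.new(key, payload, alg).hexdigest()
    return payload if hmac.compare_digest(mac, expected) else None
```

    During a migration window the system accepts tokens signed with any trusted algorithm but only ever issues new ones with the current algorithm, so old material ages out naturally.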

    Bridging the people-tech gap

    Simply Secure has multiple stakeholders, and in technical circles we often try to be the voice of the user when users aren’t in the room. We bring that same passion to advocating for the needs of cryptographers, software engineers, and fledgling computer scientists. We work on tools for people who just want to communicate with their friends – who treat computers as black boxes – and for people who are passionate about writing good, secure, usable code. Let us know how we can do more.

  • Chatbots, UX, and Privacy

    Chatbots, or conversational programs that simulate interactive human speech patterns, are a hot topic in UX right now. Microsoft CEO Satya Nadella recently claimed that “bots are the new apps”, and that they are the interface of the future for tasks like ordering food and booking transportation. In San Francisco, tech elites use a multitude of oft-parodied services like Wag to find dog walkers and Rinse to have their laundry done. However, the appeal of a single integrated interface to multiple apps is obvious from a UX point of view, even as the social implications of so much “efficiency” are still being debated.

    Back to the Command Line

    Prompt bills itself as the “command line for the real world”. It uses text to integrate with over 1,000 services – including commerce (e.g. Domino’s Pizza), productivity (e.g. Evernote), and home automation (e.g. Nest). With Prompt, it’s possible to get directions from Google Maps or order an Uber to drive you there simply by sending text commands.

    Screenshot of a conversational UI.
    Screenshot from Prompt.

    Driving everyone from the interactive world of apps to the visually impoverished world of the command line feels like a step backwards to many designers, including me. But we can interpret this shift as a response to the usability challenges of working across multiple apps on a mobile OS.

    Chatbots versus Better Apps

    Dan Grover’s excellent post Bots Won’t Replace Apps, Better Apps Will Replace Apps clearly illustrates the UX implications of what he describes as “Silicon Valley phone OS makers’ growing failure to fully serve users’ needs, particularly in other parts of the world.” I recommend reading the whole article, but the screenshots alone tell a compelling story.

    Dan is product manager at Chinese mobile messaging platform WeChat, which works to embed services in its core interface graphically rather than textually. His examples offer a view into the world of the Chinese-language mobile experience and serve as a counterpoint to the hype around chatbot interfaces. For example, he contrasts a pizza ordered via 73 taps in a conversational UI with 16 taps in the graphical WeChat equivalent. Even though click/tap counts are an imperfect way to evaluate usability, they are one illustration that advocates of the so-called efficiency of chatbots might not have the whole story. Textual interfaces work well for some users in some contexts (system administrators and programmers have embraced them for decades!), but that doesn’t mean that they will work everywhere for everything.

    Screenshot of a conversational UI with a corresponding column of tap counts on the right.
    Example transaction from Microsoft Bot Framework showing 73 taps to order pizza. Image from Microsoft.

    Screenshot of a GUI interface with tap counts indicated.
    WeChat interface for ordering Pizza Hut in-app, showing 16 taps needed to complete the transaction. Image from Dan Grover.

    These chatbot-versus-graphical interactions show different relationships between messaging apps and other special-purpose apps. For example, ride-sharing service Lyft uses the phone’s native text-messaging app to notify passengers that their ride has arrived, but passengers can’t order a ride from within the native messaging app. WeChat started as a messaging app and has expanded to take on activities done by special-purpose apps in other contexts.

    Security Implications of Chatbots

    Telegram, which tries to position itself as a platform that keeps users’ data secure in a credible way (despite significant challenges on that front), gives developers tools for building bots. It even offers prize money to developers using the Telegram Bot API. But how do privacy and security fit into this landscape? Should we be advocating for the equivalent of end-to-end encryption in this kind of chatbot universe?

    From a human-centered point of view, we can expect that communicating with a bot sets end-user expectations that their messages are being read by machines. It’s an easy inference that their messages are saved and archived by the bot owner and used as training material to improve the program over time. Just as people who call a customer hotline are informed that “This call may be monitored or recorded for training purposes,” people have an expectation that some unseen entity is eventually reading the message. Otherwise how would they know what kind of pizza to send to which house?

    The expectation that “secure” chats are read by unknown parties has the potential to change users’ mental models of privacy and confuse their understanding of what “secure messaging” means in other contexts. Further research is needed to understand the implications and how to communicate security properties of different platforms.

    Chatbots as Security Coaches?

    Chatbots are an intriguing output format for explaining security concepts. In this example from Slack, a bot messages me to let me know that a file’s sharing permissions have changed.

    Screenshot from a Slack’s conversational interface indicating that a private file was shared in a particular channel by a particular user.
    Screenshot from Slack.

    This is an effective message because it’s actionable. The proactive information (which appeared to me in a private channel, with accompanying notification) gives a sense of immediacy. I know who shared what file with whom, and it’s easy to check the contents of the file. I am one click away from being able to ask Scout about the action she has just taken.

    This approach could be adapted to a number of contexts. Many large service providers send notifications by email when a user’s password has been modified or other important account details have changed. A conversational UI could not only be a prompt and friendly way to share this information with users, but could also offer users an opportunity to take immediate action if the change was unwanted. Thinking more aspirationally about connected homes, smart cities, and IoT applications, chatbots could help people understand the chain of custody of their data. For example, they could notify people that their image has been captured on video and shared with a third party – or offer them an opportunity to opt out of such a recording. The details of such systems would be complex, but new interfaces could help make the exchange of complicated information easier and more accessible.

    I’m optimistic that chatbots can help people understand how their data is being used. I’m especially excited by the potential to use chatbots not just to control commerce, but to empower us to manage our personal data. Privacy-minded people should look for opportunities to make chatbots more than just glorified mechanisms of corporate surveillance. We should strive to instead create tools that will help people understand their data and their capacity to control it in an actionable, friendly way.

    Further Reading on Chatbots + UX

  • Design Matters: 2016 Design in Tech Report

    For the past two years John Maeda (whose previous roles include Professor at the MIT Media Lab and President of the Rhode Island School of Design) has issued a Design In Tech Report. This influential analysis, which Maeda presents at SXSW and has also been picked up by outlets like Wired, has helped Silicon Valley understand how design is valuable to companies and their customers. It is situated in the context of venture capital, as Maeda is currently Design Partner at VC firm Kleiner Perkins Caufield and Byers. However, his attention to industry trends backed with carefully-reported figures has implications for the broader world of tech – including nonprofit and open-source efforts.

    Design as a Force for Good

    My three biggest take-aways from the 2016 Design in Tech Report (pdf) are that market trends of the past year prove that design:

    • is about more than beauty,
    • has deep ethical implications, and
    • can be a force for economic inclusion.

    More than beauty

    As a trained designer, I have to grit my teeth to report that "design is more than beauty". The heart of my practice is not visual design but user experience flows, so this take-away seems painfully obvious. Design is about making things that work well for real people. However, it's helpful for me to remember just how much patient explanation can be necessary to communicate the broad range of activities encompassed by design to people who aren't familiar with it – from research, to information architecture, to organizational design and beyond. In that sense, it's gratifying that market trends are bearing out the value of design beyond the simple creation of pretty pictures. How do we spread this understanding further?

    Ethical implications

    The ethical implications of design are particularly important to consider in the context of the current technology industry, which is heavy on VC-fueled startups. That community is interested in design because design shapes behavior, and is effective at driving "conversions", or sales. From addictive products that encourage spending – as documented in Addiction by Design: Machine Gambling in Las Vegas – to Dark Patterns that trick users out of unsubscribing from services, irresponsible design harms people. How do we harness this deep understanding of user motivation and behavior for good rather than just for profit?

    Economic inclusion

    Happily, the report also highlights instances of design being used in ways that benefit society. For example, the UK's Government Digital Service saved £1.7 billion ($2.5 billion) by re-designing digital services and making them more accessible to all people. Design is an amplifier of values, and can build systems that are more equal or more unequal. The report showcases market successes that design for diverse audiences, such as gay social networks and gender-neutral children's toys. How can we help the security and privacy communities meet the needs of a broader group of people?

    From Design for Trust to Design for Privacy

    The 2016 Design in Tech Report touches on cybersecurity startups delivering network monitoring solutions, but the more encouraging point is that it positions them within a bigger framework of designing for trust. Considering everything from AI interfaces to startups in the so-called sharing economy, the 2016 report states that "design's fundamental impact rests in the ability to engender trust".

    I am optimistic that the 2017 Design in Tech Report will address privacy in a more explicit way. The Apple vs FBI case is a turning point for how companies handle customers' data, and given Apple's strong heritage as a design-driven company, other design-driven companies are taking note. I hope that this time next year we will be examining how design leaders within Silicon Valley's VC culture are shifting their focus away from customer data as a commodity and toward user privacy as a core value proposition.

  • When User News Is Bad News: Tactical Advice On User Feedback

    When you're putting your heart and soul into designing, building, or improving a piece of software, tuning in to feedback from users can sometimes get you down.

    Imagine waking up one morning and finding your project is being mentioned on Twitter in a slew of messages like these:

    Whether they're saying it "kinda sucks", "officially sucks", or that it is a full-on #McFailure, it's easy to find tweets like this for just about any app. With few details and lots of negative attitude, this sort of message is always discouraging to the team working on the app in question.

    That doesn't mean it's reasonable to just ignore negative feedback, however. Unless you already know why the user is frustrated and a fix for their problem is just waiting to be pushed in the next release, user feedback is always an opportunity to learn something.

    A positive spin

    First, it's important to have the right mindset when approaching negative user feedback. No matter how strongly you identify with the software project in question, the comments aren't about you as a person, so be mindful of your emotional reaction when you're reading it. It's very common to feel angry, frustrated, and even hurt. Try to find a balance between taking the feedback so seriously that it upsets you, and discounting it to the point that you lose motivation to understand what's going on.

    If your project is new or under-resourced (as so many are), remember that these kinds of messages are a sign that people care about what you're doing. They are trying to use your software. It excites them enough that they're even trying to tell you when it disappoints them. They may not have the vocabulary, time, design understanding, or technical acumen to explain what they dislike, but they want you to hear them so you have a chance to make things better.

    Tactical advice

    So, how do you actually go about turning negative feedback into useful insight? Here's a set of questions and ideas to get you started.

    Provide a forum

    Do users have a place to go when they want to kvetch or kvell in the first place? Feedback channels don't have to be elaborate or high-maintenance; while it's great if your team can support online user forums localized in 23 languages, a simple email address that is prominently advertised on your website is an effective place to start.

    Pro tip: don't just tell users to file a bug on GitHub. That's a great way to get highly-technical users to submit feedback, but a lousy way to get insights from just about anyone else. Social-media sites like Twitter and Facebook can be an effective way to connect with users, although it's important to try to help users understand the risks that such platforms pose. (For example, it's a bad idea to post publicly on Twitter about using VPN software from a country where such programs are prohibited.)

    Muster the troops

    Making sense of user feedback is critically important to software projects, because your users' opinions will ultimately determine the success of your software. (If they don't like it, eventually they'll find a better alternative; if they do like it, they'll tell their friends and your userbase will grow.) Monitoring and analyzing user feedback is therefore worth investing non-trivial amounts of time and energy.

    Do you have one or more team members who are responsible for this work? Do those people actually have time to engage with the data that is coming in? Do they have time during team meetings to share their findings? Does the team use this information to help it prioritize near- and long-term work?

    Gather a baseline

    If you have a software project that people outside the team are using, it's past time to start gathering data on what those users are saying about it. Even if your userbase is small and the feedback is mostly positive, an accurate picture today will provide the basis for future evaluation.

    What are the top known issues? Are you trying to keep track of individual pieces of feedback, or have you leveled up to clustering pieces into related groups? Are there clusters of complaints that can't be explained by any known bug? How many users are providing feedback: do you think it's ten different people saying similar things, or one person trying to reach out ten different times with the same issue?

    In addition to your official feedback channel, don't forget to monitor your project's social-media accounts on a regular basis. Even if you don't think of it as a way for users to send feedback, they almost certainly do. And there's nothing more demoralizing to a user than landing on a Facebook page and finding that there are complaints that have been ignored for months (or years!) on end.

    Track comments over time

    You don't have to instrument your app or gather analytics to get longitudinal data. Checking in on your feedback channels on a regular basis (daily, weekly, or monthly, depending on the size of your userbase and the volume of feedback) will help you build a picture of how you're doing, and provide a "user heartbeat" to help motivate and direct the team's efforts.

    Are you monitoring how many new complaints you get in each thematic cluster per month? Did your latest update cause complaints to go down, or have they remained constant? Are your development priorities successfully addressing top complaints over time?
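    This kind of bookkeeping doesn't need special tooling. As a minimal sketch (the months, cluster names, and counts below are invented for illustration), a simple tally per month and theme is enough to spot trends:

    ```python
    from collections import Counter

    # Hypothetical feedback log: each entry is (month, thematic cluster).
    # In practice these tags might come from hand-labeling each piece of
    # feedback, or from a simple keyword heuristic.
    feedback = [
        ("2016-03", "crashes"),
        ("2016-03", "sync"),
        ("2016-03", "crashes"),
        ("2016-04", "crashes"),
        ("2016-04", "onboarding"),
    ]

    # Count complaints per (month, cluster) so trends are visible at a glance.
    counts = Counter(feedback)

    # Did the latest update reduce crash complaints?
    print(counts[("2016-03", "crashes")])  # 2
    print(counts[("2016-04", "crashes")])  # 1
    ```

    Even a spreadsheet with the same columns works; the point is that the tally exists and gets reviewed on a regular cadence.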

    Engage for information & community

    The tweets quoted above are an extreme example, but many users provide feedback that isn't exactly useful. After all, how can you debug a problem if you don't even know what version of the software the user is running? Depending on the feedback channel through which the user connected, you may be able to reach out and ask for more information.

    When a user submits feedback (even the inactionable kind), do they get a response thanking them for taking the time to share their thoughts? Does your project website have a page describing what information makes a good bug report?

    Just a brief automated reply can make your users feel like you care about them, even if you don't have the time to write them personally. "Thank you so much for your feedback, it means a lot to us. We will do our best to read it soon. We are busy trying to ship the next version of the software, so probably can't reply in detail. But if you're willing to go the extra mile for us, would you consider sharing more detailed feedback using these instructions?"

    Invite critics to a beta-testing group

    This may be beyond the organizational means of many small projects, but: have you considered starting a beta-testing group? Some distribution platforms like Google Play have features to make this easier. Beta-testing can help you prevent huge waves of negative feedback by providing more eyes on software updates, and can serve as a great way to channel the enthusiasm of some of your biggest user critics. We work with a number of organizations that connect with users interested in privacy-preserving software, so let us know if you need help connecting with more potential users.

    It's all about the empathy

    At the end of the day, we all care about how our users feel. We may get frustrated when they chew us out for honest mistakes, or berate us for problems that are ultimately out of our control. But these moments are always an opportunity to put ourselves in their shoes, try to see our project through their eyes, and learn our craft anew.

    Do you do a great job channeling your users' feedback into amazing software improvements? Are you struggling and looking for more help than the above ideas provide? Please get in touch.

  • Comfortable UX, Not Just Open APIs

    Simply Secure focuses its collaborative efforts on open-source, privacy-preserving software projects. In my conversations with designers, developers, and end users, I'm often struck by a divergence in their understanding of what "openness" means in software. For example, last December during a user study, participants reading app store descriptions of secure messaging apps consistently thought that "open source" meant that their messages were public.

    The distinction between "source code" and "content generated in apps" isn't always clear to a mass audience, and this confusion has implications for privacy preservation. Many people know that their Facebook login credentials give them access to other services and apps, as "Login With Facebook" is common on everything from babysitter finder Urban Sitter to dating app Tinder. However, most people don't understand that connecting Facebook functionality to other apps often allows their personal data to flow between services as well.

    Provocative services like Swipebuster, whose creator says they intended to build awareness about privacy issues, illustrate the confusion many people experience about what "open" and "public" mean in the context of the apps they use.

    Is Tinder data public?

    Swipebuster allows you to search for Tinder profiles based on certain criteria. Armed only with a first name, age, approximate location, and $5, you can get a list of matching Tinder users, complete with their photos, last logins, and whether they're looking for men or women.

    The anonymous creator explains, "There is too much data about people that people themselves don't know is available," Vanity Fair reports. "Not only are people oversharing and putting out a lot of information about themselves, but companies are also not doing enough to let people know they're doing it."

    According to Vanity Fair, Tinder responded that "searchable information on the Web site is public information that Tinder users have on their profiles. If you want to see who's on Tinder we recommend saving your money and downloading the app for free."

    Paying money to not deal with Tinder

    Let's put aside for a moment the potential moral or social implications of trying to access Tinder data for purposes other than finding a date, and explore why Swipebuster is so disturbing. I believe that it resonates as an example of the cultural divergence around openness because it occupies a middle ground between two undesirable options. Engaging with Tinder directly as a user – installing the app to see if someone else is using it – can be undesirable if you don't want others to know you're browsing Tinder data (e.g., because you're already in a romantic relationship). The apparent openness of the Tinder APIs presents a nice alternative if you can write code to query them, but this path requires a level of technical knowledge that makes it all but impossible for most people. $5 seems like a bargain when compared to learning Python from scratch or making your neighbors think you're cheating on your spouse.

    Swipebuster is what I would call a comfortable user experience. A comfortable UX opens the possibility of accessing data to a broad spectrum of users. An open API may make the underlying app possible, but data alone isn't enough for users to engage. In the case of Swipebuster, filling out a form with a credit card is an easy, routine experience for someone trying to get information. A $5 price adds another barrier, but it is a familiar interaction.

    This ease contrasts with Tinder users' perceptions. Many consider the data they share with the dating service to be confidential on some level. The existence of Swipebuster makes many Tinder users feel shocked and vulnerable.


    As The Guardian's Alex Hern writes, "Even if it might seem obvious that Tinder, a site which works by showing name, gender, age and location to strangers, doesn't consider that information secret, it's a very different matter to be confronted with a searchable database of that information. Your home is not secret, for example – people see you come and go all the time – but that doesn't mean posting your address online is advisable."

    What happens in Vegas does not stay in Vegas

    The Las Vegas tourist bureau has lured visitors with the promise that "what happens in Vegas stays in Vegas," encouraging people to engage in behavior they wouldn't want to be associated with in other contexts. By extension, people may believe that what happens in an app like Tinder stays in that app. It may be more realistic to assume that everything that happens in an app is ultimately accessible on the open web.

    Along with confusing permissions, badly-communicated or poorly-designed API integration can be another vector of privacy risk. Vice (and Tinder itself, as quoted above) describe Tinder's web-based API as "open" or "public", but when examined more closely, it appears that it is actually what software developers call "undocumented" or "private". This means that Tinder developers probably neither intended the API to be used by third parties nor made particular efforts to prevent such use; Swipebuster is the result of what many would call "reverse engineering". Part of the shock of Swipebuster is that it shows that security through obscurity isn't working. Whatever social contract of mutual accountability might work when a Tinder user encounters someone they know in the app, or when an app's interface limits access to the data contained within, doesn't hold when it's easy to pay for information from a database.

    Building more comfortable experiences

    At Simply Secure, we want better privacy-preserving tools that empower people to protect their data. Although Swipebuster may occupy shaky moral ground, it demonstrates that a good UX will always enable more people to access data than an open API (even in cases where the API is more elegant). Programmatic interfaces aren't enough; to get people truly engaged, we need initiatives that not only open data, but also create truly comfortable experiences for non-expert users.

  • Tradeoffs In Seamlessness: The WhatsApp Update

    I was originally planning to continue the series I started last month, but it's hard to pass up the opportunity to talk about Tuesday's exciting WhatsApp update.

    The new version of the messaging service's apps now offers users a glimpse into a feature that has been quietly rolling out for months: end-to-end encryption. A number of other groups have written about the update's technical strengths and weaknesses – cf. the EFF and ThreatPost, and WhatsApp's own whitepaper – but I'd like to spend a moment on the updated user experience, and lessons that UX designers and developers can take from the decisions WhatsApp has made.

    Users don't have to take any action at all – it just works!

    Always on for everyone

    As long as both you and your messaging partner are running recent versions of the program, WhatsApp's end-to-end encryption is on by default, with no apparent way for the user to turn it off. This means that people don't have to fuss with generating or exchanging cryptographic keys; it happens silently in the background. In fact, users don't have to take any action at all – it just works!

    Fredric Jacobs shares the error message he saw when his messaging partner was running an old version of the software (left) and the menu option to turn on notifications around encryption (right).

    An exercise in tradeoffs

    Of course, this seamlessness doesn't come for free. Security enthusiasts are quick to point out limitations of this approach – in particular, that an application without mandatory key verification can be susceptible to man-in-the-middle attacks. WhatsApp mitigates this limitation in two ways: by providing an optional key verification facility (which the EFF analyzes quite nicely) and by allowing users to opt-in to getting warning messages when their messaging partner's key has changed (i.e., key-continuity detection).

    Screenshot from WhatsApp interface.
    The message WhatsApp displays when the other user's key has changed.

    Noticing notifications

    The current release of WhatsApp does not prompt users to opt-in to encryption-related notifications. The option to do so is a bit hidden in Settings > Account > Security, but given how few options are in that menu, it's reasonable to expect that someone poking around will discover it without too much trouble. I am curious whether WhatsApp did any user research before or during their design of this interface (which is pictured in Fred's tweet above) to make it discoverable and appealing to users who aren't necessarily looking for it. It should at least be easy to find for users who are seeking it out.

    Designers working with security settings often have to be cautious about pushing users too hard to turn on things that won't be useful to them. Time and again, intrusive security notifications have caused users to be frustrated to the point of turning off the security feature or switching to a less secure product. WhatsApp has done a good job on this front; in the absence of an attack, the notification is both rare (it should only occur when the other person has switched phones or uninstalled and reinstalled the app) and discreet once enabled.

    However, I worry that it may be a little too discreet. When we were testing it, we actually managed to miss the inline message (pictured above) at first, even though we were explicitly looking for it! This may be because the chat thread wasn't open when it appeared. Other messages came in after it, pushing it out of the user's focus area, which is centered on the bottom of the screen. It took scrolling back up into the message history to see the inline message. Could an attacker exploit this focus issue with some users by intentionally sending a slew of distracting messages after signing on?

    Screenshot from WhatsApp interface.
    The dialog that appears when you tap the inline notification.

    Furthermore, while the dialog that pops up when you "tap for more info" offers reasonable detail (and a "learn more" option), neither it nor the inline message itself help the user realize that "security code changed" means "your communication may be at risk".

    I understand that the cases where users are being attacked will be far, far outnumbered by the cases where their messaging partner has simply gotten a new phone. And it's important to not overwhelm users with useless messages. At the same time, when the user chose to turn on these security notifications, they gave a strong signal that they were concerned about security. Perhaps the designers assumed that any user who turned this feature on would know what the warning meant? But if that's the case, why use a term ("security code") that is so friendly to security novices, rather than the one that experts are accustomed to ("key")?

    Here's hoping: the importance of research and iteration

    I am so pleased and impressed that WhatsApp has invested the time and energy necessary to release this feature that I am going to assume (hope?) that this is just the first of several iterative improvements to this experience. Once they have a chance to look at how the feature has been received and get more data about how often people are engaging with it – likely through both in-app analytics and user research – they can work to make it even more discoverable and useful to users who want to learn more about their security status.

    Here are some concrete suggestions that I would love to see explored.

    • Are there ways to surface the notification feature to users who haven't opened their account settings? This might itself be a one-time (or once-yearly) notification, ideally some time after the user has completed the initial setup flow so as not to get lost in dialog fatigue. Users could have the option to (1) go directly to their security settings, (2) be reminded to do so later, or (3) dismiss the suggestion permanently.
    • Are there patterns of behavior that might indicate that the user is under attack, and be used to enable the notifications automatically? For example, sudden key changes by a number of the user's contacts, or the user blocking a number of new contacts.
    • Could the inline notification do more to indicate that the change in security code is worthy of investigation? Any solution would need to be coherent in the app's visual design language, of course, but color or iconography could go a long way here.
    • Is there some way to prevent users from missing the inline notification as we did in our experiments? The obvious choice would be to have the notification be a modal dialog the first time it appears for a given messaging partner (in addition to the inline message so the user could easily access it again if they accidentally dismissed it), but there are likely other options as well.
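      The second suggestion above could be prototyped with a very simple heuristic. This is only a sketch under invented thresholds (a 24-hour window, a 25% contact fraction), not anything WhatsApp has implemented:

      ```python
      # If an unusual fraction of a user's contacts change keys within a short
      # window, automatically enable security notifications. The window and
      # threshold below are illustrative guesses, not values from WhatsApp.
      WINDOW_HOURS = 24
      SUSPICIOUS_FRACTION = 0.25

      def should_enable_notifications(key_change_hours_ago, total_contacts):
          """key_change_hours_ago: hours since each contact's last key change."""
          recent = [h for h in key_change_hours_ago if h <= WINDOW_HOURS]
          return len(recent) / total_contacts >= SUSPICIOUS_FRACTION

      # Three of ten contacts changed keys in the last day: suspicious.
      print(should_enable_notifications([1, 5, 20], 10))   # True
      # One key change, weeks ago: ordinary phone-upgrade churn.
      print(should_enable_notifications([500], 10))        # False
      ```

      A real implementation would need to tune these numbers against observed phone-upgrade churn to avoid false alarms, but even a crude version could nudge at-risk users toward the notification setting.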

      The perfect is the enemy of the good

      All apps have room for improvement, and designing for both general users (who currently are unlikely to be subject to a MITM attack) and users at risk (who may be targeted by government-level attackers on a daily basis) is extremely hard to do. WhatsApp and Signal have pushed the envelope of UX design for cryptography to new territories, and have brought end-to-end encrypted messaging to over a billion people. We therefore applaud them for this accomplishment wholeheartedly, and enthusiastically await their future efforts in this realm!

      Are there other design patterns that you have experienced that you think would work well in a context like WhatsApp? Have you tackled similar problems, and would you like to share your efforts with the public? Are you working on a software project and seeking help designing or researching similar experiences? Please get in touch!

  • How UX Excludes or Includes

    Software communicates its values via its user experience (UX) by making some actions easy and others harder. For example, mobile apps can be configured to automatically opt users in to location sharing, and require people to dig through multiple layers of menus to opt out. This design choice reflects the developer's belief that it's ok to collect location data about users without asking their permission. But this is just one example; values are encoded in software in many ways beyond default settings.

    In 2016, Twitter came under fire for UX changes, most notably switching their classic star icon (meaning "favorite") to the heart icon (meaning "like"). Other changes in Twitter's UX send strong messages about what Twitter believes their platform is for.

    The addition of GIFs to the compose window tells users that animated images are an expected, normal part of Twitter. This may come as a surprise to long-time Twitter users who see it primarily as a professional or news-sharing communication platform. (These may be many of the same people who felt that hearts were inappropriate or unprofessional replacements for stars.) Normalizing the use of GIFs is probably a way to encourage new users or to encourage new behaviors by current users.

    However, the implementation choices in the GIF search feature also send a more subtle message about how users are expected to express themselves.

    Screenshot showing a GIF-selection menu.
    Inserting a GIF into a tweet with Twitter's web interface. Screenshot from the Twitter interface as viewed in Chrome on OSX.

    UX Shapes Behavior

    Looking at the options presented, we see that the top choices are all emotions. This shapes user behavior by encouraging them to express strong agreement or dissent. The message that "Twitter is for arguing on the internet" is encoded into the UX. Of the eight choices shown, four are positive (Agree, Applause, Awww, and Dance) and four are negative (Deal With It, Do Not Want, Ewww, and Eye Roll). The latter options tell users that they should expect to not only see things they do not want to on the platform, but to also share things that others don't want to see. If someone questions your tweet, the UX suggests adding a "Deal With It" GIF as a response and moving on.

    Twitter partnered with Giphy and Riffsy to provide the GIFs, which focus on pop culture in the United States. It's not clear if or how either the selections of illustrated emotions or the default GIFs are localized for a global audience. The first image illustrating applause is a take from The Lord of the Rings fantasy movie showing white actors dressed in costumes similar to 19th-century British clothes. I have watched that movie many times, but it strikes me as an odd choice to illustrate "applause." What does it feel like to not know who those people are? How does that impact diverse users' comfort sharing that GIF? Any GIF? Using Twitter in general?

    More fundamentally, what messages do the choice of images and titles for the GIFs send? How do they set the tone for conversation on Twitter? How do they welcome or exclude participation by different groups?

    Communicating with graphics opens the door to cross-cultural miscommunication in apps beyond Twitter. One example is the choice of words to describe emoji in Slack. I was surprised that the fist icon, which I had used for years on my iPhone in text and WhatsApp messages, is labeled "facepunch" in Slack. I thought of it as a fist-bump, or a modern high five. So every time I thought I was sending a supportive "right on, you go, well done" message, the recipient may have thought I was punching them in the face.

    Slack app emoji of face punch
    Facepunch sounds less supportive than fist-bump.

    Designing for Inclusion, From the Beginning

    I live and work in San Francisco and am familiar with many of the same Gen-X cultural memes that the people working at Twitter, Facebook, or Google are steeped in. I want to make tools approachable and accessible to a global audience, but struggle with ways to make my own biases and cultural assumptions visible so they can be questioned.

    UX designers have an opportunity to design for inclusion from the beginning and to challenge their teams to make software welcoming to all kinds of people. Even just thinking about the hands that you show in your product can be informative. Facebook's repository of diverse hand images is just one visual way to remind your team and the world who you're designing for (though I'd still like to see some painted nails, jewelry, and tattoos in the bunch).

    Rizwan Javid created a set of multicultural names for InVision's Craft plugins (which integrate with Photoshop and Sketch) to help designers make prototypes that reflect a global userbase. Using this list can help address some practical considerations: can your UI handle a name like "Juan Carlos Gutiérrez De La Paz" without wrapping when displayed under a profile photo? But it also sends an important message to the team if they know they are designing for Shamika Thompson and Le Ly, instead of just the John Does and Jane Smiths of the world. The small step of including diverse names in an early interface can build empathy and send a message to designers that they are creating something for a global audience.
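    A lightweight way to act on this is to run prototype checks against such a name list. This is a sketch only: the 30-character budget is a made-up stand-in for whatever your actual layout fits, and the names are drawn from the examples above.

    ```python
    # Pre-flight check: which test names exceed the UI's character budget
    # and would wrap or truncate under a profile photo? MAX_CHARS is an
    # illustrative assumption, not a value from InVision's Craft plugins.
    MAX_CHARS = 30

    test_names = [
        "Juan Carlos Gutiérrez De La Paz",
        "Shamika Thompson",
        "Le Ly",
    ]

    too_long = [name for name in test_names if len(name) > MAX_CHARS]
    print(too_long)  # ['Juan Carlos Gutiérrez De La Paz']
    ```

    A character count is only a rough proxy (proportional fonts render "W" wider than "i"), but even this crude check surfaces layout assumptions early.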

    Technology Builders

    Beyond designing for a global audience, look for ways to celebrate the leadership of people of color building technology. Slack designer and developer Diógenes Brito's choice to use a brown hand to unveil a feature in Slack reflects a world where the people creating websites and integrating APIs have brown hands.

    Stephanie Morillo and Christina Morillo have created a set of stock photography showing women of color tech leaders, developers, and designers — and released it under a Creative Commons license. So next time you need an illustration of a developer or a designer, use them.

    Photos of a woman of color standing in front of a whiteboard and of a woman of color using a laptop.
    Sample UX Designer and Developer images from WOC Tech Chat, shared under a CC BY 2.0 license.

    Download Javid's text files of names from our GitHub repository (added there by permission), and tell us about your favorite resources on designing for diversity!

  • Contracting Creatives, in Brief

    Your team has reached the stage where you need to hire a professional designer. Maybe you want to finally get a great-looking logo, make a website that doesn't look like it was designed in 1996, or create a really compelling video for your Kickstarter campaign. In any case, you know that it might be tricky to express what you're looking for – especially if you come from a technical background and aren't used to dealing with folks who work in pixels.

    If you're hiring a designer, you need to write a creative brief. Briefs can take many forms, but the basic idea is to communicate what the project is and give some indication of how the resulting work should feel – without getting too detailed or prescriptive.

    Ask an expert: a conversation with Anne Trausch

    Ame sat down this week with Anne Trausch, a veteran content strategist, writer, and designer currently at IDEO San Francisco, to talk about the role a brief plays in a creative process.

    Q: What is the essence of a creative brief?

    A creative brief clarifies the restrictions and the opportunity of a particular challenge. It's a call to arms and also a map of where to focus efforts. Briefs answer why the project matters, and what the audience will get out of it.

    They should be short — ideally a page in length — with a simple statement of the problem, objective, audience, main message, reasons to believe the message, and tone.

    Briefs put a box around a problem and say, "Go wild within this box."
    – Anne Trausch

    Q: What words of wisdom do you have about helping organizations communicate their spirit and vision, and in bridging the divide with creative professionals?

    For a creative team to succeed it's vital to clarify a strategic direction and foster agreements around any limits to the approach. Briefs outline what criteria a good solution will have to adhere to. Understanding the restrictions of a project can be liberating. Briefs put a box around a problem and say, "Go wild within this box."

    The best solutions are often unfamiliar, bold, and new. In those cases, a team could be anxious trusting something seemingly unproven. A good brief reminds everyone of the strategic intent and can help keep conversations and discussions focused on how well the proposed solution addresses the challenge, and not get bogged down in how the team feels about it.

    I have never been forced to accept compromises, but I have willingly accepted constraints.
    – Charles Eames

    To review: briefs in practice

    Building on Anne's wisdom, here are some practical tips for people new to writing creative briefs.

    Keep it brief. When in doubt, focus on the basics.

    • What problem are you trying to solve with this creative endeavor?
    • What outcomes do you want to see as a result?
    • Who is your audience for this project?
    • What message do you want to convey?
    • What reasons does someone have to believe your message?
    • How do you want the project to feel – what's the emotion, the spirit, or the values that you want to convey?

      Remember, successful briefs not only convey your hopes and dreams for a project, but also provide some creative breathing room. After all, you're not hiring a printer to manufacture an artifact you've already fully envisioned, but a creative professional whose talent will, if all goes well, bring something new and beautiful into the world.

      Once you have your brief in hand, you can work to find someone who will use it as a starting place for a conversation on what can be achieved on your budget and timeline. One tip for finding a designer is to look at the work they've already done, and try to find someone whose past projects have a similar scope and spirit to what you're looking to accomplish. Personal referrals can be great, but only if the person's experience is also in line with the work you want to have done.

      If you're an open-source team looking to collaborate on a design project for the first time and need a hand writing a brief or finding a designer, get in touch!

      Learn about creative briefs from the perspective of creative professionals themselves in Briefly from Bassett & Partners.

  • Notes from the Internet Freedom Festival

    I really enjoyed my time at the Internet Freedom Festival in Valencia, Spain. I was inspired and humbled to meet so many talented people as part of a global event about internet freedom. From powerful conversations about privilege to UX design jam sessions, it was a great week. With more than 600 people registered and 160+ sessions, there was more terrific discussion than I could be part of, but here are some themes that stuck with me.

    Designing With and Designing For

    IFF is the most global event I've attended, with people from 43+ countries. It was an eye-opening exposure to what the global internet is like in 2016, and how much work is needed to make it better. I'm a practitioner of Human-Centered Design who believes that empathy should be the foundation for building technology, and some of my most memorable discussions were about privilege, power, and my own biases. From observations on white-knighting, the minimization of lived experiences, and more, I came away with a renewed appreciation for the distinction between designing with people by involving them as partners versus designing for people in ways that are disrespectful and take away agency. I'm looking forward to listening and learning more. Thank you to the people who challenged me and encouraged me to grow.

    Data, Algorithms, & You

    As an example of this, Tara Adiseshan's Designing Participatory Algorithmic Decision-Making Processes challenged me to think more broadly about structural discrimination, and how dangerous the myth of "neutral data" is in an era when algorithms determine so much of our experiences – pricing, credit scores, and more. There is a powerful design opportunity to give people agency to understand what data about them is collected and how to participate in the outcomes. Check out the Algorithm Club, a Twitter book club, for more.

    Continuing the theme of data and algorithms, the session You and Your Data from Sarah Gold and Ian Hutchinson of Projects By If turned out to be the boldest empathy exercise I encountered at IFF, encouraging empathy for advertisers and people who make their money from advertising revenue. This participatory discussion imagined the internet after adblockers. I'm interested to see their examples of data permissions work as a case study of how policy, user experience, and data agency intersect.

    Building Better UX

    Huge thanks to everyone who participated in the UI and Usability Jam, a workshop connecting a community of people working to improve the design and usability of internet freedom tools. If you haven't seen these Resources for UX Self-Education, take a look. My biggest inspiration came from appreciating how the well-known user onboarding challenge of empty states – or the initial state of the interface before there's activity to display – is an opportunity for techno-activists to attract people to their movement. Triggered by Stingray Mapping's use of location information in compelling ways, I got excited about how "demo mode," or using an app solely for the purpose of showing features, is an opportunity to reinforce the values that lead people to download the app in the first place. Special thanks to Bridget Sheerin for pushing my thinking about radars and other dynamic interfaces for communicating location tracking.

    I was also glad to see more sessions about the craft of UX. Thanks to our Fellow Gus Andrews for a great session focused on tools and feedback, and to Sajolida for the Tor configuration in Tails feedback session.

    Finally, inspired by An Xiao Mina's Multilingual Design session, I've been thinking more about majority and minority languages, and what a more linguistically inclusive internet would be like. As a first step, I'm working to follow more people on Twitter who Tweet in languages other than English.

    I came away from the Festival energized by the vibrant and passionate community and fired up to make the internet a better place.

  • Learning Lessons Where We Find Them: Analyzing Facebook's Privacy Checkup, Part 1

    This is the first in a short series of posts looking at Facebook's "Privacy Checkup" feature. This installment examines why even privacy advocates who avoid social-media sites should take time to understand it and related user experiences. The next installment will go into depth critiquing the feature itself, taking lessons from the user experience that are useful to any designer of privacy or security-related software.

    As a reader of the Simply Secure blog, chances are good that you spend a fair amount of time thinking about privacy and data security. If you use programs like Tor Browser, Privacy Badger, or Signal, you might express your personal data-privacy goals with statements like "I don't want anyone to be able to follow what I do online," "I want to be able to control my online data and metadata as much as possible," or "I don't want any companies or governments spying on me."

    While people who don't use these types of programs may have spent fewer hours searching out vetted open-source software, that doesn't mean that they don't care about their data privacy. The Pew Research Center has found that "91% of American adults say that consumers have lost control over how personal information is collected and used by companies," and that "the public has little confidence in the security of their everyday communications."

    One thing that Ame heard during her recent study in New York City is that participants were concerned that they were being surveilled by the police through Facebook. Although we need no external validation or justification of their concerns to find meaning in their lived experience, there is tremendous evidence that their fears are justified – as reported by The Verge, The Guardian, and other outlets.

    The mix of social media and conspiracy statutes creates a dragnet that can bring almost anybody in.
    – Andrew Laufer, as quoted by The Verge

    Of course, some privacy advocates would tell people with these concerns that privacy is anathema to sites like Facebook, because their revenue model is largely based on gathering data about users and selling targeted ads. And, depending on your threat model and the security practices of the social-media site in question, it's true that sharing your data with such sites can put your security at risk – both online and in the physical world – especially if you are an activist under threat from state actors.

    But there is no single definition of what is "private enough". Everyone has different threat models. And as a user advocate with a human-centered-design ethos, I argue that it is not reasonable to simply tell a billion or more people that they should abandon the platforms they use to communicate every day with friends and relatives – that they use to buy used clothing for their children, get inspiration for their creative endeavors, and hunt for job opportunities. The value that users get from these sites is too high; a message to leave them based on amorphous privacy threats would fall on deaf ears.

    We should instead make sure that users have the tools they need to manage who can see their data, and work to understand the ways that sites share data outside of the user-visible platform. It sounds like the New York City police are not able to obtain warrants that provide them access to large swaths of the local population's accounts (although I welcome correction on this point). This means that helping users control who sees their data shared through the platform – and an increased focus on helping individuals detect phony friend requests – could go a long way toward protecting the participants Ame talked to who were concerned about unwarranted police surveillance.

    This is why features like Facebook's "Privacy Checkup" hold great potential. The best way to understand how well it works for users in practice would be to look at Facebook's usage statistics (How many users complete the checkup? What changes do they implement as part of the process? Do they engage more with their privacy settings than users who don't interact with the feature? Do their sharing behaviors change after completing the checkup?) or perform a user study (Do users understand what is going on? Are they happy with the results of the checkup when they're complete? What about six months later? Do they feel that their data is safer? Do they work with friends or family to help them protect their data?). But, we're going to be scrappy and do an armchair expert review – the kind of analysis that is easy to perform on any piece of software after using it for a short amount of time. This kind of review is most useful for identifying low-hanging fruit – i.e., obvious things that may confuse or frustrate users.

    Stay tuned for the next post in this series, where we'll start taking the feature apart and identifying lessons that are useful to any designer of privacy or security-related software.

    Screenshot of Scout's Facebook account, taken today.


  • Be a self-starter: UX educational resources

    This week Ame is in Valencia at the Internet Freedom Festival, where she’s talking about making great user experiences with software developers and activists from around the world.

    She is joined by a number of volunteer UX professionals, including veteran UX researcher Susan Farrell. Among other activities, they will be holding “jam sessions”, where software teams can get scrappy advice on how to improve their UX. Susan put together the following document to help session participants, and we’d like to share it here so the broader community can benefit. She plans to post it on GitHub in the near future, and we will work to share updates as she makes them.

    User Experience Self-Education Resources

    An annotated list by Susan Farrell

    Last updated: 1 March 2016
    Status: Beta / draft
    Suggestions, etc.:
    Usage Intent: Please ask, until it's under version control at GitHub. This is just a draft.
    Intended Audience: Anyone who wants to learn how to make things easier to use, through better design and research.

    What is User Experience? (UX)

    There are a lot of official and unofficial definitions worth reading. An applied version:

    • User experience is concerned with what happens when someone uses something (tools, designs, systems, toys, amusement parks, etc.) and their experience of that.
    • People experience interacting with businesses and organizations in various ways for various reasons (find information, shop, research, call, buy, receive, set up, repair, return, email, branding, stores, etc.).
    • A person's user experience with something can be good, bad, ugly, brilliant, fun, embarrassing, and so on.
    • Usability can, and should, be evaluated. Occasionally it should be measured, but quick, qualitative testing with users helps you improve outcomes faster, especially while designing.
    • The profession of UX is a big set of people with many and various combinations of UX skills.
    • They are all trying to improve people's lives by making software, information, products, services, and other designed experiences, better, easier to use, and more delightful.

    Image of a unicorn composed of different elements of the user-experience process
    The skills and responsibilities of an effective team. It’s ideal to have more than one UX person to ensure important sub-specialties are covered. Originally published in Building an enterprise UX team by Rachel Daniel (also on LinkedIn), UX Director at MaxPoint. Used by permission.

    What's a good user experience? Well, that depends on things like:

    • Who the users are
    • What the purpose of the system is
    • What people expect
    • How well it meets their needs
    • How they feel about it
    • And how it’s experienced, sustained, and maintained over time


    Why improving human-computer interaction matters

    • In many parts of the world, every business is suddenly in the software business
    • Everyone is a potential computer user
    • UX methods solve important problems for people and organizations
    • UX research finds and meets real, human needs
    • People want to be delighted. Functionality is usually not enough.
    • You can decide with data (stop arguing)
    • You can create more-appropriate products and services by collaborating with users
    • You can find and mitigate risks by testing designs early and often
    • You can measure how much more usable your products and services are over time
    • You can compare your offerings with competitors’
    • You can streamline: saving time, effort, and money

    How to Get Up to Speed

    User Experience Careers (free report and article) Read this to see if it sounds right for you.

    Start doing these things one at a time until you’re doing all of them:

    1. Carry a notebook
    2. Collect screenshots and photos of designs that work well and poorly
    3. Pay attention to the details of every interface you use
    4. Aggressively teach yourself by reading books
    5. Find a mentor and a community
    6. Get (and later provide) an internship or apprentice position
    7. Take some courses online and also hands-on workshops
    8. Learn to describe interfaces and interactions as precisely as possible
    9. Embrace usability testing so you can learn what works and check designs as you build
    10. Start designing
    11. (Don't get distracted by tools; use what you can get that works)
    12. Paper prototyping
    13. Interactive prototyping
    14. Practice

    Where to Start: Reading

    • Principles of visual design
    • HCI university textbook(s) that cover both design and ergonomics / human factors
    • Interaction design
    • Accessibility
    • Research methods (overview)
    • Information architecture
    • How to analyze data

    You are what you read. Get serious by hitting the books. There's no faster or better way to educate yourself. Snack a little with blogs, talks and workshops, sure; but the books are where the foundational learning happens. Everyone in UX has read these, will read these, or has them on hand for reference. See also other books these authors write. Find more in their bibliographies. Go to CHI, UXPA, and other conferences where new design research papers are presented. (See UX Professional Organizations, below.)


    Great books on interface design are published infrequently. The old textbooks have a lot of good in them still. Humans hardly ever change, and usability problems seem perennial.

  • Reaching For The Masses: Protecting Privacy Through Better Software

    Many regular readers of our blog have already drunk the metaphorical Kool-Aid. You know that a good user experience is critical to an app's success; moreover, you know that when a piece of software seeks to preserve its users' privacy, a poor UX can have disastrous results.

    But working in a community of passionate individuals – whether it's as a designer, a cryptographer, or an internet-freedom activist – can make it easy to forget that the majority of the human race isn't aware of your favorite issues. It's easy to lose sight of the fact that most people don't spend their days thinking about their relationship to software, or how their software handles their data. The recent news about Apple and the FBI has brought many of these issues to the forefront, but it's hard for people on the outside to sort through the hype to understand what's really going on.

    Although our main focus at Simply Secure is on helping UX professionals and software developers learn, connect, and grow in their efforts to make great experiences for their users, we also try to help other communities understand the space we work in. To that end, I recently penned "Protecting Data Privacy With User-Friendly Software" for the Council on Foreign Relations series of "Cyber Briefs". The CFR positions itself as "a resource for its members, government officials, business executives, journalists, educators and students, civic and religious leaders, and other interested citizens" – many of whom aren't familiar with the difference between symmetric and asymmetric crypto, or between UI and UX.

    Policymakers in the United States and other countries should recognize that anything less than intact cryptography puts all users at risk. Developers cannot build software that allows law enforcement to access encrypted communications but prevents malicious actors from exploiting that access. Cryptography cannot distinguish good people from bad, so a backdoor for one is a backdoor for all.
    The focus of too many projects has long been on users who resemble the developers themselves. It is time to professionalize the practice of open-source development, recruit designers and usability researchers to the cause, and take a human-centered approach to software design. In particular, project leaders should make the development process more accessible to new participants by including explicit instructions to user-experience experts in their documentation.

    You can read the full brief here.

  • Features – Like Backdoors – Are Forever

    The news this week has been full of stories about Apple's resistance to a court order demanding they build a custom backdoor to a phone used by one of the San Bernardino suspects.

    While I will leave deep analysis of the legal situation to experts of that domain, I believe that this instance holds valuable lessons for all software teams. One lesson in particular helps us understand why the creation of such a backdoor would inevitably become dangerous for innocent users.

    Image of colorful doors
    Colorful doors in the UK, by Paul McIlroy under CC BY-SA 2.0.

    People love useful software

    Put simply: once a piece of useful software is created, its users won't want to give it up.

    This is a phenomenon that is well-understood by many experienced software teams. It's part of why long-lived programs like Microsoft Word have so many features. There are strong incentives to add new functionality: niche users request them, salespeople see them as a competitive advantage, and developers get to work on building fun new things instead of just maintaining someone else's code. The incentives to remove functionality – making code simpler and faster, making the commonly-used tasks easier to find and use – are far outweighed by the pain that users experience when a beloved feature is taken away.

    This is one reason that a human-centered design process is helpful. Working first to understand users' needs allows a team to start by developing a simple, well-targeted piece of software, rather than throwing a hodgepodge of features against the wall to see what sticks. Once a feature has hit the wall, chances are there are some users somewhere who see it as the product's core advantage – and would be sorely disappointed if it was ever taken away.

    The moral is thus: always assume that people will use software more than you expect, and become attached to it in ways that you can't foresee. Don't ship software that you don't want to see used in new, creative, expansive ways.

    Backdoors: popular to a fault

    We can view a backdoor that circumvents the iPhone's security measures as a software feature just like any other. If an entity like the FBI was able to get privileged access to an iPhone for the San Bernardino case, it is safe to assume that they and other law enforcement entities would want to do so again for future cases – they wouldn't want to give such a useful feature up.

    As the demand increased, it would be hard to continue treating the backdoor software with tremendous care. More people at Apple would have to be given access to it to satisfy demand, or perhaps Apple would share the software with the law-enforcement agencies so they could take on the burden of fulfilling access requests directly.

    As more people gained access to the software, the probability of malicious actors also gaining access would go up. It's hard to keep something that lots of people use every day secret. Given how much juicy data the backdoor could ultimately give access to, we have to assume that it would only be a matter of time before the backdoor was stolen and released into the wild.

    Backdoors, like any software feature, will always become popular. If you don't want lots of people using them in new and unexpected ways, it's better to just not create them in the first place.

  • Awkward! QR Scanning + LinkedIn Spam

    Messaging with friends and colleagues is rewarding – but sharing contact information is awkward. Many people want to preserve their privacy by carefully controlling who gets their contact information, and choose not to broadcast their email address or phone number via a public Facebook or Twitter profile. Instead, they choose to strategically share their contact info.

    It's awkward to navigate the social and UX challenges in this sharing. Looking at how WeChat and LinkedIn handle this problem exposes two different kinds of awkwardness: mechanics of sharing and social agreement about what permissions you get as a result.

    WeChat: Reciprocity and Leaving People Hanging

    Chinese messaging app WeChat has grown to 650 million monthly users. Although posts may be censored, it's a fixture of the Chinese mobile landscape. During 2016's Lunar New Year, WeChat handled 8 billion "red envelopes" of New Year's money through its payment platform. Christina Xu's Am I Scanning You, or Are You Scanning Me shares the social nuances of scanning a QR code off someone else's phone to exchange contact information. Her writing provides cultural context; for example, URLs are only slightly more human-readable than a QR code for most Chinese people. Her discussion of Chinese norms of courtesy and reciprocity includes descriptions of the discomfort that an inexperienced QR-code scanner causes, requiring their scanning partner to "hold their phone out steadily for awkward, uncertain seconds, as if waiting slightly too long for a high five."

    Image of users scanning QR codes.
    Scanning a QR Code to share contact info in WeChat. Photo by An Xiao Mina, used by permission.

    With WeChat, the physical mechanics of contact sharing may be awkward or difficult, but it's clear what you have permission to do. After being scanned, you still have a graceful way to not complete the friend request if you choose. However, once the request is accepted, you can exchange messages with them on the platform, and optionally share Moments with them.

    LinkedIn: Spam and Dark Patterns

    Last fall, designer Frank Chimero proposed that any New Yorker cartoon could be captioned with "Hi, I'd like to add you to my professional network on LinkedIn." These bland and unthreatening words, which the professional networking site has made familiar to many people around the world, have achieved internet memedom.

    However, these messages are not always desirable, and a specific set of UX decisions has caused LinkedIn to become synonymous with spam. Dan Schlosser describes LinkedIn's Dark Patterns and how they trick people into inviting their contacts to connect on the social network.

    LinkedIn iPhone app screenshot
    LinkedIn interrupts users' workflow to request access to their contacts.

    In contrast to the up-front awkwardness of scanning a QR code, sharing contact information via LinkedIn is downright seamless. The LinkedIn iPhone app screenshot shown above makes "continue," which gives ongoing access to your contacts, prominent – and the "x" to dismiss the request without granting access subtle. Because the app frequently requests access to your contacts, invitations to connect can be unknowingly sent to everyone in your address book, including those you don't consider professional contacts, such as someone you texted with to buy a used sofa.

    With LinkedIn, the social awkwardness around contact sharing comes after the request has been sent. The fact that LinkedIn's aggressive requests reached meme status indicates that these requests are often unintentional. Furthermore, there's no social agreement on how to interact once the LinkedIn connection is made. Message only within the app? Send emails? Endorse for skills? It's not clear what accepting the request means, because it's not clear what the invitation-sender intended.

    Designing to Share Contacts and Preserve Privacy

    Deliberately increasing the awkwardness of sharing contacts would decrease usability and distract users from their primary communication goals. However, "friction-free" isn't always good; designers should safeguard the intentionality of sharing contacts by making it explicit and noticeable. Instead of happening seamlessly behind the scenes, contact sharing should be something people intentionally and explicitly opt into. I look forward to more experiences like WeChat, where the on-screen UX and social agreements work together – even if they require a little QR awkwardness.

  • Video Roundup

    It’s always great to attend security and privacy conferences in person. But in cases where you have to miss an event, online videos of the talks can be a great way to stay current with the ongoing conversation.

    Art, Design, and The Future of Privacy

    As I promised back in September, the videos of the event we co-hosted with DIS Magazine at Pioneer Works are available online. The DIS blog had a great writeup with summaries of the different panels, and you can find transcripts over at Open Transcripts. I had a great time participating, and came away with some great perspectives.

    Two of my favorite sessions were Sarah Ball talking about the unique perspective from her work as a prison librarian and our advisor Cory Doctorow’s barn-burning sendoff at the end.

    Art, Design, and The Future of Privacy - Ask a Prison Librarian about privacy, technology, and state control from Matthew Joseff on Vimeo.

    Art, Design, and The Future of Privacy - Where to from here? from Matthew Joseff on Vimeo.

    Video links:

  • Notes on the O'Reilly Design Conference

    Last week I went to the O'Reilly Design Conference and enjoyed learning about emerging UX trends. The conference was full of high-quality presentations on UX practice. Here are three of my favorite talks.

    The Many Minds of the Maker

    Knight-Mozilla Fellow Livia Labate shared examples of how designers can overcome barriers to learning code. Her experiences from the pragmatic (no, you don't need to learn Rails) to the philosophical (to be good at something, be bad at it first) are relevant to people beyond designers. Her willingness to find common ground and avoid stereotypical conflicts between designers and developers is important.

    Measuring Hard to Measure Things

    GitHub's Chrissie Brodigan shared user research that helped make GitHub more useful to new users. She included interesting examples of empathetic listening to understand what people wanted. I especially appreciated her insights on survey design and on A/B testing offers of free, private code repositories to attract people. Pro tip: consider phrases other than "free private" (e.g., “You’re eligible for a free private repository!”) in an email subject line to avoid spam filters. There were some nice lessons on transparency too, as people took to Twitter to complain about newbies being offered free stuff at the expense of long-time users during a limited-rollout experiment.

    Designing for Evil

    Brandon Harris described the benefits of a troll persona (or, more generally, an attacker persona) for understanding how users could subvert your software to harm others. This seems particularly relevant as a way for designers without a technical security background to consider how their interfaces are vulnerable to attackers. For example, Scout wrote about Ashley Madison's leaky interface and password recovery flow. With no technical knowledge, a designer could imagine someone testing both their partner's and their own email addresses to see what kind of messages are returned.

    Privacy as a Social Good

    The conference had a robust slate of 11 presentations over two days in the Design for Social Good track. Privacy played a role in several presentations, including my own talk on UX for Security. Two areas that felt particularly rich in other talks were helping people feel mastery over IoT environments and questioning algorithmic decision-making. It was nice to see designers talking seriously about the benefits of privacy, but more work is still needed to expand the conversation. Birds of a Feather groups and hallway conversations on social good felt more anchored to "social" as in social media rather than societal good.

    Favorite Quotes

    Image of a slide from the conference, which says: This is hands down the most time-consuming process and the least efficient thing that I do in my life. [...] People have wasted years of their lives doing this.
    Facebook’s Margaret Gould Stewart, pictured here, encouraged designers to improve enterprise software by offering this quote from a usability study participant. There are many painstaking experiences that waste people’s time, and they have no say in which system they have to use for their jobs.

    For the Reading List

    Two of my next UX reads will be Designing for Respect: UX Ethics for the Digital Age by David Hindman and Designing for Dasein by Thomas Wendt.

  • Users are people too: our talk at Shmoocon

    Last week Gus and I gave a talk at Shmoocon in DC. The focus was on helping technologists who don't have experience in human-centered design processes conduct basic research to improve their existing open-source tools.

    We covered six basic steps that we believe even small or volunteer teams can take:

    1. Agree on your target users
    2. Do an expert review of your UX to identify (& fix) low-hanging fruit
    3. Interview real users
    4. Build a model of your users and their needs
    5. Smooth the path for user feedback
    6. Iterate until you get it right

    Overall the talk was well received, with a few choice quotes making their way onto Twitter.

    We've gotten a few queries from folks interested in our slides. If you'd like to take a look, you can find a PDF of the deck (including speaker notes) here! We will also put them in our small-but-growing GitHub repository of resources.

  • Signing on to protect the internet

    This week we joined nearly 200 other organizations, companies, and individuals in signing an open letter to the world's governments calling for them to protect the integrity of online security, and to not undermine it by weakening, limiting, or backdooring encryption.

    Simply Secure has written about the importance of this issue before, both on our blog and elsewhere. We believe that all people should have access to strong privacy-preserving technologies, and that efforts to compromise encryption in the name of fighting terrorism will only backfire.

    If you agree with us, we encourage you to sign the letter as well.

  • Calling UX Designers & Usability Researchers

    We are pleased to share that the call for applications to the 2016 Supporting Usability and Design for Security (SUDS) Fellowship is now live. The fellowship, which is sponsored by the Open Technology Fund and co-administered by Simply Secure, is the next generation of the Secure Usability Fellowship Program (SUFP).

    Note: the deadline for applications has been extended to March 21st.

    SUDS is designed to pair fellows with host organizations that will offer mentorship and oversight, and Simply Secure is once again acting as one of the host organizations. Read on for more information about the fellowship, as well as for details on what types of projects we specifically are interested in hosting.

    Come design and research with us!

    Fellowship basics

    How long is the fellowship?

    The fellowship term can be 3-6 months (for seasonal fellows) or 12 months (for senior fellows).

    What is included?

    A modest stipend, plus a modest travel stipend for 12-month (senior) fellows. See the SUDS application site for more details.

    Who should apply?

    The fellowship is targeted at UX professionals. This means primarily UX designers and usability researchers, although other types of human-centered experts are welcome to apply.

    What types of projects does it support?

    It is designed to fund human-centered design and research work on security and privacy problems that fall under OTF's internet-freedom remit. This might include, for example, research into the challenges that users of privacy-preserving software currently face, or design explorations to help make such tools more useful and delightful.

    What types of projects does it not support?

    SUDS does not support software development directly: i.e., it generally won't fund people to write software code. It doesn't support policy research, training, or long-horizon (5+ year) speculative research activities. Finally, it is not general-purpose lab funding; it is designed to support an individual UX professional working full-time for the specified fellowship term.

    What Simply Secure is looking for

    The SUDS application site has the list of current hosts, and encourages you to suggest additional ones. Different hosts are interested in working on different types of projects. Simply Secure is open to applicants from a variety of UX-oriented backgrounds, and with different levels of experience working with security and/or privacy.

    Our organization is especially focused on supporting high-quality UX design and usability research for secure-communications software (e.g., encrypted chat), and on normalizing UX processes into the open-source development process. But, we're open to other types of projects, too.

    So if you're an experienced UX designer or researcher and are passionate about security and privacy, we want to hear from you. If your idea falls outside of our core focus area or you don't even know exactly what you want to work on, that's ok – we can chat about your interests and work with you to see if there's a project that might fit your background! Just drop us a line at

  • How to Sketch Storyboards in 10 Minutes: No Drawing Skills Needed

    Sketching storyboards – cartoon-like drawings showing how people use technology – is a way to get more high-quality ideas for product design. Sketches are useful for taking notes during a discussion and for getting a team on the same page. Fine art drawing is difficult for many, but anyone can master the basics of sketching storyboards – even without drawing skills. You don't need to be artistic; just follow these simple steps.

    This is a quick primer to get started. Thank you to Christina Wodtke and Laura Klein for inspiring this with their workshop at Lean Startup 2015. All you'll need is a pen, a few sheets of paper, and 10 minutes. If you're unaccustomed to drawing, thicker pens like Sharpie markers can be more expressive than fine-tipped pens like ballpoints.

    Sketching Emotions

    One key element that storyboards often convey is emotion. To learn how to quickly sketch a variety of sentiments, start by folding a piece of paper into thirds horizontally, as though folding a letter. Then unfold the paper and fold it in thirds vertically. You should end up with 9 boxes folded into the paper.

    Images of folded paper.
    Fold a piece of paper into thirds vertically (left). Then unfold and fold into thirds horizontally (right).

    Draw a circle in each of the nine squares, then add two dots to the middle of each circle. These dots are the eyes. Eyes go in the middle of the face, lower than you might think.

    Images of circles and circles with eyes.
    Draw a circle in each box (left). Two dots for eyes go in the middle of each circle (right).

    Now it's time to add mouths and eyebrows for expression. Draw a smile on each face in the top row, a straight line on each face in the middle row, and a frown on each face in the bottom row. Draw eyebrows tilted up on every face in the left column, no eyebrows in the middle column, and eyebrows pointed down on every face in the right column.

    Images of simple expressive faces.
    Simple mouths and eyebrows capture emotions.

    Congratulations, you've done it! You have 9 different facial expressions capturing a range of emotions.
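
    If it helps to think of the grid systematically: each face is one combination of a mouth (set by row) and an eyebrow style (set by column), so three of each yields nine expressions. Here's a purely illustrative sketch in Python (the labels are mine, not standard terms):

    ```python
    from itertools import product

    mouths = ["smile", "straight line", "frown"]        # top, middle, bottom rows
    eyebrows = ["tilted up", "none", "pointed down"]    # left, middle, right columns

    # Each (mouth, eyebrows) pair is one face in the 3x3 grid.
    faces = [f"{m} with eyebrows {b}" for m, b in product(mouths, eyebrows)]
    print(len(faces))  # 9
    ```

    The same combinatorial trick works for any pair of traits you want to vary, like posture by row and head tilt by column.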

    From Stick Figures to Star People and Box People

    Some of us learn to draw stick figures in grade school, and don't practice beyond the game of Hangman. But with a few minutes of practice with these new forms, you'll be up and running with more expressive figures.

    Star people are simple figures whose head, arms, and legs make a star shape. Take a new sheet of paper and draw a star for reference. Underneath it, practice drawing the "body" of the star, which will be the arms and legs of the figure. To do this, imagine tracing around the perimeter of a star. Don't worry about the fifth point/head of the star yet. After you've practiced a few times, draw 10 star people with heads. If you like, experiment with drawing different groups of figures, such as figures of different sizes together.

    Left column: practice drawing the perimeter of a star as the body of a figure. Right column: 10 star people figures and some groupings of figures.
    Left column: practice drawing the perimeter of a star as the body of a figure. Right column: 10 star people figures and some groupings of figures.

    Box people are an alternative to star people. I personally find star people easier, but box people's arms and legs are easier to position. Practice drawing a page of box people. At the end, you'll probably have a sense of which are easier for you to draw.

    Image of sketched box people.
    Practicing sketching box people.

    Don't worry if the sketches are messy and the lines aren't square. Just keep practicing.

    Putting It All Together

    Sketching a storyboard is a way to be specific about the problem you're trying to solve. Here are some examples of star people with faces that tell two simple stories about the emotions people feel when using technology.

    Image of example sketches.
    Example sketches of star people feeling emotions while using technology.

    You can see how these brief sketches convey the emotional experience of using technology. Storyboards like these are useful at the beginning of the product development cycle because they help get a team on the same page about what problem they are solving and how they deliver positive feelings. Storyboarding is also a good way to get unstuck if you're not sure where to begin or how to prioritize possible changes. Taking a few minutes to sketch a handful of storyboard panels can uncover new insights into how people use technology.

    Additional Resources

    Christina Wodtke's Library of Visual Thinking includes lots of resources for digging deeper into working visually. If you're unsure what to illustrate with these figures, read Laura Klein on Predictive Personas. Simple sketches like these are a nice way to illustrate predictive personas, far better and more expressive than generic stock photography. Happy sketching.

  • Some Of Our 2015 Favorites

    2015 was our first full year in operation, and we’ve come a long way! Looking back at the past twelve months, here are some resources that we’ve found to be particularly useful (or entertaining). Let us know your favorites on Twitter!

    Ame’s picks

    Thinking back on 2015, I’m really glad to be part of Simply Secure and for the opportunity to be an evangelist for design. I’m thankful for resources that make design easier.

    The Noun Project

    The Noun Project is a great resource for icons. They’re useful for more than interfaces – I’ve included them in presentations and posters too. With low pricing for individual icons or subscriptions, as well as options for free attributed use, they’re my number one resource for 2015. Easy to search, easy to download, or drag-and-drop directly from the desktop app into Keynote.

    Screenshot of the Noun Project's image download interface.
    Noun Project images are downloadable in .png or .svg formats.

    With subscription or purchase, the images are free to modify. Here are some of my favorite modifications from this year:

    Noun project sample icons
    Images left to right: Mobile UI wireframe, assemblage of IoT location tracking, synthwave keytar.


    Flickr

    Flickr has a nice way to search images licensed under Creative Commons, which I use to illustrate presentations and blog posts, like the Lessons from Architecture School series. (Scout is also a fan of Google’s advanced image search options; after doing a search, choose “Search Tools > Usage Rights”.)

    Screenshot of Flickr interface.
    Filtering Flickr search results by Creative Commons license.

    InVision Blog

    InVision makes collaboration software that’s great for teams working remotely on UX projects. The InVision Blog is a consistent source of high-quality, accessible design writing. Here are three recent posts I’ve found helpful:

    Scout’s picks

    This year was one of tremendous growth for us as an organization; we went from being a group of one to having other staff members and receiving official nonprofit status from the IRS. Here are some of the online resources I’ve found useful and entertaining, either in my own work or as a support to others.


    Nielsen Norman Group

    The Nielsen Norman Group has been a mainstay in the user-experience research space for almost twenty years. They have a great collection of free articles on all sorts of relevant topics, from foundational user-research pieces to gems like this recent comparison of UX design and working in restaurants. They are one of the first places I encourage new researchers to check out while exploring the field.

    Swift on Security

    Lest any of us in security take ourselves too seriously, it’s always good to have someone like Swift on Security in our Twitter feeds.

    Screenshot of @SwiftOnSecurity's Twitter page.
    Who knew Taylor Swift was into Oxford commas?

    Nonprofit pointers

    Setting up a new tech nonprofit in the US isn’t always easy, especially given the IRS’s recent take on open-source. My #1 bit of advice is to get good lawyers helping you, if your organization can afford it (shout out to the NEO Law Group in San Francisco), or a good law clinic if you can’t (for example, check out OTF’s new Legal Lab). But I also have found resources like Guidestar and Nolo to be tremendously helpful in understanding the landscape and requirements of organizations like ours.

  • Straight Talk: New Yorkers on Privacy

    We spent last week in New York doing field work on mobile messaging. Thank you to the Design Insights Group at Blue Ridge Labs for connecting us to such great participants. Many thanks also to the research participants themselves, who gave us permission to share their stories and images.

    NYC background images
    Apartment building in Brownsville (left); jewelry store + phone center in Harlem (right).

    Real New Yorkers with Real Stories

    We talked with twelve New Yorkers from across the city, meeting with people in libraries, offices, restaurants, and homes. We spent an hour listening to each participant talk about how they currently message, their privacy concerns and security practices, and their opinions on secure messaging. These conversations provided insights into how to design secure communication tools for a mass audience.

    NYC interview images
    Learning how real New Yorkers use mobile phones by interviewing them.

    Most participants were Android users, with one iPhone user and one person declining to say. All of them used multiple messaging apps on the same phone, with the native messaging app, WhatsApp, Kik, and Facebook Messenger being the most commonly used, along with direct messages in Twitter or Instagram. Many people have developed a hierarchy based on how well they know someone to determine how to message them: letting someone know your Instagram handle is less intimate than giving them your phone number.

    Emoji for Fun and Security

    Going out into the field is always surprising. One unexpected insight during this research was participants’ use of emoji as a privacy-preserving strategy. Emoji were an important part of messaging for many people, with apps like Bitmoji and Expresser used to add graphics across multiple platforms. One teenaged participant even used emoji in place of names in her contact list; the people with emoji were the most intimate or frequently messaged.

    NYC phone images
    Left to right: Bitmoji, Expresser, and a participant’s contacts list.

    Using emoji to hide the names of contacts can be an effective strategy if, like these participants, your main privacy concerns are related to other people getting physical access to your device. Shoulder surfing, people rifling through your phone, and screenshotting were some of the participants’ top worries. Concealing the name through emoji makes it more difficult to identify the contact at a glance.

    Stay tuned for more research findings and design directions from this work.

  • Maximizing Meaning in Empty States

    It can be hard to communicate about security-related features with users who aren't already security experts. From word choice to the level of detail included, it's easy to overwhelm people with information, leave them scared, or bore them to indifference.

    For many applications, one major challenge is finding the right place to communicate. Empty states – screens in your app where there is no actual content to display – are a great opportunity for this communication, in part because they frequently occur when the user is first starting out. Here's a sampling of empty states from a variety of platforms, and a piece on designing great empty states in general.

    Incognito Mode vs. Private Browsing: Scannability Wins

    As a mini case study of how empty states can be used to communicate about security, consider the initial pages for Incognito Mode in Chrome and Private Browsing in Firefox.

    Incognito mode empty state, Chrome version 47.0.2526.80
    Incognito mode empty state, Chrome version 47.0.2526.80

    Both take advantage of the empty state associated with a new browser window to communicate both the benefits and limitations of the feature. Both explain that their respective features prevent some data (like cookies and search history) from being recorded by the browser, but preserve other types of data (like downloaded files and bookmarks). Both screens also communicate that the features don't protect the user from surveillance by ISPs or employers providing internet service.

    Private Browsing empty state, Firefox 42.0
    Private Browsing empty state, Firefox 42.0

    In comparing the two, though, the text in Firefox's empty state stands out as being more scannable: it's easier to extract essential information from it without having to read it carefully. Specifically, the use of lists, bold subheadings, and icon bullets helps the reader 1) learn that Private Browsing doesn't keep everything private, and 2) extract details on what is and isn't retained while using Private Browsing.

    When you're setting out to communicate complicated concepts to your users, working to make your textual content more scannable is one quick and easy step you can take. Reflecting the meaning of the text in its structure – e.g., dividing benefits and limitations into two bulleted lists – reduces the burden on the user in trying to understand your material. And, this structure helps you resist the temptation to make your writing complex in an attempt to be precise. It's better to be simple in the main interface and provide a link to more information for users who want to learn more.

    For other ideas on how to make your interface text more scannable, check out The Nielsen Norman Group's handy list of tips and illustrative examples.

    There's Always Room for Improvement

    Although the Firefox Private Browsing empty state wins out over Chrome's Incognito Mode as being more scannable, its design could still be refined to help new users understand the feature better.

    For example, the green checkmarks under "Not Saved" convey confidence, but the caution symbols (which are affectionately referred to as "party hats" on the Simply Secure Slack channel) can be ambiguous. If the point is to help users understand that saved downloads and bookmarks can be problematic, wouldn't something along the lines of a red X be clearer?

    This brings up an interesting conflict between the positive and negative words and their corresponding icons: the negative heading ("Not Saved") has positive icons (green checks) and vice versa. Perhaps alternative symbols would help, like smiling and frowning faces? Perhaps a term other than "saved" would be helpful: "discarded", or "forgotten" and "remembered"? Alternative terminology could also prevent "saved" from being understood to mean "kept safe or protected", an interpretation that non-native English speakers might make.

    Similarly, the "Tracking Protection" box in Firefox poses potential challenges to unfamiliar users. The rectangle labeled "ON" looks like a button or toggle from a mobile interface, but is in fact not actionable. The tutorial helps the user understand what the feature does (it blocks "parts of the page that may track your browsing activity") and goes out of its way to explain how to turn the feature off, but doesn't offer users insight into how parts of a page might track them, what kinds of content might be blocked, or the implicit tradeoffs they might be making when choosing whether to use the feature (i.e., they might not be tracked, but parts of webpages might stop working).

    Opinions are Opinions; Data is Data

    All this illustrates that there's always room for improvement in any user experience. And, it also shows that there can be a lot of hard decisions to make when trying to communicate with users: two people could probably argue endlessly about whether "saved" and "not saved" is better or worse than "remembered" and "forgotten".

    That's why it's important for a software team to not just rely on their own intuition and inclinations in making decisions about user experiences. Whenever possible, products benefit when teams gather data from real users – whether it's a broad quantitative sampling or a small focus group in all its qualitative glory. Quick, scrappy, and informal studies can offer just as much food for thought as large, well-organized ones – and are easier for small teams to perform on the fly.

    If you're interested in improving how your project communicates with users about its security features, or want help structuring a study to get insight from real users, get in touch. And, please spread the word to your favorite open-source projects and encourage them to apply for free help!

  • Donate to Simply Secure

    Simply Secure is a non-profit organization, and we rely on donations to be successful in our work of getting privacy-preserving software in the hands of more people.

    Image of wrapped

    To celebrate our official recognition by the IRS as a 501(c)(3) organization – which means donations are tax-deductible in the US – we have added a donations page to our website. As you are contemplating your charitable giving at the end of the year, please keep us in mind. Even small amounts will help us demonstrate that there is broad support for improving the user experience of secure software, so we welcome your contribution, whether it is $1, $10, or $1,000.

    And, don't forget to drop us a line at and let us know what open-source software projects you would like to see get support polishing their user experience. We will reach out to the teams you suggest and see if we can lend them free help, either through our recently-announced collaboration with the Open Technology Fund or as part of another initiative.

  • Apply now for design and usability help

    We are pleased to announce a new collaboration with the Open Technology Fund as part of their Usability Lab project. This exciting initiative will allow open-source software projects to apply for free assistance with user-experience (UX) design as well as usability research. To our knowledge, this is the first program to offer support of this kind.

    Open Technology Fund + Simply Secure

    Who should apply?

    The Usability Lab is focused on projects within OTF’s remit – i.e., software tools and initiatives that support free expression and information exchange online. If you’re working on a tool that provides encrypted communication, secure file exchange, censorship circumvention, or related features, you should definitely apply. If you're not sure whether your project fits within this framework, please apply and we will work with you to see if it can be supported under the program.

    What kind of support can projects receive?

    Eligible software projects will receive free support from design and/or research professionals to evaluate and improve the quality of their project’s UX. Simply Secure will work with your project to identify the type of support that will be most useful, and scope a well-defined set of activities that can be accomplished over the period of a few weeks.

    Potential activities include:

    • Expert reviews to identify opportunities for improving the UX
    • Usability studies to evaluate a newly-proposed feature
    • Design sprints to harmonize the visual look-and-feel between an app and its website
    • Program evaluations to examine a team’s process for getting feedback from its users
    • Strategy research to help a team identify and understand its user population

    In addition to matching software teams with skilled designers and researchers, Simply Secure will collaborate with engineers and UX professionals to ensure good communication over the course of the project. Simply Secure will also work with the software team after the design or research phase is complete, to make sure they are successful in incorporating the findings into their next development cycle. Finally, Simply Secure will work with software teams to transparently share the results of the collaboration, bringing open-source values to UX work.

    Can UX professionals get involved?

    Absolutely! If you are a UX or visual designer or a usability researcher interested in doing applied work on software in this space, please contact us at with a link to your portfolio and/or curriculum vitae. This is a pilot program that we hope will help us connect an extensive network of designers and researchers working on privacy and internet-freedom tools, so we want to hear from you!

    How do I learn more?

    Please apply for support here (or email us if you are uncomfortable using Google forms) as part of OTF’s Usability Lab, and contact or with questions!

  • Encryption is not for terrorists

    Recent attacks by Daesh in Turkey, Egypt, Lebanon, and Paris have fanned the flames of an ongoing debate about software that is resistant to surveillance. It seems that some participants in that debate are trying to use these attacks as an excuse to drum up fear around end-to-end encryption. They argue that these events tell us that the general citizenry shouldn’t have access to strong privacy-preserving tools.

    A lot of people are saying a lot of smart things on the subject, but I want to briefly outline a couple ways in which this call for limiting encryption is problematic.

    This instance

    There appears to be no actual evidence that encryption software was used to plan recent attacks, much less that such software thwarted intelligence agents who would otherwise have been able to prevent the tragedies. Indeed, Le Monde reports that the cell phone found in a trash can near the Bataclan in Paris contained “a detailed map of the concert hall in addition to an SMS message saying, according to information gathered by Le Monde, ‘Let’s go and get this started.’” [1] Not an encrypted chat program, or an encrypted email – an old-fashioned, easily-intercepted text message.

    This lack of evidence did not prevent “European officials” from asserting that encryption tools had a role in the Parisian attacks – assertions that were published and silently removed in an article by the New York Times.

    We all have an interest in seeing terrorists’ attacks prevented, and we can all appreciate that finding and monitoring the activities of malicious actors is hard work. It’s also understandable if officials are trying to keep details of the investigation (like what communication tools the terrorists used) quiet. But fear-mongering about encryption – whether it’s truly disingenuous or simply unsupported – doesn’t make the public feel better when attacks occur, nor does it mollify people’s concerns about the massive surveillance systems that have been put in place to thwart such plots.

    Indeed, false claims of encryption hampering intelligence efforts only highlight the ineffectiveness of mass surveillance. The cynics among us must wonder, “Why are they complaining about encryption, when they can’t even thwart attacks that are planned in the clear?”

    The bigger picture

    Even if there is evidence that the terrorists who planned these attacks were using high-quality encryption tools (and not just ones that are likely insecure in practice), that doesn’t mean that law-abiding citizens should be prevented from doing so. There are many imperfect analogies we can use to argue this point: terrorists use fast cars, paper shredders, cell phones, and (for countries with minimal gun-control laws) terrorists use firearms. When push comes to shove, the fact that a technology with substantial lawful use is sometimes used by malicious people – and even when this use of technology makes it more difficult for law enforcement to stop the “bad guys” – does not justify efforts to ban it.

    This is especially true when it comes to things like encryption. We live in a world where the internet is integrated in every intimate corner of our lives – from our love letters to our financial and health records – and numerous criminal factions stand to profit from gathering our personal data. The average person’s integrity and even safety depends on keeping their private information private. Some policy-makers would have us believe that it’s possible to build a “backdoor” into encryption so law enforcement can peek into our private lives when they have probable cause, but the technological reality doesn’t line up. Backdoors can’t reliably be marked “good guys only”; when one is introduced, it will inevitably be used by malicious actors as well. Encryption tools that only work some of the time aren’t proper encryption tools at all. All sorts of organizations and people – from Google and Facebook to the EFF and The Tor Project, from the CISO of Yahoo to the co-inventor of the RSA algorithm – agree.

    The future of this debate

    Simply Secure believes that all people deserve access to privacy-preserving communication tools, including end-to-end encryption. We are working to support software developers in their efforts to make these tools more user-friendly, and to help tool-makers express the value of their software to non-experts.

    The debate on who should have access to these tools will only intensify as they become more popular. If the pundits arguing in favor of backdoors – or, more absurdly, in favor of outright bans on certain encryption algorithms – have their way, dedicated terrorists won’t be thwarted. They’ll still find ways to communicate out of the eyes of law enforcement. But law-abiding citizens will have lost the ability to protect their data in the process.

    [1] My own translation; original text: "un plan détaillé de la salle de concert ainsi qu’un message SMS disant, selon des informations du Monde, « On est parti on commence »."

  • Why Open-Source Projects Need Style Guides

    Style guides specify the look and feel of how a company or team communicates with the outside world. Collections of website visual standards show how organizations maintain a consistent online presence. Brand guidelines typically focus on how logos are treated, while style guides are more extensive – including not only look and feel, but also interactive behavior, such as the alerts and form templates in the U.S. Web Design Standards.

    Style guides empower groups, such as teams developing open-source software, to communicate with their users in one consistent voice. Visual design elements, such as fonts and colors, help the world understand who you are. For example, Starbucks Coffee has a particular green, which they use for a variety of purposes:

    Image of the Starbucks Green
    Starbucks Green: Pantone 3425 C / Hex #00704A / RGB[0,112,74] / CMYK[100,0,78,42].
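
    As a quick sanity check, the hex and RGB notations in the caption above describe the same color – converting the hex code to its components reproduces the listed RGB values. A minimal illustration in Python:

    ```python
    # Convert the Starbucks Green hex code to its RGB components.
    hex_code = "00704A"
    r, g, b = (int(hex_code[i:i + 2], 16) for i in (0, 2, 4))
    print(r, g, b)  # 0 112 74 – matches the RGB values listed above
    ```

    Keeping one canonical value like this in a style guide, and deriving the other notations from it, is an easy way to avoid the subtle mismatches that creep in when colors are re-entered by hand.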

    Because they are so consistent in using that green – and only that color green – people in many countries can make sense of this ad.

    Image: Photo of a Starbucks advertisement
    Photo of a Starbucks advertisement seen in San Francisco, October 2015.

    Building Users' Confidence

    As my recent blog post on Nostalgia, Trust, and Brand Guidelines argues, end users can't assess the quality of underlying cryptography, and will instead evaluate how robust or trustworthy software is based on its look and feel. Style guides can help a distributed team of volunteers come together to make polished software that inspires confidence in end users and drives adoption of secure communication technologies.

    Beyond "Darth Hoodie"

    When Simply Secure worked with Martin Wright of mySociety to create our style guide, we discussed using bright, punchy colors to convey a friendly, approachable tone. The Simply Secure colors and rounded fonts communicate a welcoming feeling, very different than the traditional shield, lock, and key icons that try to say "Secure!" but end up saying "Keep Out".

    Image of a sample from Simply Secure's style guide.
    Part of Simply Secure's style guide.

    These choices were intended to make security accessible and desirable to a mass audience, rather than the dark, menacing images familiar to people who read articles about security in the popular press. Stock photography of hackers tends to be comical, with balaclavas and hoodies as standard attire, as in Horrible Infosec Stock Art (thanks to @bascule in our Slack channel for the pointer).

    Inviting Designers to Participate

    Just as a cliché stock image tells InfoSec professionals that they aren't the audience for an article, designers judge whether a team is a good fit for them based on appearances. UX professionals will look at a piece of software and evaluate how much its creators value their users, based on how professional and consistent the software's interface, website, and supporting materials look. Style guides are a tangible asset that all teams should create to help designers understand a team's values and commitment to UX, and to creating software for average people.

    Imagining Different Use Cases

    Putting together a style guide is a good exercise on its own, because it helps clarify the audience and describe the situations where end users will encounter the materials. For instance, at Simply Secure we knew we wanted community events, so we made sure to have event signage that included the same colorful Fibonacci sequence that's on our website and business cards.

    Image: Simply Secure event sign template.
    Simply Secure's template for event signs.

    Style guides should be living documents, updated over time, so starting with something quick and scrappy – but open and accessible – is a great choice for an open-source project. We're releasing ours under a CC-BY 4.0 license on our website and our GitHub repo, so you can check it out. And, if you want help developing a style guide for your open-source secure-communications project, let us know!

  • Don't let security dogma steer you wrong

    My recent post describing some of the reasons we chose Slack over IRC for our public forum is part of a larger conversation people are having around the promise and concerns of group-communication tools. A quick search for "Slack vs. IRC" yields a wealth of opinions on the subject; our post generated some interesting discussion (and a couple of angry rants on Twitter).

    I focused my discussion on the usability advantages of Slack – advantages that I believe encourage designers to join our public forum in a way that they would not if it were hosted on IRC. Simply Secure is about bridging the gap between the technical and design/research communities to get more human-centered thinkers working on open-source privacy-preserving tools. We can't do that if we continue to tell designers that they have to communicate using tools they hate, and the OSS community's expectation that they do so is one reason open-source tools are still so painful to use.

    Buried at the end of the post was another point that deserves more attention: "But for the meantime, this abstract threat does not outweigh the benefits Slack offers, especially when one ponders how often both Slack and its open-source alternatives realistically undergo regular security reviews by skilled engineers."

    Text image: security reality, not security dogma

    It's critical to observe that we can't assume that open-source tools are always – by virtue of them being open source alone – the most secure in practice.

    Open source alone is not enough

    "What?!", you might be yelling at your screen. After all, we all know that opening source code to the light of day allows the public to hold developers accountable, and prevent both unintentional bugs and all-too-intentional back doors.

    But, you have to ask yourself: how many security audits have you personally performed of the open-source tools you use? (What were the results? Did you do a follow-up a year later?) How many IRC clients have bug bounties? How many of the open-source tools we depend on have anyone with security expertise reviewing their code – much less neutral third parties who aren't part of the team that wrote it?

    The answers, of course, are not pretty. Even projects new and old with an explicit security focus suffer serious bugs that would arguably have been caught by a thorough security review. We're still exploring the world of Slack alternatives, but a recent review listed "empty test suite" as a problem with three of the five products it considered. If a team doesn't have the resources to build automated tests into their development cycle, what confidence can we have that they are doing their due diligence with respect to security?

    Dealing with resourcing realities

    Big closed-source organizations like Slack clearly have a leg up in this domain; a quick search on LinkedIn reveals at least a handful of Slack engineers whose primary focus is security. This is slowly changing in the open-source world; efforts like the Core Infrastructure Initiative and OTF's Red Team Lab (currently accepting applications; contact with questions) provide support to open-source projects seeking to evaluate and improve their security posture.

    It's not enough to shake our collective, outraged fist and say that open-source projects would fulfill their maximally-secure destiny if only they had more resources. And I agree, of course, that there are considerations beyond code-level vulnerabilities that should give any user pause when considering a tool like Slack. Security is not a binary property, and a cloud-based solution hosted by a third party is too risky in the context of many organizations' threat models.

    Open-source is good; avoiding dogma is better

    So if you're an organization that has the technical resources to host your own solution, and you find one that is truly accessible to your users (or your users have the time and patience to work with the developers to improve it), that's great! If you do use an open-source tool, please contribute back to the project so its developers can continue in their good work. This is the ideal outcome, and the one that will lead us to the best privacy and security posture over time.

    But, in the meantime – and no matter your threat model – please take an honest look at the pros and cons of any solution you consider, and think critically about whether a development team practices the values that they preach. Open-source solutions are great, but only if they really meet your needs, or can be adapted to do so in a reasonable amount of time. Just because something is open-source doesn't mean that it necessarily has fewer security vulnerabilities than a closed-source solution. Asserting otherwise – especially to organizations with limited technical capacity – is irresponsible.

    Don't let security dogma get in the way of your assessment of security reality.

    Thanks to @isa for a recent conversation on the topic that inspired this post, although please don’t blame her for my conclusions.

  • Mind The Gap Between Mobile Apps

    Users of the Facebook iPhone app were recently surprised by a new feature offering to “Add the last link you copied?” into a status update. Many people did not expect to see a complete URL that they had put onto the clipboard from another app, without explicitly involving Facebook. Christian Frichot discusses iOS security concerns with this feature, but I also consider this to be a UX design failure.

    Screenshots of new Facebook URL feature
    Copying a link in Safari (left) makes it appear in Facebook (right).

    Consider the following example scenario: you get a reminder to share the name of a counseling service with a friend who is having a difficult time. Several minutes after copying Integral Counseling’s website URL from Safari into an email and sending it to your friend, you open Facebook and see the offer to include the link in a status update. Clicking the X in the dialog box presumably prevents Facebook from including the link in your status update, but there’s no way to keep the link from the screen or – more broadly – to limit Facebook’s access to your clipboard in the first place.

    Facebook seems to be reading everything on the clipboard to see if there happens to be a link to a website. People copying any information – even passwords or personal notes to themselves, even when using encrypted chat or email – feel like they have had their privacy compromised.

    Good UX Isn't Creepy

    Individual iOS apps haven’t always done a good job of exposing to the user how they access OS-level features like the clipboard and address book. Seamless access to Contacts creates all sorts of awkward situations, such as Tinder dates showing up in LinkedIn’s “People you might know”.

    The Privacy menu of the device’s Settings lets users specify which apps have access to Contacts, Photos, and other OS features. This control panel, traditional copy-pasting, and the interface for explicit in-app sharing are the ways users understand how content gets into Facebook, a broadcast medium. Automatically pulling in links from the clipboard breaks that mental model.

    Screenshots of the iOS Privacy Settings
    Left: the Privacy menu of Settings controls which apps access OS-level functionality like Photos and Contacts. Center: within Safari and other apps that use the iPhone’s native sharing menu, Facebook shows up as a destination. Right: the iOS Facebook app automatically pulls in links from the clipboard.

    There is no setting for granting access to the clipboard; it seems that iOS provides unlimited access to all apps. There are cases where this functionality makes sense and truly helps the user: for example, reading-list manager Pocket also inspects the clipboard for content and asks to add a copied URL to your personal list. Perhaps similar functionality makes Facebook feel creepy because Pocket is about keeping your data for you, while Facebook is about sharing your data with the world.

    Screenshots from using Pocket
    Left: Pocket asks to add a URL to a reading list. Center: when reading a website within Pocket, there’s an option to share that link. Right: Pocket has options to share with different audiences, including via Facebook, but the default is a personal reading list.

    These apps’ differing value propositions are apparent from the language on their respective home pages. Facebook invites you to “Connect with friends and the world around you.” In contrast, Pocket is about keeping things to yourself, telling users “When you find something you want to view later, put it in Pocket.” The private nature of the app is reinforced by a mental model that Pocket is a place on your phone where you can read articles and watch videos even without a network connection.

    UX designers have worked hard to create seamless user experiences where things seem to happen as if by magic. But as people’s behavior online and off is increasingly tracked, seamlessness can now easily evolve from being unexpectedly delightful to downright creepy. Facebook has a business imperative to get more content into the platform, and this probably resulted in a drive to create a shortcut around the iOS sharing menu. I encourage individual designers facing similar imperatives to push back and protect their users by advocating for good UX – in other words, UX that isn’t creepy.

    Alternative: Elegant Seams

    As more people become privacy aware, there’s a professional challenge for UX designers to understand the technical on-ramps and off-ramps of how data flows into and out of their apps. Elegant transitions between apps can help UX designers move from a siloed view of their product to a transitional view, designing for users to move throughout an ecosystem of apps and operating systems.

    Consider how the iPhone UX evolved; it launched without a clipboard or the ability to multi-task between apps. The UX for switching between apps is still clunky, relying on a double-click to see what apps are open. And even though people now often move between apps or close apps they don’t want running in the background, the interface for multi-tasking hasn’t kept pace. Double-clicking to scroll through apps is one kind of seam, but not an elegant one.

    Screenshot of iOS app switching
    Double-clicking the iPhone’s button shows open apps.

    Surfacing the seams between apps – such as permission management and data sharing (both implicit and explicit) – is important for empowering users to protect their privacy. Rather than falling into a paternalistic trap of “making everything just work” and not “forcing the user to think” – which robs users of their agency – designers must create elegant interfaces that liberate users to manage how their data flows between apps.

    One opportunity for managing seams and meeting people’s privacy goals is to use best practices from service design for helping people move between multiple experiences and platforms.

    In an increasingly privacy-conscious world, “seamless” doesn’t always work. Sometimes we need elegantly visible seams.

  • Underexposed: Building a Movement for Secure UX

    Last week Simply Secure hosted a pilot workshop called Underexposed. A small group came together in San Francisco to

    • Share successes and challenges in secure user experiences
    • Describe processes and wishes for successful collaboration between designers, developers, and security professionals
    • Prioritize the most important topics and audiences for outreach.

    We also held participant-proposed breakout sessions on topics ranging from “Making a Living” to “Privacy-Preserving User Research Metrics.”

    You can download a PDF of photos capturing the post-it notes from the sessions.

    Top 3 Surprises

    We're hard at work synthesizing the discussion, but Underexposed is a community effort. Please reach out if you're interested in being a reviewer of the output. Stay tuned for more outcomes, but here are three of the things from the notes that stood out to me.

    Timing is Everything

    One of our goals was to find specific ways to facilitate collaboration between designers and other teammates. Part of that effort involved identifying communication gaps and misconceptions about design. Timing came up as a common point of misunderstanding. Budgeting sufficient time for design in a project plan is one way to ensure success. We also heard a clear request to involve design earlier in the process, rather than tacking it on at the end after the technical challenges are met.

    Image: Design takes time. Design is not window dressing at the end.

    We heard an unmet need for project and product management expertise, and Simply Secure is working to build those skills in the secure communications community.

    Lights, Camera, Action!

    The group identified some surprising potential audiences for Simply Secure's work normalizing Human-Centered Design for security. Journalists were consistently mentioned as a priority, but groups like celebrities and parents/grandparents unexpectedly surfaced as well.

    Image: Audience journalists; Audience: celebrities; Audience: Parents and Grandparents

    Telling human stories about the lives of relatable people is an important part of communicating the value of secure communications.

    Blackout Day

    The community is looking for ways to build awareness of global security challenges. There's a need for visual design that works across cultures, in addition to localizing the text in interfaces. Building empathy for the digital threats that people face in other geographies is challenging. One creative suggestion, perhaps inspired by the 2012 protests against proposed internet legislation in the U.S., is a "Blackout Day" that simulates the conditions of internet restrictions to help more people appreciate an open internet.

    Black Out Day. Empathy for the experience of other countries.

    Thanks to our participants' lively discussion, Underexposed was a success. We're working on ways to involve more people in the future.

  • When Closed-Source Software Wins The Day

    We prefer to use open-source software as a matter of principle. We believe that putting software code in the open is the best way for the public to build trust in it.

    You might find it curious, then, that we choose to foster communication and community through a tool like Slack, which is closed-source. (Note: you can request to join our Slack channel by emailing us.) Many software teams that build privacy-preserving tools host similar spaces dedicated to communication with volunteers and users. Their spaces are usually built on IRC, though, which has multiple open-source options for both the client and the server. Why didn’t we go a similar route?

    Our decision to go with Slack over IRC mirrors the decisions that people the world over make every day. If we take a minute to examine our reasoning, we can find some valuable lessons for open-source developers.

    Instant access

    One of the biggest advantages we found Slack has over IRC is how quickly it works in a variety of environments. You can get up and running on the web in less than a minute, and expand your experience to include a native client on your desktop or mobile device with a quick download. You don’t have to enter a channel name or configure the software to point at a particular server: you click on an invitation in your email, and you can get started after just one or two steps.

    Slack has been pretty close to instantly accessible from an administrative point of view, too. We haven’t had to set up a server, do extensive configuration, or offer any kind of how-to information to our users other than “send us an email and we’ll invite you”. Given the diverse community of people we are trying to reach – including designers, researchers, and program managers – we expect we would have to offer a lot of support to get even the more adventuresome among them to try IRC.

    Stateful, active participation

    IRC grew up in the age of desktops, where you only participated in a real-time online conversation when you were seated at the keyboard. Some IRC clients may have evolved beyond this model, but vestiges of it remain. Today’s smartphone-wielding users operate in a different world, where they might be on their phone at one moment, a computer the next, and a tablet in a few hours. Slack tries to make this experience seamless. It remembers where in the conversation stream you left off, and helps you find your place across different devices. It also lets you get notifications when someone mentions you, so you can tune in even when you’re “offline”.


    Approachable design

    Beyond ease of first use and aspects of the software’s functionality, Slack is just so gosh-darn friendly looking. For many people, staring at a screen full of monospace text is torturous. In-line image integration, textual hierarchies interspersed with whitespace, and tastefully colorful menus all make Slack easier on the eyes. There’s a welcoming bot that helps you set up your profile, and pithy loading messages help you adopt a lighthearted mood when you join each day. Finally, Slack offers much of its functionality up front through graphical interfaces, rather than requiring the user to learn special textual incantations. Although Slack is intended to help people communicate through text, its attention to these other details is what makes the experience more enjoyable than current IRC clients for most people.

    Screenshots of an IRC client and of our Slack channel.
    The IRC and Slack experiences are very different.

    The tradeoffs

    Now that I’ve gushed about what Slack offers, I want to call out some of its downsides. Because it’s owned and operated by a third party, we don’t have ultimate control over our Slack community. We believe that the company has reasonable policies in place that prevent their employees from going in and mucking about, but there’s always a chance that a bad apple could get in and do damage of some kind.

    This lack of control also manifests in the fact that Slack limits the number of archived messages that are available on unpaid accounts. In other words: if we want to access all of our archives, we need to pay them money – and given their rates for a community our size, we can’t afford to. The silver lining here is that they offer free upgrades to documented nonprofit organizations, so when our application for 501(c)3 status is approved, we should be able to gain access to those archives again. (We have also been downloading the archives on a semi-regular basis for our own records, and are glad that Slack gives administrators the facility to perform such downloads.)

    Finally, being closed-source means that Slack may have all sorts of crazy vulnerabilities that could allow an attacker to compromise our community in some way, and only Slack employees would know. For some communities this alone is enough to make Slack an impossible option, which we understand and support. So if an open-source solution comes along that offers more of Slack’s benefits than current IRC options, we will definitely reconsider our choice (feel free to contact us if you know of one). But for the meantime, this abstract threat does not outweigh the benefits Slack offers, especially when one ponders how often both Slack and its open-source alternatives realistically undergo regular security reviews by skilled engineers.

    This is one case where open-source options are losing the battle, at least for now.

    Screenshot of the WeeChat IRC client, Fundación Wikimedia, Inc., published under a CC BY-SA 3.0 license.

  • Catching Issues in Evolving Interfaces

    You may remember this summer’s media frenzy surrounding adultery-matchmaking site Ashley Madison. In brief, the company had its password database hacked, stolen, and posted online with great fanfare. Amidst the stories focusing on noteworthy individuals and the demographics of the membership as a whole, some people have been investigating other aspects of the site’s operations, from their “Affair Guarantee” package to their practice of charging to delete a user’s account from their servers.

    "Leaky" Interfaces

    One researcher uncovered a quirk of the site’s password-recovery form that actually allows someone to check whether an email address is associated with an account. In security we often refer to such a flaw as “leaking” sensitive information.

    Usually leaks that occur with sign-in or password-recovery forms involve the text of the interface – e.g., a sign-in form that responds “The password entered does not match the one on file for this email address” as opposed to the broader “The email address and/or password entered do not match our records.”

    The Ashley Madison password-recovery form actually uses the same text whether or not the email address entered is in their database. However, in one case the text-input field and the button stay present on the screen, and in the other case they disappear.

    Screenshots from Ashley Madison's password-recovery form.
    Screenshots of Ashley Madison’s password-recovery form when the email address is not (left) and is (right) part of their database.
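
    The fix for this class of leak is to make the success and failure paths indistinguishable in every user-visible way. As a hypothetical sketch in Python (not Ashley Madison's actual code; the names and response structure here are illustrative), a non-leaking recovery handler might look like this:

    ```python
    # Hypothetical sketch of a password-recovery handler that avoids
    # leaking whether an email address is registered: the message AND
    # the interface state are identical in both cases.

    REGISTERED = {"alice@example.com"}  # stand-in for the account database

    def send_reset_email(email: str) -> None:
        """Placeholder for queuing a password-reset message."""
        pass

    def recover_password(email: str) -> dict:
        if email in REGISTERED:
            # Side effect visible only to the real account holder.
            send_reset_email(email)
        # Same response either way: same text, same form elements.
        return {
            "message": ("If that address is in our system, we've sent "
                        "password-reset instructions to it."),
            "show_form": True,
        }
    ```

    Because the returned structure is identical for registered and unregistered addresses, an attacker probing the form learns nothing from the interface; only the timing of the mail-sending side effect would remain to be equalized.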

    Supporting graceful product evolution

    Given the site’s focus on discretion – and the carefully-worded textual content of the form – it’s unlikely that someone sat down and intentionally designed it to behave this way. It’s more likely that this interaction snuck in as parts of the site’s architecture were reworked over time. Since the folks working on the site likely don’t reset their passwords on a regular basis (much less compare the results when the email address is and is not in the database), it's easy to see how the team missed the change once it was introduced.

    This is an example of why it's important to think of designing not just the product, but also processes to support the product's graceful evolution over time.

    Here are some ideas to help catch interface problems that sneak in:

    • Create UX reviewers. Just as teams conduct code reviews before a set of code changes are committed, it can be useful to have UX reviews as well. These can be performed by a designer – advisable when an interface is being implemented against a set of mockups that the designer created – or by another engineer when the change is small. The goal is to make sure that at least one other person takes a solid, critical look at the user-facing implications of the changes, just as the code implications are examined.
    • Create an adversary persona. Many teams craft user personas to help them design interaction patterns that will meet the needs of their diverse user population. Why not also create one or more personas representing attackers? (Thanks to @gretared for her take on @jorm's idea of creating a troll persona – "because you can't design for good without understanding the evil"). This adversary persona can help UX reviewers identify ways that the interface might inadvertently leak information.
    • Regularly audit against known best practices. Armed with your attacker persona and other approaches for threat modeling, try to identify a set of principles or clear protection goals that you can then use to evaluate the user experience on a regular basis. For example, many websites require users to reauthenticate before accessing sensitive parts of their account; this is a best practice that protects against both accidental and some intentional forms of data compromise. Keep the list of best practices as short as you can, to make it feasible to schedule a regular review that assures your interface hasn't evolved too far from its original privacy-driven design.
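
    The reauthentication best practice can be as simple as comparing a timestamp against a policy window. A minimal sketch, assuming an illustrative 15-minute window rather than any particular site's implementation:

    ```python
    import time

    # Assumed policy value for illustration: require fresh credentials
    # if the last sign-in was more than 15 minutes ago.
    REAUTH_WINDOW_SECONDS = 15 * 60

    def needs_reauth(last_authenticated_at: float, now: float = None) -> bool:
        """True if the user must re-enter credentials before a
        sensitive action, such as changing an account's email address."""
        if now is None:
            now = time.time()
        return (now - last_authenticated_at) > REAUTH_WINDOW_SECONDS
    ```

    A check like this is cheap to apply consistently, which makes it a good candidate for the short list of practices that a regular audit verifies.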

      Screenshots of Ashley Madison password-recovery forms captured by Troy Hunt and used on his blog, which is published under a CC BY-SA 3.0 license.

  • Ninjas + Hemingway: Writing for User Interfaces

    The writing in your user interface is an opportunity to encourage people to use your product. Writing in interfaces includes everything from the words in an in-app setup tutorial to a website's navigation menu. Because there is both technical complexity and high stakes for user failure, careful language is key to getting mass audiences to adopt secure communication tools.

    Good writing for an interface, also called user-experience (UX) copywriting, does two things:

    1. it explains how things work, and
    2. it creates an emotional reaction in the user.

    Explanations in language people understand

    Explaining how things work isn't straightforward. Many secure communication apps struggle with conveying too much technical information to an overwhelmed user who, for example, doesn't care about the difference between DSA and RSA keys. Careful UX copywriting avoids this pitfall.

    Even a simple explanation can create a negative emotional reaction when it sounds like jargon to a general audience. Jargon is off-putting, whether it's Silicon Valley buzzwords or bureaucratic governmentese, because it makes the service seem untrustworthy. To stamp out jargon, MailChimp's Content Style Guide on GitHub includes a list of words to avoid. They've flagged a bunch of corporate-speak words as inappropriate. MailChimp is a sales tool for sending customized email and newsletters, so "Incentivizing your ninjas to crush it!" could be a reasonable explanation of their value to customers who are marketing professionals, but their writing is careful to use vocabulary that is accessible to a broader audience.

    "Hash"? "Fingerprint"? Tweet us @simplysecureorg with your nominations of technical security terms to avoid.

    Reducing the need for support

    With good UX copywriting directly in the interface, the need for training and other kinds of support decreases. Consider this excerpt from 18F's writing guide for U.S. Government websites:

    "FAQs are bad. We don't like them. If you write content by starting with user needs, you won't need to use FAQs. … If you're thinking about adding FAQs, consider reviewing the content on your website that is generating questions and think about how you can change the content or design to answer the question — or provide an answer in context to prevent people from visiting an additional webpage to find the answer." – 18F Content Guide

    Similarly, the Gov.UK writing guide does a good job of explaining how to make complex information accessible to a broad audience. Just as a piece of consumer software must be understandable to many types of users to be successful, UK government publications must reach and be useful to a broad spectrum of readers, from English-language learners to populations with low levels of literacy to citizens with a range of accessibility challenges.

    With good UX copywriting directly in the interface, the need for training and other kinds of support decreases.

    Resources for UX copywriting

    In addition to complete style guides, here are some practical tips:

    My favorite suggestion (which makes both lists) is to make your writing shorter. To trim excess words, try the Hemingway App. It was named after the writer whose work was famous for its short, clear sentences.

    Image: Screenshot from Hemingway App, showing dynamic highlighting as you type.

  • Victims of Success: Dealing With Divergent Feature Requests

    Last fall I attended a workshop with a group of open-source developers working on security tools. In talking about the challenges they faced in making their tools more usable, I blithely said something along the lines of "It's important to always listen to your users, and take your cues from them."

    "But how," replied one developer, "do you know what feature requests are the right ones?"

    My first response was confusion. I was talking about user research, so a question about feature requests seemed like a bit of a non-sequitur – especially since lay users are generally more comfortable sharing problems than they are sharing suggestions for improvement (e.g., “configuring the app is hard!” rather than “please reduce the number of steps in the sign-up flow"). The developer then shared that, although his team didn't have a lot of opportunities to interact directly with their users, they did have a form for submitting feature requests. Moreover, their more tech-savvy users were using the form a lot, to the point of inundating the team. He wanted to know how to prioritize among this deluge.

    Listen for the intent underneath the request

    While it is important to listen to your users and learn from their message, it's absolutely critical to hear the intent behind what they say. To use a contrived example, imagine that a user says "You should put a purple pony on your main page. Ponies are really friendly, and I like them, and they would make me smile every time I opened the app!"

    While it is important to listen to your users and learn from their message, it's absolutely critical to hear the intent behind what they say.

    While it's undoubtedly true that this one user would find your app much improved by the addition of a little Twilight Sparkle, that doesn't mean that actually adding an image of a My Little Pony is going to improve the experience for most of your users. Even if you got the same message from hundreds of people, you should still ask yourself: What underlying need is driving these requests?

    In this example, the suggestion itself gives some clues:

    • The user says that ponies are friendly. Does the user find the app unfriendly? Is there something about the color scheme, the typography, or the wording used in the app that could be changed to make it more inviting?
    • Ponies are lighthearted and energizing. Does the user feel that interacting with the app is burdensome or painful? Are there unvoiced friction points that could be smoothed over to make the app more enjoyable to use?
    • The user says they want to smile when they open the app. Although a pony may not be an appealing symbol for all people, is there another non-invasive way to greet the user when the app is launched?

    Find deeper answers through research and analysis

    Rather than view feature requests as a set of highly-divergent signals, it can help to try and group requests based on the underlying need that they speak to. If you see a lot of heat around your error conditions, consider reviewing the messages displayed in those situations to make sure they make sense to people not on the development team. If you have a lot of requests around your sign-up flow, perhaps it's time to do some cognitive walkthroughs with real users to identify friction points. If possible, make sure that your feature-request channel offers users the possibility to enter their contact information, as it is often helpful to follow up with them afterward for more context about why they are making the request.

    Rather than view feature requests as a set of highly-divergent signals, it can help to try and group requests based on the underlying need that they speak to.

    Keep the channels open

    More generally, make sure that your team gets opportunities to interact with users that aren't just through feature requests. Whether it's a help forum, a user study, a questionnaire on the download page, or some other channel, the more insight you can gain into what and how users think about your product, the better equipped you will be to prioritize user-facing improvements.

    If you want to talk through how your open-source security project learns about its users or prioritizes its feature requests, please get in touch!

  • Fostering Discussions Around Privacy

    This week we've been busy in New York City meeting with our advisors and co-hosting Art, Design, and the Future of Privacy. It was gratifying to see so many people turn out to discuss creative ways of approaching an issue that is dear to our hearts, and I know that I'm not the only one who was inspired by the work our speakers are doing. From Lauren McCarthy's crowdsourced relationships, to Sarah Ball's perspective from working as a prison librarian, and straight through to Cory's rousing call for hope and action in the era of peak indifference, the evening showed that the conversation about privacy is for more than just technologists and policy makers.

    The event was recorded, and we'll share that recording here and via Twitter as soon as we can. In the meantime, we want to hear about the conversations you're having in your town about privacy. Are you a designer or an artist working on projects that deal with the subject? Are you a software developer who has been inspired to build privacy-preserving tools, or a person who is curious and interested in learning more about what tools are already available? We want to hear about your experiences, so get in touch!

  • Nostalgia, Trust, and Brand Guidelines

    Last week Google unveiled a new logo as part of an updated brand identity. Professional typographic designers were swift to react. Tobias Frere-Jones, designer of Interstate and other widely-used fonts, said "I really hope this 'e' does not become a thing."

    Beyond professional designers, the New Yorker's Sarah Larson complained that Google "took something we trusted and filed off its dignity." The Google logo reaches the level of cultural commentary in a general-interest magazine because its use is so widespread.

    Logos in the Landscape

    As a point of historical comparison, in 1970 designer Saul Bass created a new bell logo for telecommunications company AT&T. When AT&T updated to Bass' bell logo they changed:
    • 135,000 Bell System vehicles
    • 22,000 buildings
    • 1,250,000 phone booths
    • 170,000,000 telephone directories

    Those numbers, taken from the description of the imaginative 1969 pitch video, capture what was the largest corporate re-branding effort of the time. Although a meandering twenty-six minutes full of dated cinematography, the video makes some still-relevant points about why companies change logos. Starting at 6:09 the narrator describes how logo changes signal to external audiences (customers) and internal audiences (employees) that the organization is a different kind of company with different values.

    Interpretations of what the new Google logo means range from enthusiasts seeing evidence of a company maturing and becoming more interested in design, to critics observing that the friendly, approachable letters could be a counter-measure against the company's Orwellian growth.

    Logos and Nostalgia

    A negative reaction to the new Google logo is understandable, because any change reminds users that those browser tabs and mobile app icons can be modified at will. When Larson demands Google "give us back our serifs," we're reminded that those serifs weren't ours to begin with, but were simply part of the landscape we passed through. Similarly, the Southern Bell pay phones emblazoned with Bass' logo were part of the landscape of my childhood, but they weren't mine to control.

    Southern Bell logo, used under fair use guidelines.
    Southern Bell logo (1970-1983) designed by Saul Bass.

    Strong emotional bonds fuel nostalgia for lost logos. Given the place that NASA (the United States' National Aeronautics and Space Administration) holds in the hearts of many Americans, it's not surprising that the organization's retired logo – which was part of a set of graphics standards created by design firm Danne & Blackburn – is still revered. A recent crowd-funded campaign to republish the 1975 NASA Graphics Standards Manual as a hardcover book ballooned past its goal the day it launched, even though those standards are available free as a PDF.

    NASA Graphics Standards Manual by Display Graphic Design Collection, used under CC-BY-NC-ND 2.0
    Page from the 1975 NASA Graphics Standards Manual.

    Danne & Blackburn's logo is more than the beneficiary of nostalgia. It's a powerful, simple design classic of extraordinary flexibility. The logo succeeds at sizes ranging from an icon on a business card to the exterior of the Hubble Space Telescope, where it continues to orbit the Earth today.

    Brand Guidelines for Correct and Incorrect Use of Logos

    The NASA Graphics Standards Manual is one example of brand guidelines. Unlike logos, where everyone is entitled to an opinion on whether they like or dislike the styling, brand guidelines go beyond style preferences to create an independent system by which design choices can be judged to be right ("on brand") or wrong ("off brand"). You may think that a particular logo is badly done or not to your taste, but still be able to conclude that it is used correctly if it follows the brand guidelines.

    Mozilla's brand guidelines are unusual because they allow use of any solid color. New York University's visual identity (PDF) is more typical in describing exact colors, sizes, positions, and how to use the logo correctly. The American Red Cross' brand standards include downloadable logos, which are important for reassuring participants of a high-quality experience when a community center hosts an official blood drive.
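    The shift from taste to correctness can be made concrete: a guideline turns a design choice into a checkable rule. Here is a toy sketch of that idea in code; the palette and function name are purely hypothetical, not drawn from any real brand manual.

```python
# Toy illustration: brand guidelines turn a matter of taste ("do I like
# this purple?") into a rule that can be checked mechanically.
# The palette below is hypothetical.
BRAND_COLORS = {"#57068c", "#ffffff", "#000000"}

def on_brand(color: str) -> bool:
    """A color is 'on brand' only if it appears in the approved palette."""
    return color.lower() in BRAND_COLORS

print(on_brand("#57068C"))  # True: case differences don't matter
print(on_brand("#ff0000"))  # False: red is off brand, however nice it looks
```

    A guideline like Mozilla's "any solid color" would simply replace the palette lookup with a different rule; the point is that either way there is a rule, not just an opinion.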

    Why Secure Communication Needs Brand Guidelines

    Brand guidelines ensure consistency when many different people are working on a product. This is an important component for building trust with end-users. It's crucial for secure communication projects in particular because lay users can't assess the underlying cryptography. Instead, they assess how trustworthy something is by the user experience, and consistent brand expression is a key part of that. As a counterexample, consider how a sloppily-implemented logo in an email can alert people to a phishing scam by signaling untrustworthiness.

    Logos and brand guidelines communicate trust, and giving mainstream users confidence that open-source secure communication tools are trustworthy is an important step toward driving adoption.

    Image of Southern Bell logo, used under fair use guidelines

    Image of NASA Graphics Standards Manual by Display Graphic Design Collection, used under CC-BY-NC-ND 2.0

  • Art, Design, and the Future of Privacy

    We're headed to NYC next week for our annual Advisors' Meeting. While we're there we're thrilled to be partnering with Dis Magazine to host Art, Design, and the Future of Privacy. If you're in the area, please join us; the event is free and open to the public.

    7:30pm, Thursday Sept 17

    Pioneer Works, Brooklyn

    Join cryptographers, critical theorists, architects, designers, sociologists, user experience researchers, and other luminaries for a casual evening discussing privacy, the culture of technology, and possibilities for creative intervention in the age of ubiquitous digital tracking. The conversation will be rich and fun, moving from the stage to the audience and ending the night with a party feeling and plenty of shared discussion.

    Event flyer

    Human rights meets design challenges
    Scout Sinclair Brody
    Scout discusses our collective obsession and fatigue with technology, and the rights and responsibilities of artists, designers, and clear-thinking technologists within this context. How can we all work to make technology better for us as individuals and as a society?

    Digital Privacy IRL: Architecture, public space, and its role in preserving online rights
    Moderated by Ame Elliott
    Noah Biklen and Sarah Gold examine the role of built space, public space, and spatial metaphors in the preservation of privacy and digital rights.

    If you build it they won’t care: Designing privacy-preserving technologies for people with other interests
    Moderated by Scout Sinclair Brody
    Tyler Reinhard, Ame Elliott, and Harlo Holmes discuss the deployment of “privacy-preserving technologies,” the role of design and critical engagement in this process, and the needed creative interventions that help these efforts resonate with the rest of us.

    Ask a Prison Librarian about privacy, technology, and state control
    Cory Doctorow interviewing Sarah Ball
    Cory talks with Sarah about the lives of people in prison, the fraught conception of “the private individual”, and the intersection between human rights, state control, and privacy.

    No, thank you: Agency, imagination, and possibilities for rejecting world-changing technological innovation
    Moderated by Meredith Whittaker
    Kate Crawford, Lauren McCarthy and Allison Burtch examine the role of human-centered approaches and critical discourse in the conception of “technology for social justice,” and speculate on the moves necessary to enable local communities (among others) to reject globally celebrated “disruptions.”

    Where to from here?
    Cory Doctorow
    Cory closes the evening with hopeful practicalities. Where can we direct our attention if we value privacy, have views on technology, and want to build more creative and relevant interventions?

  • Briar: Notes From An Expert Review

    Researchers who want to evaluate software interfaces have a number of tools at their disposal. One option for identifying obvious and significant problems is an expert review, which is often used to catch low-hanging fruit before performing any kind of user testing. Expert reviews employ usability heuristics, which systematically explore potential problems with a piece of software by applying patterns for good design.

    With some guidance from UX-research veteran Susan Farrell, we recently performed expert reviews of a few open source tools for encrypting communications. Each expert review included evaluations from me and at least one additional researcher; many thanks to Arne Renkema-Padmos, Robert Stribley, and Bernard Tyers for their work on this project. During the review we described issues and took screenshots to illustrate them. After prioritizing the issues by severity and picking our top 15, we compared our findings with one another and synthesized them into a single report.

    One of the tools we reviewed was Briar, an open source peer-to-peer communications application for Android. Briar uses a range of communications methods — Bluetooth, Wi-Fi, or Tor — to provide users end-to-end encryption for messaging.

    We picked Briar to review because the development team expressed readiness (and eagerness!) to get and incorporate feedback. You can access our full report here. Below are a few insights regarding visibility, an important element of successful user interfaces.


    An overarching issue that Briar has, which it shares with a number of applications in the FLOSS secure tools space, is a lack of visibility into system and messaging status. Researchers found it unclear how Briar was connecting to the network at a particular moment in time – was it via Bluetooth, Wi-Fi, or Tor? – making it hard to troubleshoot when a connection was not working. Status icons do not make it clear when Briar is running versus when Briar is actually connected.

    Additionally, Briar does not yet do a thorough job of indicating when a message has been delivered. Because Briar can only deliver messages when a user is online, it can be hard to tell whether a message has made it through to the recipient. Briar also does not display an icon on its main screen to indicate when new messages have arrived.

    Screenshot of Briar mobile interface
    Briar's main screen currently does not provide a flag to make visible when new messages have arrived, requiring the user to dig into 'Contacts' or 'Forums' to discover them.

    Visibility is a crucial principle of usable design. Users need indications that they are correctly understanding the status of the system, that it has changed, or that they need to take action. While we were successful at using Briar to get a message to friends in the same room (the case which we tested), a clearer picture of where messages were in transit would have helped us better understand when and why we were having trouble.
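    One way to make delivery status visible is to model it as a small set of explicit states, each with a label the interface can show next to a message. The sketch below is hypothetical (it is not Briar's actual code, and the state names and labels are ours), but it illustrates the kind of distinctions a peer-to-peer messenger could surface.

```python
from enum import Enum, auto

# Hypothetical sketch: explicit delivery states a peer-to-peer messenger
# could expose in its UI, so users aren't left guessing where a message is.
class MessageStatus(Enum):
    QUEUED = auto()     # stored locally; no transport available yet
    SENDING = auto()    # a transport (Bluetooth, Wi-Fi, or Tor) is being tried
    SENT = auto()       # handed off to the network
    DELIVERED = auto()  # acknowledged by the recipient's device

def status_label(status: MessageStatus) -> str:
    """Map each state to a short label shown next to a message."""
    return {
        MessageStatus.QUEUED: "Waiting for connection",
        MessageStatus.SENDING: "Sending…",
        MessageStatus.SENT: "Sent",
        MessageStatus.DELIVERED: "Delivered",
    }[status]

print(status_label(MessageStatus.QUEUED))  # prints: Waiting for connection
```

    Even this small distinction between "sent" and "delivered" would answer the question our reviewers kept asking: has my message actually made it through?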

    See the full report on our review of Briar here.

  • Usability and Security: Not Binary Properties

    People who think about computer security for a living sometimes cringe when they read about the subject in the popular press. Security is a complex and nuanced topic, and it’s easy to make assertions that don’t hold up to careful scrutiny.

    One basic-but-unintuitive principle is that security is not a binary property: in the absence of other context, it’s hard to definitively say that a particular system or piece of software is “secure” or “insecure”. We can only say that a system is secure against a particular threat, or – more usefully – against a collection of threats, known as a “threat model”.

    Justitia, Tehran Courthouse.  Image CC BY-SA 3.0, Abolhassan Khan
    Justitia, Tehran Courthouse.

    For example, some people might say that using a VPN while browsing the web from a coffee shop is “secure”, because it prevents the jerk across the street with a cantenna from listening in and seeing what websites you go to. But if your threat model includes listeners with devices housed with internet service providers (or a government that operates VPNs), you might instead refer only to an option like Tor as “secure”.
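    The idea that "secure" is always relative to a threat model can be sketched in a few lines of code. This is a deliberately simplified toy, not a real security assessment; the tool names and threat labels are illustrative only.

```python
from dataclasses import dataclass

# Toy model: a tool is "secure" only relative to a threat model,
# i.e. only if it mitigates every threat in that model.
# Tool names and threat labels are illustrative, not real assessments.

@dataclass(frozen=True)
class Tool:
    name: str
    defends_against: frozenset  # threats this tool mitigates

def secure_against(tool: Tool, threat_model: set) -> bool:
    """True only if the tool covers every threat in the model."""
    return threat_model <= tool.defends_against

vpn = Tool("generic VPN", frozenset({"coffee-shop eavesdropper"}))
tor = Tool("Tor", frozenset({"coffee-shop eavesdropper", "ISP-level observer"}))

casual = {"coffee-shop eavesdropper"}
stronger = {"coffee-shop eavesdropper", "ISP-level observer"}

print(secure_against(vpn, casual))    # True: the VPN covers this model
print(secure_against(vpn, stronger))  # False: the ISP-level observer is not covered
print(secure_against(tor, stronger))  # True
```

    The same tool comes out "secure" or "insecure" depending only on which threat model you hand it, which is exactly the point.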

    As someone who has spent a lot of time thinking about security, it’s tempting to dismiss things as “insecure” when they don’t protect against the threats that I’m personally concerned about. Go too far down that path, though, and we find ourselves in a world where only the products that protect against the most extreme threats are considered acceptable. As with transportation safety and public health, we have to recognize that getting people to adopt a “good enough” solution – at least as a first step – is usually better than having them not change their behavior at all. In other words: it’s important to not let the perfect be the enemy of the good!

    Just as security is not a binary property, it’s also important to not think of usability as an all-or-nothing game. Design thinking encourages us to ask not just whether humans in general find a piece of software usable, but to explore 1) the circumstances in which different groups of users might be motivated to use the software and 2) the needs that a piece of software must meet in order to sustain that motivation.

    I think that this distinction is particularly important for software developers to bear in mind. It’s easy to get discouraged when someone tells you that the code you’ve slaved over “isn’t usable”. (Or to get defensive – after all, there are plenty of people who seem to find it usable enough, or there wouldn’t be anyone to file all those feature requests.) I challenge you instead to dig deeper and try to understand exactly what the user found frustrating about their experience, and what expectations they had that may be mismatched against the assumptions you made in designing it.

    Just as we can only say that software is “secure” against certain threats, so too must we define “usability” as a function of particular users with particular needs, backgrounds, and expectations. Working to understand those users will ultimately help our community build better software.

  • Design Thinking

    The latest Harvard Business Review (paywall, but with limited free content) has two articles about design thinking that are relevant for teams working on security and privacy: Design for Action by Tim Brown and Roger Martin and Design Thinking Comes of Age by Jon Kolko. These articles describe how design thinking has moved beyond creating tangible products and on to supporting collaborative design of complex systems. They give an overview of design thinking’s evolution, from its roots in Herbert Simon’s The Sciences of the Artificial, through Richard Buchanan’s Wicked Problems in Design Thinking, and into addressing challenges for domains far outside areas historically considered “design.”

    Each article offers an easy-to-understand list: one presents problems, the other offers solutions.

    Getting Past Common Criticisms

    Brown and Martin highlight how design thinking can facilitate organizational change – including stakeholder buy-in – that helps teams get past common criticisms of work in progress.

    • This doesn’t address the problems I think are critical.
    • These aren’t the possibilities I would have considered.
    • These aren’t the things I would have studied.
    • This isn’t an answer that’s compelling to me.

    – Common negative reactions from Design for Action, by Tim Brown and Roger Martin

    These criticisms occur in many contexts, including secure communications, and although the example in the article is of a CEO critiquing a consultant, the criticisms will probably be familiar to open-source developers too.

    Principles for a Design-Centric Culture

    Kolko’s article describes principles of a design-centric culture that help teams get past traditional criticism and get unstuck.

    • Focus on users’ emotional experiences
    • Create models to examine complex problems
    • Use prototypes to explore potential solutions
    • Tolerate failure
    • Exhibit thoughtful restraint.

    – From Design Thinking Comes of Age by Jon Kolko

    Thoughtful restraint is particularly tough for open-source efforts because the collaborative nature of decision-making can lead to compromises resulting in an ever-longer feature list rather than necessary editing. For an example of thoughtful restraint in action, check out Open Whisper Systems’ Development Ideology: “The answer is not more options. If you feel compelled to add a preference that's exposed to the user, it's very possible you've made a wrong turn somewhere.”

  • Empathy In The Real World

    As a practitioner of Human-Centered Design, empathy is a core skill in the work I do. In No Flex Zone: Empathy Driven Development, Duretti Hirpa writes about how empathy can be a competitive advantage.

    “We build software for all kinds of people, and empathy helps us to connect to these disparate audiences. We have to choose empathy, but I’d argue, it’s undeniably the ‘one weird trick’ to future-proofing the software engineering.” – Duretti Hirpa

    Simply Secure chooses empathy, and we believe that understanding the lives of end-users is an essential element in building empathy for them. Here are some security-focused resources for building empathy. These are useful not only because they explain use scenarios for different technologies, but because they paint vivid pictures of users’ priorities and motivations.

    Swift on Security writes A Story About Jessica, a fictionalized 17-year-old interested in biology and her boyfriend. She is currently worried about getting a scholarship for college and about being evicted from the apartment she shares with her mother. To make matters worse, Jessica unwittingly infects her hand-me-down laptop with spyware, trusting the reassuring “protected” message she sees from her anti-virus software. This story is consistent with the finding shared by Iulia Ion, Rob Reeder, and Sunny Consolvo that non-experts rely on anti-virus software for all their security needs.

    Eleanor Saitta gives detailed Real World Use Cases for High-Risk Users, explaining how applications like Facebook are essential emotional supports for vulnerable people, and how Facebook over Tor can help keep them safe. Details like a controlling husband making his wife get rid of her phone after a cross-country move can help tool developers build empathy for people who would benefit from their services. They can also make “compromises,” such as interoperating with services like Facebook that may be problematic from a security point of view, more palatable to technologists.

    Simply Secure fellow Gus Andrews’ User Personas for Security and Privacy build on work from Saitta and others to share personas from around the globe, important for helping developers make technologies that are accessible to people in contexts different from their own. The example of a human-rights activist from the Democratic Republic of Congo contains some helpful nuance – for example, that she needs “to take a break from the stress of worrying for her safety and meeting with victims of violence.” Social technologies are one way that people escape and relax, so this activist may get important emotional benefits from using “insecure” applications.

    Working on secure communications is important because we have the chance to improve people’s lives by making critical tools meet their needs more closely. As Saitta writes about a woman living in a shelter’s Facebook use, “That account has been her one lifeline to contacting people, and not only is it crucial for her to be able to access it, it's been an emotional lifeline for her for years. Losing access to it radically lowers her long-term chances of not only getting to safety, but also of living a happy life later on.”

    Doing user research is a powerful and efficient way to build empathy, and I encourage everyone to spend time in the field, talking with people about their lives and how technology fits (or doesn’t fit) into them. One starting point when having such conversations is the USDS 18F Method Cards, which are an open-source resource for structuring user research. The cards on bodystorming and user interviews from the Discover section are particularly relevant for gaining insight that helps establish an empathetic connection with users.

  • Missing Trouble: In Memoriam

    This week we are marking the sudden passing of our Operations Manager, Nóirín “Trouble” Plunkett, who introduced themselves here just a few short months ago. We are heartbroken, and it has been hard to come to terms with this unexpected loss. Ame and I attended a memorial service in their honor this week in Boston, and we have been reading the multitude of memorials that have been posted online (including this one by Kaia Dekker, which deeply resonated with us).

    While we did not know Trouble for very long – our organization is new, and they started with us in March – their impact was immeasurable. They were a force of organization (of financials and file systems) and a source of productive rhythm (for newsletters, blog posts, and social media). They proofread our writing, cheered us on in our outreach, and lightened our work with wit and good humor. They wrangled our HTML, managed our Git repo, and had a talent for finding beautiful open-licensed images (Ame often joked that Trouble seemed to be using a better internet than she was).

    Trouble was also a vocal advocate of openness, and one of its greatest champions as we work to integrate the open-source ethos into our collaborative design efforts. In their short time with our new organization, they helped us create a foundation of honesty, precision, and compassion for the work we do. We will honor their memory by building on their contributions, maintaining a rigorous commitment to openness, and methodically trying to make the world a better place, one project at a time.

    There are no words that can adequately express our sadness or our appreciation for the short time we shared with Trouble. They were an irreplaceable treasure, and we miss them sorely.

  • Kids’ Online Privacy: SOUPS Conference Keynote

    Last week I went to the SOUPS conference in Ottawa. As a first-time attendee, it was a good opportunity to connect with some members of the academic usable-security community. One of the highlights was keynote speaker Valerie Steeves.

    Drawing on her Young Canadians in a Wired World research, Steeves reported results of an in-depth study of 5,436 Canadians in Grades 4-11. Based on a survey and in-person discussions, she shared sobering findings that kids’ expectations of online privacy are not being met. Alarmingly, 68% of respondents agreed with the incorrect statement that “If a website has a privacy policy, that means it will not share my personal information with others.”

    Young Canadians’ Life Online

    2013 Summary of Young Canadians’ Online Behaviors.

    Steeves also explained specific ways that the corporate, for-profit internet is harmful to children. She particularly called out commercial surveillance – for example, Club Penguin’s rules for policing other community members – as harmful for reinforcing gendered stereotypes and setting kids up for conflict with each other.

    Using quotes, she captured the frustration young people felt at being forced to consent to privacy practices they don’t want in order to participate with their friends.

    “If we had a choice to say no, I would chose no. We can’t, or else we can’t go on the thing…” – Young Canadians in a Wired World study participant on agreeing to undesirable website terms.
    Being able to spend time online with friends is tremendously important, so the participants were repeatedly willing to make privacy compromises to be able to participate.

    The qualitative research shared in this presentation is a powerful motivator for giving people more control over their privacy and has an important role in informing design directions.

  • Behind-the-Scenes: Emerging Conversations from Slack

    Thank you to everyone contributing to the Simply Secure Slack channel. If you’re interested in joining, email for an invitation. I’m especially eager to get more UX people in privacy and security involved, so spread the word. Here are some highlights from our recent Slack conversations.

    Sharing the Rationale for UX Decisions

    Check out Gabriel Tomescu’s The Anatomy of a Credit Card Form sharing the Wave design team’s process for arriving at an elegant, easy-to-use form. It includes a quote that spoke to me, “Given the existing mental model of paying with credit cards online, we felt the presence of one lock icon was sufficient.” Indeed.

    Subtle improvements to Wave’s credit card form

    Subtle improvements to Wave’s credit card form

    Communicating Technical Benefits vs. User Benefits

    Stewart Butterfield wrote We Don’t Sell Saddles Here, which speaks eloquently to selling the benefits of horseback riding, not saddles. A technically savvy crypto audience will happily geek out about the details of different saddles. Meanwhile, everyday computer users are still puzzling: “This helps me ride a horse? But why? And how does this help?”

    Security: Cuddly and Fierce

    Tunnel Bear’s brand is more about horseback-riding than saddles. Their website doesn’t lead with “VPN” to describe what it is. Instead of shields, locks, or keys they use bears. Bears!

    Tunnel Bears are approachably cuddly, but also fierce

    Tunnel Bears are approachably cuddly, but also fierce

    Look for me at SOUPS in Ottawa this week. I’ll be presenting a lightning talk, “Security is Not Enough: Design for Security Engagement,” on Thursday afternoon. I’d love to chat if you’re there.

  • Closing the Participation Gap: HotPETS Presentation Summary

    I really enjoyed being part of the emerging-work track, HotPETS, at the Privacy Enhancing Technologies Symposium earlier this month. From meeting lots of great people to getting face-time with the Simply Secure team, Philadelphia was fun.

    Scout and I presented “Human-Centered Design for Secure Communication: Opportunities to Close the Participation Gap” as part of a session on Privacy and Human Behavior. The session also included some nice qualitative work from Tactical Technologies covering the collaborative and social nature of privacy and ethical implications for researchers working with vulnerable populations.

    The HotPETS presentation shared emerging findings from my Listening Tour — a series of semi-structured interviews reporting on perceptions and opportunities for security and privacy.

    HotPETS presentation on Human-Centered Design for Secure Communication

    The Listening Tour is a series of conversations — 27 so far — with designers, cryptographers, researchers, entrepreneurs, activists, and other potential members of Simply Secure’s community. This activity is part of a Human-Centered Design process to understand the needs and priorities of the stakeholders we serve.

    The biggest surprise from the tour so far has been how poorly the phrase “secure communication” is understood outside the security community. The entrepreneurs and designers I spoke with at professional events — people with no particular interest or awareness of security concerns — guessed “secure communication” to be something related to anti-doxxing efforts, bitcoin, or specialized tools for doctors and lawyers. There’s definitely work to do in bridging this gap.

    Stay tuned for more as the Listening Tour progresses, but emerging opportunities we have identified so far for closing the participation gap are 1) motivating lay-user adoption and 2) creating a shared vocabulary. The need for a shared vocabulary between designers and cryptographers resonated particularly well in the post-presentation discussion at HotPETS. We’ll be thinking more about how language can smooth collaboration and improve the accessibility of secure communication.

  • Lessons from Architecture School: Part 3

    This is the third and final installment in the series on Lessons from Architecture School: Lessons for IoT Security. You can also read the first and second installments, or download the presentation. Thank you to the audience at Solid Conference for good questions and lively discussion.

    Homes Are More Than Houses

    Shop houses are a type of vernacular architecture built throughout Southeast Asia. Vernacular architecture is built using folk knowledge and local customs, typically without the use of an architect.

    Shop houses, Singapore, by Peter Morgan, used under CC-BY-NC-ND
    Shop houses, Singapore.

    Shop houses are traditionally two levels with commercial space on the ground floor and a residence above. A typical feature is an awning protecting the street from sun and rain. Local custom, which became law in Singapore in the 1800s, is that the owner maintains the awning over a public passage or sidewalk, creating an interesting interplay between personal responsibility and the common good.

    Despite widespread familiarity with the building type, there are better and worse examples of vernacular architecture: even with access to good precedents, not everyone does a good job with implementation. For example, an awning may be both legally required and obviously a good idea, and still be leaky or badly constructed.

    Different types of knowledge — and different types of professional expertise — are necessary to make a successful building, just as they are for making successful security.

    Security Thought-Starter

    Don’t roll your own crypto. It’s easy to create a code that you yourself can’t crack but that is trivially easy for a professional to break. Recent attacks on the Open Smart Grid Protocol show that creating a home-grown cryptographic solution leaves big vulnerabilities. Working with standard cryptographic libraries is one way to make sure your applications use best-in-class security. Using open-source libraries also means that you (or outside experts) can validate the crypto. One venue for learning more is the Real World Crypto Conference, next held in Stanford, CA in January 2016.
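    To make the advice concrete, here is a minimal sketch of leaning on a vetted, standard construction instead of inventing your own integrity check, using nothing but Python's standard library. The key handling and message contents are illustrative only; a real deployment needs proper key management.

```python
import hashlib
import hmac
import secrets

# Sketch: use a vetted construction (HMAC-SHA256) from a standard library
# rather than a home-grown integrity check. Key handling here is
# illustrative only.
key = secrets.token_bytes(32)  # random 256-bit key, shared out of band
message = b"meter reading: 42 kWh"

# Sender attaches an authentication tag computed over the message.
tag = hmac.new(key, message, hashlib.sha256).digest()

# Receiver recomputes the tag and compares in constant time -- a naive
# `==` comparison is exactly the kind of home-grown shortcut that can
# leak timing information.
expected = hmac.new(key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)

# A tampered message fails verification.
forged = b"meter reading: 9000 kWh"
forged_tag = hmac.new(key, forged, hashlib.sha256).digest()
assert not hmac.compare_digest(tag, forged_tag)
```

    The design choice to highlight: every primitive above (`hmac`, `sha256`, `compare_digest`) is standardized and widely reviewed, which is precisely what a home-grown scheme gives up.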

    UX Consideration

    Exposing underlying systems can teach behavior. When electric cars first reached a mass audience, new dashboard interfaces educated drivers on the basics of how these unfamiliar systems work. Many Prius drivers didn’t know how internal combustion engines worked, which meant an explanation only in terms of difference would not be meaningful. Instead, UX design taught a bunch of people how to think about their car’s power source.

    Toyota Prius dashboard, by It's Our City, used under CC-BY (cropped)
    Toyota Prius dashboard

    The design decisions in electric car dashboards have changed drivers’ behavior by helping them understand how the system works. Similarly, there is a huge opportunity for designers to create new interfaces to help people communicate securely. For example, simple visualizations of how the internet works could change users’ messaging behavior to become more security aware. The challenge is to show only appropriate complexity and not overwhelm the user with extraneous detail, just as the UX designers for the Prius selected only a few pieces of information that directly respond to changes in driving conditions.

    Image of Shop houses, Singapore, by Peter Morgan, used under CC-BY-NC-ND

    Image of Toyota Prius dashboard, by It's Our City, used under CC-BY 2.0 (cropped)

  • Niaje! Introducing Maina

    I’m Maina, and I'm excited to start as a Senior Fellow at Simply Secure. Prior to this fellowship, I conducted research at the Center for Advanced Security Research Darmstadt and the Technische Universität Darmstadt. Using both quantitative and qualitative research methods, I focused on the usability of verification in Helios, an end-to-end verifiable, open-source, remote electronic voting system. Previously, I taught several undergraduate courses, including human-computer interaction and computer security. My undergraduate degree is in computer science, and for my master’s degree, I investigated secure protocols for mobile phone voting.

    My previous research had two goals: first, to investigate the usability of verification in Helios, and second, to investigate whether voters are motivated to take up opportunities to verify their vote. With respect to the first goal, my colleagues and I conducted a cognitive walkthrough and identified obstacles to voter verification in Helios. We proposed improvements, which were integrated and tested in a lab user study. The findings showed that voters were able to perform verification with the improved Helios. We also found, using surveys, that despite the usability improvements, voters lacked the motivation to take up verification opportunities. With respect to the second goal, we identified that voters trust the people and processes involved in (paper-based) elections, which suggests why they are insufficiently motivated to take up verification opportunities. Consequently, we designed motivating messages to increase voters’ intention to verify.

    At Simply Secure, I want to continue to focus on user motivation, and apply it in the context of secure communication. I want to understand user motivation and how it can better guide secure behavior, focusing specifically on email. My research will concentrate on these three questions: (i) To what extent does a lack of user motivation contribute to the low adoption and use of secure email behaviors?, (ii) How can we motivate users to adopt such behaviors?, and (iii) How can security motivation be integrated effectively into the design of email interfaces? I will focus on the link between user motivation and adoption, and test the impact of motivational interventions. The output of my research should recommend how to increase the uptake of secure email behaviors for the general population.

    Niaje, pronounced 'ni-a-dʒei', means "what's up" in Swahili.

  • Hello Joseph and Kat

    You’ve already met Gus, and we’re looking forward to introducing you to Maina, the other Fellow that Simply Secure is hosting under the auspices of Open Tech Fund’s Secure Usability Fellowship Program.

    Ours are not the only SUFP fellows, however – the EFF has been hosting Joseph Bonneau since the start of this year, and Kat Krol started recently as a SUFP Fellow at University College London. We hope to share more about their research later in the year, but in the meantime, here are their introductions, in their own words! (And remember, if you’d like to catch up with any of the Fellows, share your work with them, or ask about what they’re up to, you can find them all on our Slack channel. Email us for an invite!)

    Joseph Bonneau, SUFP Fellow at EFF

    I'm Joseph Bonneau and I'm a Secure Usability Fellow working at both the Electronic Frontier Foundation and Stanford. I started in February 2015. My main goals are to improve the state of the art of secure messaging with user-centered cryptographic architectures. In particular, on the EFF side I'm working on improving the EFF's Secure Messaging Scorecard and starting next steps of the Campaign for Secure and Usable Crypto. At Stanford I'm doing technical work on the CONIKS project to build user-verifiable public key directories for secure messaging tools.

    My background is in cryptography and computer security. I earned my PhD from the University of Cambridge as well as BS and MS degrees in Computer Science from Stanford. I've worked at Google, Yahoo, and Cryptography Research, Inc. and last year I was a fellow at Princeton University's Center for Information Technology Policy. My research has spanned many topics including passwords and user authentication, cryptocurrencies, HTTPS and web security, privacy in social networks and side-channel cryptanalysis. I also taught the first courses on cryptocurrencies in the past year (both in-person at Princeton and an online MOOC). In my spare time I enjoy the outdoors, triathlons and pub trivia.

    Kat Krol, SUFP Fellow at UCL

    My name is Kat Krol. I'm a final year PhD student at University College London, UK. In my PhD research, I look at the role of effort in users' security and privacy decisions online. I'm passionate about conducting user studies of various kinds, always aiming to combine quantitative data with qualitative feedback from participants.

    Today, design and computing are all about fulfilling users' every need and want, providing interactions that are pleasurable and seamlessly integrate with their lives. Security is absolutely at odds with this – it disrupts users' natural workflow, asks them to heed every warning, check URLs and create complex passwords. The aim of my research is to contribute towards usable security and privacy that are contextually sensitive to human capabilities, needs and preferences.

    During my fellowship, I will be focusing on tools for secure instant messaging, looking at their usability and adoption. There is so much technically excellent encryption software out there that has not been widely adopted due to poor usability and/or a mismatch between what the technology offers and what users need. I'm excited about the first steps, in which I will be conducting focus groups with users of selected messaging apps to learn about their perspective.

  • All Your Base Are Belong To Gus

    Hi, everyone! I’m Gus. I am pleased to be joining Simply Secure for a one-year fellowship.

    For the past year and change I worked for the Open Internet Tools Project, where I pioneered their work on security usability. OpenITP being an open source organization, I had the great joy of doing all my work in public, which means everything we did is still online and publicly available. Among the things I did:

    • worked to help the community share best practices in usability;
    • ran usability-focused hackathons;
    • held a monthly meeting of security trainers (which is still ongoing!);
    • held workshops to develop security-focused user personas and visual assets;
    • ran user tests on a number of tools;
    • analyzed tool-building projects’ data on their downloads;
    • and wrote analyses of the field.

    My fellowship at Simply Secure will have two parts:

    One part of my job this year will be continuing to do usability work for various secure-tools projects, in a more focused way. I will work with particular projects to identify what their usability needs are and develop solutions specific to their stage of development. These might include design workshops, metrics analysis, expert review, or more user testing.

    The other part of my job will be a more overarching research project that might be useful to a number of tool developers, as well as to the broader community of security trainers, usable-security researchers, and digital literacy educators. Building on the methodology already developed by Arne Renkema-Padmos, I will work with a handful of researchers to assess users’ mental models of how the Internet works. Users will draw out diagrams of what they understand, with “scaffolding” provided by images of Internet elements with which they may be familiar (browsers, routers, etc.). We will then analyze these diagrams to identify patterns in misconceptions, as well as what users already understand.

    Outside of work, I produce The Media Show, a YouTube series about media literacy and digital literacy which I began while writing my dissertation at Teachers College. Our latest series of episodes answers questions about media and technology drawn from Google Autocomplete — meaning many people have asked them. We’ve got episodes in the works on how ads know your location, how the Internet crosses the ocean, and how hackers find out your passwords. Previously, we’ve done episodes on how spam ends up in your email and how search engines work. Follow our progress on YouTube.

    I look forward to continuing to work with folks in this space!

    The Media Show's puppets explain how search engines work

  • Lessons from Architecture School: Part 2

    This continues Part 1 of a series of posts drawn from a talk I gave at O’Reilly’s online conference Experience Design for Internet of Things (IoT) on “Lessons from Architecture School for IoT Security.” You can find the slides for the original talk here. The talk encourages designers to think about security and outlines some ways UX design can support privacy in IoT applications.

    When designing IoT applications for the home, we can take advantage of how much time we spend there by looking critically at the unspoken assumptions homes reveal. Living in a house is something we all unconsciously understand how to do, having learned from watching those around us before we could talk. The home is a rich environment from a cultural anthropology perspective, in part because it encodes tacit knowledge about the people who live there.

    Understanding Unspoken Needs

    Looking at Finland’s Hvitträsk, a home and architectural studio built in 1903 by Herman Gesellius, Armas Lindgren, and Eliel Saarinen, reveals extensive use of Jugendstil, or Art Nouveau, decor mimicking forms found in nature. Hvitträsk reflects the cultural context of its construction, when Finnish Nationalism was rising as Finland sought to establish an identity distinct from neighboring Sweden and from Russia, which administered Finland at the time. Nationalism and Romanticism are values that can be decoded by looking carefully at the environment. The combination rug/blanket references coverings used for sleigh rides, and the stained glass figures reference the Kalevala, the Finnish national epic poem. These design choices reflect an emerging national identity.

    Hvitträsk, built 1903, boyhood home of architect Eero Saarinen.

    Hvitträsk was the boyhood home of Eero Saarinen, well-known as the architect of what was then called the Trans World Airlines Flight Center, which is still in use by JetBlue passengers as T-5 of John F. Kennedy airport in New York City. Just 59 years separate the construction of Hvitträsk and the airport, but those years saw sweeping technical advances, from horse-drawn sleighs to commercial airplanes. One of the unspoken needs of buildings is to endure, and buildings – unlike many forms of IoT hardware – are upgradeable. Buildings are expected to last much longer than the 18-month lifespan of a device designed to become obsolete.

    JFK Airport T-5, built 1962 by architect Eero Saarinen.

    Security Thought-Starter

    Buildings’ long lifespans challenge IoT security paradigms. There’s an inherent tension between the enduring quality of building hardware and the difficulty of keeping connected devices secure over time. Sources including the IBM Institute for Business Value caution that committing to connected building infrastructure, such as smart doorknobs with 20+ year lifespans, carries risks because a smart doorknob needs to be maintained and kept up to date against security threats unknown at the time it was built. Designers need to think critically about the path for upgrading firmware in order to reduce the risk of IoT devices becoming out of date and vulnerable to new security threats. Supervisory Control and Data Acquisition (SCADA) systems in industrial contexts have long been criticized as insecure, so designers have a chance to learn from that experience and encourage thinking about security as those systems are adapted to home use.

    Routers are one example of infrastructure with a long lifespan: many are still keeping the internet of ten years ago alive, often with numerous security vulnerabilities.

    Detail of a router firmware update dialog box.

    UX Consideration

    One of the simplest ways to protect a computerized system is to install software updates that include security patches, but users often view updating software as unpleasant and disruptive. I call on designers to re-imagine software updates as a moment for positive user engagement and behavior change. Successes in chronic disease management, financial planning, and other difficult topics show that design can change behavior. Let’s translate those successes into information security.

    There are formidable challenges to re-inventing something banal – people unthinkingly rush to dismiss dialog boxes unread – but new interfaces for explaining underlying security systems have the opportunity to create positive change.

    Image of Hvitträsk, by David Casteel used under CC-BY-NC-ND 2.0

    Image of T-5, JFK Airport, by Sean Marshall, used under CC-BY-NC 2.0

    Image of OpenWRT router firmware update by Fabian Rodriguez, used under CC-BY 2.0

  • Lessons from Architecture School: Part 1

    This is the first in a series of posts pulled from a talk I gave at O’Reilly’s online conference Experience Design for Internet of Things (IoT) on “Lessons from Architecture School for IoT Security.” The talk is a call to action for designers and non-technical people to get involved — with us at Simply Secure or elsewhere — in the worthy problems of experience design for IoT security. I want to encourage more people to think about security and to outline some ways UX design can support privacy in IoT applications. You can find the slides for the original talk here.

    Many thanks for all the positive comments from the online audience for the talk. I’m glad to see that a few of the participants joined our Slack channel. You should join too — by emailing — if you’re interested in being part of an emerging conversation about security, privacy, design, and more.

    Architecture School

    I studied architecture. Not systems architecture, but actual building architecture. The time I spent in the Colleges of Environmental Design in Boulder, Colorado and Berkeley, California shaped how I think and how I approach problems. No, I can’t tell you if your deck will fall down, and although I may have an opinion on your kitchen remodel, that’s not part of my professional education.

    My background is much closer to what we today call “design thinking” or even “Lean Startup” methods. Architecture school teaches problem finding, rather than problem solving, and it’s great preparation for work on many kinds of complex systems. There are many elements of a studio-based architectural education that make it useful for thinking about security, including rapid prototyping and getting feedback during critiques. However, the most relevant quality is how architecture school teaches new ways of seeing how buildings work and how people inhabit them. I’d like to share some of the lessons from architecture school that are applicable to security.

    Calling Designers, IoT Security Needs You

    Anyone working on a connected-home application is also in the data-collection business, as connecting and collecting go hand-in-hand. Bruce Schneier’s recent Guardian article includes examples that speak to designers’ priorities on crafting user experience: Samsung TVs listening in on nearby conversations and Mattel re-selling children’s questions to Hello Barbie dolls. As a designer, I understand the motivation for some of the choices that eventually led to privacy problems, and that some designers haven’t had exposure to the privacy implications of their designs. The desire to create convenient, positive interactions for users can have unintended consequences, and designers have a role to play in safeguarding end-user needs. Beyond that, designers have the power to make conversations about privacy richer than the tension between what’s technically possible and what’s legal. What’s desirable? What’s delightful? Security for IoT needs design.

    Here’s one lesson from architecture school for designing IoT applications with privacy in mind.

    Start with People, In Context

    Architecture is concerned with creating spaces for people to experience. As a Human-Centered designer, I encourage everyone to get out into the field as much as possible to understand the context. People working on domestic IoT applications should go and meet with people in their homes to get a deeper understanding of how their products fit into the rhythms of home life. But even without doing in-context interviews, looking at examples of how people inhabit space can inform the design of appropriate technology. Most of us spend quite a bit of time in buildings, and critically examining how people inhabit buildings can lead to the design of more Human-Centered IoT products.

    To be deliberately provocative, I’ve selected images for this series that feel historical and far-removed from the current hype around IoT. This one is from the Dutch Golden Age.

    <img src="" alt="Pieter de Hooch, 1670. 'Man Handing a Letter to a Woman in the Entrance Hall of a House.' From the Rijksmuseum."/>

    Pieter de Hooch, 1670. “Man Handing a Letter to a Woman in the Entrance Hall of a House.” From the Rijksmuseum.

    Looking at this painting with the eyes of an architect, we can see how this domestic scene reveals the values of Dutch society in the 1600s. There are no window coverings, highlighting a social convention around transparency. There are also multiple people visible, each of whom has different privileges in the home. The man delivering the letter could be a guest, who would only have access to the entrance hall or certain semi-public areas of the home. There’s also a dog and a child. Children are an interesting case in the modern context because they are not able to consent to the collection of their data in many situations.

    One method for doing design research is to seek inspiration from observation of extreme users, or people outside the mainstream who are likely to invest time in creating work-arounds to make technology fit their needs. Extreme users can be helpful participants with well-articulated worldviews, whose experiences can inform designs that work well across the entire spectrum of users. Designing to accommodate the needs of a child in a European country — who is protected from unwanted data collection by local regulations — can result in a user experience that works well for all kinds of people, including those eagerly opting in. We can expect additional regulatory constraints around children’s data in the future, so giving attention to these issues now can also help designers future-proof their product.

    UX Consideration

    Profile managers are examples of interfaces that can contribute to future-proofing by allowing people to opt in or out of data collection.

    Netflix profile manager

    Video-streaming service Netflix has an explicit interface for asking viewers to sign in with a profile to interact with their system, but what about other people who may also be watching? There are examples of using mobile phones to passively interact with video viewing systems, such as logging everyone in a group who may be watching at once, but without an explicit login moment, users have no ability to safeguard their privacy by opting out.

    Watching videos is a screen-based interaction, but the issues become more complex with ambient systems. We don’t yet have best practices around signing in to ambient systems where an explicit login at a console may not be appropriate. How can we as designers create appropriate interfaces to give users control of ambient IoT systems?

    Security Thought-Starter

    There is no single place to turn for guidance on this issue, but the European Article 29 Working Party is one starting point for understanding privacy protection and data collection mechanisms. If you’re a developer or entrepreneur thinking about collecting data in the home, keep in mind that there need to be ways for people to opt out, whether they’re guests or residents, adults or children.

    Plan for change, and don't take on privacy debt in a quickly-changing landscape.

    Planning for the ability to opt out of collection by ambient devices can help future-proof emerging design work.


    To learn more about emerging challenges in IoT interfaces, check out the recently released Designing Connected Products by Claire Rowland, Elizabeth Goodman, Martin Charlier, Ann Light, and Alfred Lui.

  • What We Look For in a Software Partner

    As we gear up to start collaborating with open-source software projects, there are a bunch of things we have been pondering. There are a lot of compelling projects out there that we’d love to work with, but we need some criteria to choose which ones to focus on first.

    So, we’ve drafted a set of questions to ask about a software project and the team that develops it. As the document notes, these questions are not a quiz to judge the worthiness of projects or the people who work on them. Many questions have no “right” answer, and are included to help us ultimately foster a diverse portfolio of projects aligned with our core goals.

    The questions boil down to the following high-level issues:

    • What does the software do?
    • How is it built, and how is it licensed?
    • At whom is it targeted, and who actually uses it?

    Again, there are no right or wrong answers to these questions, but we do have some ideas about what types of tools and teams we want to work with first. To start, we are particularly focused on tools that enable secure communication – multi-way data exchange among end users – although we’re also interested in knowing about tools that perform related functions. Furthermore, we are most interested in working with development teams that are committed to improving the user experience of their software, and to integrating good design into their ongoing development practices. Finally, as part of our ongoing commitment to publicly-auditable software, we are committed to working on open-source tools.

    We expect this document of criteria to evolve and change over time, so we want to hear your feedback on it. Get in touch if you have suggestions on how to improve it!

    In a future post I will describe the models of collaboration we’re currently envisioning. In the meantime, if you are working on a secure-communications software project and you are interested in collaborating with us to evaluate and improve its user experience (or if you’re a software user and want to suggest a team for us to reach out to), please tell us about the project here. (Note: this form is hosted by Google. If you’re more comfortable communicating by email, you can also send a message to with answers to these questions.)

  • Making the Abstract Experiential

    It’s difficult for many lay users who are unfamiliar with the mechanics of how the internet works to make assessments of risk or to secure their communications. One way that design can help is by making abstract concepts understandable. There’s exciting work in understanding existing models of security and ways to leverage them in design, such as Rick Wash’s "Folk Models of Home Computer Security", but there’s still so much to be done.

    As an inspirational example of how design can contribute to making abstract concepts accessible to a lay audience, here’s a 1977 video from designers Charles and Ray Eames. It makes the abstract topic of exponential growth experiential by relating it to the scale of the human body. Having admired the Eames’ work since my days in architecture school, I think starting with the scale of the human body is a great way to approach problems. The whole 9 minutes is worth a watch to get a sense of the pacing, since timed transitions between views illustrate scale.

    Video: Powers of Ten by Charles and Ray Eames

    The strong narrative structure of the video connects abstract mathematical concepts to personal experience. Viewers learn by relating powers of ten to concrete things they can see or imagine, but the video feels more inspirational than educational. The inspirational tone is more powerful than a neutral explanation for increasing engagement, and I’m eager to see more inspiration in discussion of privacy.

    Sarah Gold’s Alternet video is one example of an experiential narrative that makes risks to privacy accessible to a mass audience. Similarly, the Do Not Track episodes are personalized videos that let people experience what various groups know about their online behavior. They are engaging and pleasurable to watch, even if the information in the video upsets people learning it for the first time. Like Powers of Ten, Episode 1 of Do Not Track also uses time to help viewers experience the scale of large numbers, with Do Not Track showing how much revenue technology companies and the US internet advertising market make every second.

    When designers approach privacy, we look beyond education and into action. Making abstract privacy topics experiential can motivate a broader group of end users to get involved in making better tools to protect privacy and security. It’s already pleasurable to relax at a picnic gazing at the sky, but Powers of Ten makes knowledge of exponents so compelling that gazing at the sky gets even better. I believe that understanding the mechanics of the internet can let people enjoy it more.

    Charles and Ray Eames are legends of modern American design, famous for making products people welcome into their lives. As a community, let’s work to help end users embrace privacy and security by making challenging ideas accessible and easy to integrate into their daily lives.

    US Postage Stamps Commemorating Charles and Ray Eames. Photo by Ame Elliott

    US Postage Stamps Commemorating Charles and Ray Eames.

  • What We Do

    You learned at our launch that we’re setting out to improve the experience users have with secure-communication tools. We told you that we want to work with the open source community, and that we’re committed to documenting our activities transparently. But what does this mean in practice – how will Ame, Trouble and I be spending our days?

    It’s much easier to show than to tell, so I expect you’ll get a better feel for our work as we describe it here over time. For the moment, though, you can expect it to fall into three major buckets:

    • Direct collaboration: We partner with open-source software-development teams to support them in researching and improving the usability of their tools. We’re focusing at first on secure-communication tools that have an established user base, but also collaborate with teams building related or emerging tools. I’ll tell you more in future posts about what we look for in a partner, and also introduce the projects we’ve already started working with. If you have a tool that you’d like a hand with, get in touch!
    • Information sharing: We take our collaborative research and use it to build public resources that help everyone – developers, designers, researchers, users, and the community at large – better understand great user experiences and how to achieve them. We also work to raise the profile of high-quality usable-security designs and projects, both those that we participate in directly and those conducted by other organizations. We share this information freely here, and through our newsletter, Twitter stream, and conference talks. Stay tuned in the near term for a reading list that will help you get started thinking about usable security – and let us know about your favorite books and papers that you want to see included.
    • Mentorship and capacity building: We work to support usable-security practitioners of all stripes, including developers, designers, and researchers. We encourage promising junior practitioners and students in their efforts to learn, and to participate in designing usably-secure experiences. As part of this, we are partnering with the Open Technology Fund on the Secure Usability Fellowship program.

    In addition to these formal activities, we also aspire to act as a hub for usable-security practitioners, and among the development, design, research, and user communities. Although we don’t provide funding for projects, we make it a point to know about different practitioners and activities in the space, and offer referrals when asked. So, although we’re still catching up on our backlog of email inquiries, we’re always interested in hearing from you if you’re doing work in this space. And, as always, we hope you’ll stay tuned to keep up with our activities!

  • Dia dhaoibh, mise Nóirín!

    Hi, I'm Nóirín (sounds like [n̪ˠõːɾʲiːɲ]). In Ireland, I have a pretty common name: I share it with professors, politicians, and even our police commissioner! Elsewhere, however, it's less simple. I've had conference badges in the names of "N√≥ir√≠n" and "NÛirÌn", online services often call me "N&#xF3;ir&#xED;n" or "N��ir��n", and I've even gotten mail for "N├âãÆ├é┬│ir├âãÆ├é┬¡n"!
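    Those garbled badges are classic mojibake: UTF-8 bytes interpreted under the wrong character set. As a minimal illustration (a Python sketch of the general mechanism, not of how any particular conference system mangled the name), decoding the UTF-8 bytes of "Nóirín" as Mac Roman reproduces the "N√≥ir√≠n" badge, while ASCII-only serializers fall back to HTML character references:

    ```python
    # Mojibake sketch: the same Unicode string, mangled two different ways.
    name = "Nóirín"

    # "ó" and "í" each become two bytes in UTF-8 (b'\xc3\xb3' and b'\xc3\xad').
    utf8_bytes = name.encode("utf-8")

    # Decoding those bytes as Mac Roman splits each accented letter into
    # two symbols: ó -> √≥ and í -> √≠.
    garbled = utf8_bytes.decode("mac_roman")
    print(garbled)  # N√≥ir√≠n

    # An ASCII-only pipeline may instead emit HTML character references,
    # which is at least lossless (here in decimal form).
    escaped = name.encode("ascii", "xmlcharrefreplace").decode("ascii")
    print(escaped)  # N&#243;ir&#237;n
    ```

    Stacking several such wrong-encoding round trips on top of each other is what produces the longer monstrosities like "N├âãÆ├é┬│ir├âãÆ├é┬¡n".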

    So at Simply Secure, I go by a name I picked up on the Appalachian Trail, that's found in most spell-checkers, and is a bit simpler to pronounce: Trouble.

    According to the IRS, my job title is Operations Manager. Much like my name though, the reality is more nuanced, and always evolving. I think of myself as Simply Secure’s Steersman1. I travel alongside our collaborators, through their varied communities, always collecting knowledge, connecting ideas, and helping to preserve and share what we see. I come from the Open Source world, and, with Scout and Ame, will be working hard to make sure that the things we learn, create, and do, are done as transparently as possible.

    Our shared vision at Simply Secure is that people shouldn’t have to choose between tools they like and tools that are secure. We know where we want to be: in a world where secure and private communication tools are available to everyone. We know the direction we need to go to get there: human-centered design is our guide star, open-source tools are the paths we walk along, verifiable and auditable cryptography is the shelter we depend on. But in the words of the Irish proverb, “it’d be a long road that didn’t have turns in it”2. So, bringing together the finest traditions both ancient and modern, I’m here to record what we’re doing, map out where we’ve been, and glean what I can about where we’re going.

    I hope you'll join me along the way!

    1. I prefer gender-neutral titles and pronouns where possible.

    2. Is fada an bóthar nach mbíonn casadh ann.

  • Hello, I’m Ame, Design Director for Simply Secure

    I’m Ame (sounds like “Amy”). Last month I joined Simply Secure after spending the past eight years at IDEO, a global design and innovation consultancy. While there, I designed consumer technology for entertainment, education, banking, media, business software, mobile/wearables, and home automation. Uniting all my work is Human-Centered Design, a set of practices and research methods that starts with people, studies their needs and preferences, and creates things they want and enjoy.

    I was fortunate to work with fantastic teams on projects I was proud of, but over time I began to feel a vague sense of unease about how little power people had to protect their privacy. A catalyst for my thinking more explicitly about user experience + privacy was watching Mike Monteiro’s Webstock talk on How Designers Destroyed the World, which takes designers to task for the drastic real-world consequences of bad design [the privacy settings example starts at 4:40].

    Mike Monteiro at Webstock '13: How Designers Destroyed the World, on Vimeo.

    In exploring the social and technical barriers to privacy for mass audiences, a common thread has been the need for design. And by design I don’t mean decoration – I mean research and implementation that makes things work for people. By the time I saw the announcement for Simply Secure on Boing Boing, I knew that I wanted to work on design for privacy and security. Simply Secure is founded on the idea that people shouldn’t have to choose between communication tools they like and tools that are secure, and that design and user research can play a role in eliminating the need to choose. That founding principle is what I practice, and why I’m here.

    At Simply Secure, I’ll be building a culture of design to address the challenges of privacy and security. That means working directly on the user experience (UX) of tools providing secure communication, as well as connecting to the community of people working on privacy and security around design. Everyone should be able to communicate privately and securely, and it will take collaboration by people of different backgrounds to make that a reality. So if design is unfamiliar, stay tuned. I’ll be sharing my passion and showing how design can make secure communication accessible.

  • Simply Secure's Growing Team

    Happy Spring! Like so much in the northern hemisphere, our blog and Twitter stream have been largely dormant for a while – but we’ve been behind the scenes getting ready for a season of tremendous growth. Since we announced Simply Secure in the fall, we’ve become formally established as a legal entity, interviewed and hired an exceptional staff, and fleshed out our plans for partnering with open-source organizations to make secure-communication tools more usable. We’ll tell you more about what we’re working on in future blog posts, but today I’d like to introduce you to our two awesome new staff members: Ame Elliott, our Design Director, and Nóirín Plunkett, our Operations Manager.

    Ame Elliott comes from global design consultancy IDEO, where she led Design Research for Fortune 500 clients. She holds a Ph.D. in Design Theory and Methods, and has spent her career creating and developing Human-Centered Design techniques, which ensure that the resultant design responds to the needs of the people who will engage with it, and not the other way around. This approach – improving technology by focusing on users – is at the core of Simply Secure’s mission, and we are ecstatic to have someone of Ame’s caliber applying these principles to the secure-communications space.

    Nóirín “Trouble” Plunkett is Simply Secure’s Archivist, Historian, and Operations Manager. Trouble brings impeccable organization and writing skills to this unusual role, along with deep experience in the open source world by way of the Apache Software Foundation, Google, and the Ada Initiative. As Operations Manager, they work to generally keep the ship that is Simply Secure sailing smoothly. As Archivist and Historian, they will be working to catalog (and make freely available to the public) resources to help make usability and design an integral part of software development.

    What is this excellent team up to? In a word, lots. In coming weeks we will share more about some work Ame is doing (a “listening tour”) to learn about where the design, software, and user communities stand in relation to current secure-communication technology. This series of qualitative-research interviews will provide a broad and deep review of the work being done, and help pinpoint clear opportunities for us to make a serious impact. We’ll also tell you about the ways in which we are hoping to collaborate with software partners to improve their tools, and how to get involved. And, of course, we can’t wait to introduce our inaugural group of Secure Usability Fellows!

    Thank you for your interest and support while we’ve been getting up and running. If you want to stay in touch as we start sharing out our work, please sign up for our newsletter, follow us on Twitter, or drop us an email – more info here!

  • A Fellowship of Usability

    Announcing a new program for usable-security researchers

    We are pleased to announce one of our first initiatives – the Secure Usability Fellowship Program (SUFP) – in partnership with the Open Technology Fund. This new program aims to cultivate applied research and creative collaboration at different levels and across institutions on the topic of usable security, especially the usability of open-source secure-communication tools to promote human rights and open societies.

    SUFP’s approach to this work is straightforward: the program supports qualified individuals to work within accomplished host organizations on clearly defined, high-impact usable-security projects. Simply Secure will offer mentorship and community to the fellows, act as a host organization for some of them, and help OTF guide and administer the program overall.

    Two tiers of competitively paid fellowships are available:

    • Senior Fellows - A one-year term, typically taken up by postdoctoral researchers, doctoral students, or practitioners with demonstrated expertise and experience.
    • Seasonal Fellows - A three- or six-month term, typically taken up by students and less-experienced practitioners.

    We encourage people from varied and unlikely backgrounds – from students to established practitioners – to apply. We welcome applications from a diversity of disciplines; likely candidates have experience as usability researchers, interaction or visual designers, computer scientists, user-facing software developers, data-visualization designers, or social scientists.

    To apply to be a fellow, please complete this online form. The deadline for applications to the inaugural group of fellows is January 2nd, 2015. If you’re interested in hosting a fellow, please contact us.

  • We're hiring!

    Join our team and let’s make this happen.

    We have ambitious goals, and the first step to meeting them is growing our team. We’re officially hiring two new positions – an Operations Manager and a Research Director (or Associate Director).

    If you’re interested in helping us quickly make an impact on the usability of secure communication tools, please apply. If you know someone who fits this description, please spread the word. In any case, check out the job descriptions and let’s get to work!

  • Why Hello, World!

    We're here to make security easy and fun.

    Internet software links us to our friends, allows us to transact across oceans, and forms a digital space for culture and society. Because these technologies provide forums for sensitive discourse and expression, we are all concerned about their security and privacy – but don’t always know what to do about it.

    In fact, when security-enhancing functionality exists, it often seems to add an extra layer of complexity. Under these circumstances, how can an average person tell how a given feature works – or validate that security claims are true at all?

    Celebratory balloons

    That’s why we’ve created Simply Secure: to help develop security and privacy tools that make this choice clear, easy, and available to everyone.

    Our founding principles

    1. The future of a positive, accessible, and people-centered Internet requires trustworthy privacy and security.
    2. If privacy and security aren’t easy and intuitive, they don’t work. Usability is key.
    3. Technology should respect the user’s desire for privacy and security.
    4. Users shouldn’t have to choose between services they like and services that are secure; they should be able to easily adopt privacy and security solutions for existing services.

    There are already many credible and exciting software-development efforts that aim to make privacy and security ubiquitous. Rather than create redundant initiatives, we will focus on supporting existing open source work by providing usability and development expertise, direct ties to user communities, connections to funding sources, and other resources.

    To build trust and ensure quality outcomes, one core component of our work will be public audits of interfaces and code. This will help validate the security and usability claims of the efforts we support.

    More generally, we aim to take a page from the open-source community and make as much of our work transparent and widely accessible as possible. This means that as we get into the nitty-gritty of learning how to build collaborations around usably secure software, we will share our developing methodologies and expertise publicly. Over time, this will build a body of community resources that will allow all projects in this space to become more usable and more secure.

    We hope that this effort will result in more pleasing and robust tools that meet users’ privacy and security needs where they are – from layering additional security on top of popular name-brand cloud platforms, to augmenting small, stand-alone mobile apps.

    We’re excited to be launching, and hope you will contact us or follow us on Twitter if you’re interested in learning more and getting involved.

    PS: Thanks to W.P. for supporting our work by donating the domain!