What Good Are Secure Communications Tools if No One Uses Them?
Cross-posted from my piece on Medium
It was the second day of digital security training, and I was losing the room. The journalists, documentarians, and media activists around the table were more intent on following their friends and colleagues via Facebook chat than on dealing with the fidgety, hard-to-install, but super-secure communications tools I was trying to promote.
They had good reason — it was winter 2014, during the tense final days of Ukraine’s EuroMaidan protests, unfolding just across town from our training. The urgency of communication was simply too great. Overnight, most of the trainees had chosen to uninstall the app we’d spent the better part of the previous day installing on a mix of Windows XP, Windows 7, Mac, and even Linux systems.
But then again, I had good reason to urge security. Protesters were being arrested because of insecure communications. People were worried about their own government, but also about the small number of companies controlling their telecommunications.
I thought I had understood their need — they wanted a way to have trusted, private communications that spanned from mobile to desktop, chat to voice.
But I had failed. I was pushing a collection of tools I knew to be the best in their class for security, developed transparently as open source, with constant attention not only to bugs but to the nuances of cryptography, and with careful, responsible implementation and monitoring for newly discovered flaws. They were also the only tools that combined these security features with text and voice capabilities that could bridge desktop and mobile.
These activists needed a tool they could show to others and start using in minutes, not one that took a day of training and debugging just to install. Tools that aren’t used provide no security.
This was my first truly difficult digital security training, and I’ve become much more pragmatic in my approaches since. I focus on helping people understand how networking works, and what trust means when you’re talking about running a program or using a service. I focus on going in-depth on what data people really care about, what could happen if it falls into the wrong hands, and how to find creative, reasonable ways to reduce this risk.
I encourage users to find ways to make more secure choices, and to understand what security actually means. I have a strong personal preference for software that’s fully open, with an active community of developers working on it and with security researchers and code auditors probing it for weaknesses — but I accept that moving someone from a local ISP’s clunky, insecure email to Google’s mail services may be one of the biggest security improvements I can encourage and expect to “stick.” And if I can then get users to adopt two-factor authentication, I’ve meaningfully reduced their vulnerability. This is also why I applaud services like WhatsApp adopting well-regarded end-to-end encryption, even while I prefer to stick with the open-source alternative, Signal.
Behind this pragmatism, I remain an optimist. I want everyone to be able to use and enjoy the best security tools out there — and to be able to share them with their friends and colleagues naturally and easily, so that these tools can spread as virally as any others.
The good news is that this is a growing trend — there are many others doing amazing work to improve the usability of security tools, from giants like Google to new organizations providing hands-on support like SimplySecure, as well as tool developers themselves, all looking for ways to make security accessible to everyone.
When Designing for Extremes becomes the Pragmatic Approach
Internews’ USABLE project (Usable Security Apps By Leveraging End Users) is taking a new approach to this problem by focusing on specific communities around the world who face digital security risks. These might be activists in closed societies, or women in patriarchal ones. Driven by their specific interests, we pair each community with an existing, proven digital security tool.
Through a combination of classic digital security training and facilitation built around human-centered design processes, each of these sessions unearths both usage and design constraints — which features do users find missing or hard to use, and where are these gaps caused by knotty underlying security challenges?
Once we have a representative sampling of these extreme use cases, USABLE will bring the participants together at the private UXForum to share their experiences, find common challenges across different communities and tools, and sketch out prototypes and roadmaps in partnership with tool makers. Combined with USABLE’s UXFund, focused small grants that bring in human-centered design can make these changes a reality — and begin to make security and privacy usable.