Now, there are many problems in the world of digital security - from governments around the world undermining privacy technology or firewalling their citizens off from information, to valiant but underfunded security tools that only have the time to keep the tool safe, not to make it easy to use. Some of these problems are rather significant, some are more approachable, but there remains a hidden problem, so pervasive and pernicious that it undermines all of our good work in bringing usable, human-centered privacy and security tools to wider audiences.
Web 2.0 and F/LOSS
The Lock-in / Break-out Cycle
"This has all happened before. This will all happen again." - a refrain from the rebooted Battlestar Galactica science fiction series that is painfully accurate in the software world. This is not always ideal, but it is reality.
I am far from the first to compare digital security practices to safer sex practices. Heck, you can even see a rap career blooming as Jillian York and Jacob Appelbaum suggest that it's time that we "talk about P-G-P" at re:publica.
Talking about software and trust gets both very boring and very depressing quickly. Let's instead move on to the juicy sex-ed part!
A quick disclaimer: First, apologies for the at-times male and/or heteronormative point of view; I'd welcome more inclusive language, especially around the HTTPS section. Second, I am unabashedly pro-Tor, a user of the Tor network, and am even lucky enough to get to collaborate with them on occasion. The garlic condom photo comes from The Stinking Rose.
Super-duper Unsafe Surfing
Using the Internet without any protection is a very bad idea. The SANS Institute's Internet Storm Center tracks "survival time" - the time a completely unprotected computer facing the raw Internet can survive before becoming compromised by a virus - in minutes. Not days, not even hours. This is so far off the charts that, in the safer sex metaphor, using no protection is less like engaging in a risky behavior and more like injecting yourself with an STD.
Barely less unsafe surfing
Adding a constantly-updated anti-virus tool and a firewall, and making sure that your operating system is up to date, is akin to being healthy. You have a basically operational immune system - congrats! You'll be fine if the person you're sleeping with has the common cold, but anything more serious than that and you're in trouble.
Using HTTPS - visiting websites which show up with a green lock icon - is also a good practice. You can even install some browser plugins like HTTPS Everywhere and CertPatrol that help you out.
HTTPS is kind of like birth control. You may successfully prevent *ahem* the unauthorized spread of your information, but you're still placing a significant amount of trust in your partner (to have taken the pill, to withdraw), and things beyond your knowledge can go wrong - the pharmacist provided fake pills, or you have a withdrawal failure. (Please note this is digital security advice, not good safer sex advice - a quick visit to Wikipedia is a good start for effective -- and non-effective -- birth control methods!) With SSL certificates, you are still trusting that the website has good practices in place to protect your information (insert the constant litany of password reset emails you've had to deal with this year here), and there have been cases of stolen SSL certificates, which are tools that help an attacker intercept your encrypted traffic.
Slightly Safer Surfing
With digital security, much like with safer sex, some methods can be combined for greater effect, but layering other methods can be a horrible idea. Using anti-virus tools, firewalls, system updates, and HTTPS on top of any other method here is a universally Good Thing.
Using a VPN is like using a condom, provided by your partner for this encounter, and given to them by a source neither of you have any real trust in. Asking the manufacturer for information about exactly how it's made, or what its expiration date is will often result in grand claims (but no hard evidence). Requests to see the factory floor and verify these claims are presumed to be jokes. The VPN-brand condom generally works, and is definitely fast and easy, but you're placing a lot of trust in a random company you found while searching the Internet, and probably also the cheapest one you found. On top of that, you're also still trusting your partner to not have poked any holes in the condom.
Overall, it's still much better to be using the VPN than not, and if you trust your partner (i.e. the website or service you're going to), and you trust the VPN provider for whatever reason - perhaps a widely trusted company has performed an independent audit of the VPN, or you or your workplace set it up yourselves - then for most situations you're pretty safe. Layering a VPN on top of the above tools is good, but layering VPNs on VPNs or on other networks is actually not dissimilar to layering condoms - it makes failure in very weird (and, let's face it, awkward) ways /more/ likely.
Still, though, wouldn't it be better if you could rely even less on trust, and have that trust backed up with evidence that you yourself can look at?
Using Tor is like using a condom which you not only know has gone through extensive testing, you can even visit the factory floor, look at the business' finances, and talk with the engineers and factory staff. It's /still/ not 100% safe, but it is a heck of a lot safer, and you can verify each and every claim made about what it does and does not do.
And to be clear here, if you're logging in to a website over Tor, that website now knows who you are (you're no longer anonymous to them, and possibly others watching you do this along the wire), and that website is storing your password and may fail to protect it at some point. That website can still turn out to be malicious and attack you, and very powerful adversaries can even specifically try and intercept traffic coming from a website and going into the super-secret Tor network, change it, and include an attack they know works well against out of date versions of the browser you're using. An out of date Tor browser is like an expired condom - it's best not to bet your life on it.
To really (over-)extend the analogy, the Tor-branded condom business happens to be heavily funded by a religious organization that is strongly against birth control (and indeed has an entire project that tries to undermine birth control methods, to the point of installing secret hole-punchers in condom factories). This same organization (it's large!) does have a different and vocal component that strongly supports safer sex, and not only funds giving away condoms, but also the production of them. It's not, seemingly, the most logical set up, but hey, we're talking religion, politics, and sex - logic doesn't always come into play here.
Like sex, there is no truly "safe" way to play on the Internet, and expecting abstinence from the Internet is unrealistic. So, be careful out there, adopt safer practices, and keep your wits about you. Good luck!
There's a budding conversation on "trust" over in the twitterverse. I began a draft post a while back that compared Tor (the amazing privacy and anti-censorship network) and all privacy-protecting software to condoms. More on that soon, but let's actually talk about how you might have trust in a software project, using Tor as an example. Tor has been in the news recently, and I've had a ton of people ask me about how safe it is to use, so I figured one click-bait headline is as good as another in having an open and honest discussion about Tor.
First, let's be transparent. Tor - not unlike the Internet itself - did in fact start out as a project by the US Naval Research Laboratory, and does continue to receive funding from the US Government to support freedom of expression around the world, with targeted efforts to enable free speech and access to uncensored information in countries where Internet connections are heavily filtered.
So, can you trust Tor? How do you know that the NSA hasn't forced Tor into building a "back door" into the Tor software, like they did with RSA Security, and many other pieces of software you use daily, or like what has historically happened to privacy-protecting services like hushmail?
The answer is actually that you should not need to trust the organization behind Tor in order to be confident that the software is built to be safe. This is enabled by the fact that Tor is open source - meaning you can read every line of the code they use to build the software you install. Of course, even with open source software, you're trusting whoever is compiling it to do so on a secure system and without any extra malicious intent. The Tor Project answers this problem by using "deterministic builds", which let you check, independently, that the code posted publicly is the code you're running.
If you use Windows or Mac, both "closed source" operating systems, you are absolutely, 100% trusting that no one in the company, nor any government with significant sway over these companies, has snuck in code to allow remote spying. You have no way to inspect the code running your operating system, and every tool you use on top of it is vulnerable to being undermined by something as simple as a hack to the tiny piece of software that tells your computer how to talk with the keyboard, which could just as easily also store every password you have ever typed in. You're also trusting your ISP, every website you log in to, and thousands of other intermediaries and companies, from the ones who provide SSL certificates (enabling the "green lock" of a secure website) to the manufacturer of your wifi router and cable modem, not to betray your trust by accident, under duress, or with malicious intent.
Of course, even back in the green pastures of open source, there is no "absolute" level of trust, no matter how much we'd like there to be. Rare is the user who actually checks the "signature" of the download file against the one posted online to make sure the tool they're about to install is the intended one. And even rarer is the user who checks in on the deterministic build process (which is still fragile, and so hard to guarantee even then). Even at this level, you are trusting the developers and others in the open source and security community to write solid code and check it for bugs. The Tor Project does an exceptional job at this, but as heartbleed reminds us, huge, horrible bugs can go unseen, even in the open, for a long time. You're also trusting all the systems the developers work on not to be compromised, to be running code that is in more or less good condition, and to be using compilers that aren't doing funny things.
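That signature-checking ritual is less daunting than it sounds. Here's a minimal Python sketch of the checksum half of it - hash what you downloaded, compare it against what the project published. The file contents and checksum here are hypothetical stand-ins, purely for illustration:

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Stand-in for the installer you downloaded; in real life this would be
# e.g. the Tor Browser bundle sitting in your Downloads folder.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"pretend this is an installer")
    download_path = f.name

# Stand-in for the checksum the project posted on its (HTTPS!) download page.
published_checksum = hashlib.sha256(b"pretend this is an installer").hexdigest()

if sha256_of(download_path) == published_checksum:
    print("Checksum matches - this file is what the project published.")
else:
    print("Checksum mismatch - do not install this file!")

os.unlink(download_path)
```

Note that a bare checksum only proves the file wasn't corrupted or swapped after the checksum was posted; a full GPG signature check additionally ties the file to the developers' signing key, which is why projects like Tor publish both.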
For what it's worth, this is hardly a new problem. In my unhumble opinion, I'd still rather have this more open model of shared trust in the open source world than rely on any single company, whose prime motive is to ship software features on time.
So - can you trust Tor? I do. But saying that I "trust" Tor doesn't mean I have 100% faith that their software is bulletproof. All software has bugs, and particularly security software requires a lot of work on the part of the user to actually make it all work out as expected. It's time to talk about trust less as a binary and more as a pragmatic approach to decision making based on best practices, source availability, and organizational transparency.
There's a point here about heartbleed and security — I promise. Stay with me.
As I am wont to once the weather finally begins to coöperate, I've been trying a few new things out on the grill. When I'm in this exploratory phase, I love digging through the infinitely interesting BBQ blogs of the Internet - they're full of hard-won knowledge about fire and smoke, but often lack a certain level of technical polish.
Case in point, my reference blog for this week's experiment was a well-seasoned old blog, but they'd lost every single comment from years of discussions. Why? No technical glitch, but simply because they'd chosen a private company to manage their comments - and it went out of business, leaving them not only without a commenting tool, but without those years of educational clarifications and discussions.
Ownership and control matter. This is true when you're talking about your possessions, your house, your comments on a BBQ blog, and your software. I've railed against app-ification before, but I want to make a slightly deeper point here. If you bought a house, but with the condition that for any repair, no matter how minor, you had to contract the previous owner (and only them) at a cost of their choosing - would you feel you really owned or controlled that house? Would you buy a car whose hood was locked shut, accessible only to the specific dealership where you bought it?
This is very much the situation with the vast majority of software you run on your computer. From Microsoft Word to Apple's iTunes - and, even more insidiously, OSX and Microsoft Windows themselves - it's all locked away from you. You've been forced to pay hundreds of dollars for them with the purchase of any computer, but you have no control over or real ownership of them.
The alternative is what's called "free" or "open source" software (people get into fierce debates on the terminology here, which I'm ignoring for the time being). All software starts with instructions that are more-or-less understandable by humans; commands like if (this thing) then (do this other thing). Generally speaking, this "language" is then turned into something closer to the more basic tools that computers understand. Imagine a particularly skilled dog with a great memory - by stringing together enough fetches, play-deads, stops, roll-overs and so on, you could eventually come up with a sequence of commands that would have this dog go out, buy a beer for you at the corner store, and bring it back.
"Closed source" software only gives you the computer-understandable version, and it's surprisingly difficult to turn that back into a simple, human-understandable chunk of logic. "Open source" software, on the other hand, always provides you with the original, understandable language.
This means a lot of things - for one, you can tweak it. If you don't like the beer that your dog fetched, you can find the human-speak part of the commands where the beer is selected, make sure your preference for hoppy beer is respected, and then turn it back into commands your computer can run.
This ability to change how your own tools work itself has many additional benefits - you can share that change, and if it's useful enough, that change itself will be included in the next version of the "core" software that everyone uses.
And finally, Heartbleed
This openness also means anyone can look at the logic that is driving their tool. This means that when you start talking about trusting software, there's a heavy preference towards the software that you can look at the source code of, and even more preference towards software where a lot of people have been looking at this same code.
So, that failed with heartbleed. The team behind OpenSSL is tiny compared to its impact: two out of every three secure servers in the world run the software that this four-person team manages. And on New Year's Eve 2011, one of their developers committed a very, very subtle piece of code that basically didn't make sure all the doors were closed behind it, and no one at the time (or anyone who'd taken a look in the two-plus years since) noticed.
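To make the shape of the bug concrete, here's a deliberately simplified Python sketch - not the actual OpenSSL code, which is C - of a Heartbleed-style missing bounds check: the server trusts the length the client claims, and reads past the real payload into adjacent memory.

```python
# A toy simulation of a Heartbleed-style over-read (not the real OpenSSL code).
# A heartbeat request says "here's my payload, echo this many bytes back" --
# and the buggy server never checks that the payload is actually that long.

MEMORY = bytearray(b"bird" + b"SECRET-PASSWORD-12345")  # payload + adjacent secrets

def buggy_heartbeat(payload_len_in_memory, claimed_len):
    # BUG: no check that claimed_len <= payload_len_in_memory
    return bytes(MEMORY[:claimed_len])

def fixed_heartbeat(payload_len_in_memory, claimed_len):
    if claimed_len > payload_len_in_memory:
        return b""  # discard malformed requests instead of over-reading
    return bytes(MEMORY[:claimed_len])

# An honest request asks for its 4-byte payload back:
print(buggy_heartbeat(4, 4))   # b'bird'
# A malicious request claims the payload is 25 bytes long:
print(buggy_heartbeat(4, 25))  # b'bird' plus the adjacent secret
print(fixed_heartbeat(4, 25))  # b'' - the patched behavior
```

The actual fix was essentially what `fixed_heartbeat` does: check the claimed length against the real record length, and silently discard the request when it doesn't fit.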
So obviously the whole open source thing is broken, right? The bug is out in the open for anyone to figure out, but no one fixed it!
It's not quite so simple. Do you really think a working piece of closed-source code gets a second glance from its development team? They're just as bound by priorities and shipping product releases as an open-source team, but their code gets locked away without even the chance for a third party to find a bug and lend a hand - and it's no more protected than open source tools from concentrated probing and testing for flaws just like heartbleed.
So yes, heartbleed was bad, but it was also a reminder of how powerful the open source software world can be in finding and fixing a bug. Most of us woke up with some updates to install, and that was the end of it. What horrible, dark bugs are lurking, unfindable, in every piece of closed source software? The precise number is unknowable, but the prevalence of viruses and malware affecting deeply closed systems like Windows might be a strong hint.
No more broken hearts
Going forward, I obviously have a long wishlist of things I'd like to see - a public discussion on what trust in software really means, better tools on every platform to guarantee software packages are what they claim to be (Tor is doing amazing work here), a return to inter-operable standards, especially when we're talking security systems... But as a beginning point, simply better support structures for open code development would be nice. We have volunteers building the basic structures of the Internet - which is an absolutely amazing and good thing - but let's make sure they have the time and resources to do it.
I've been reflecting on some of the challenges I've faced across multiple organizations trying to leverage the power of technology to create positive social change. This reaches way back to my work as a Peace Corps volunteer, up through grad school, my time as a contributing editor at OLPCNews, and through multiple NGOs balancing tech, impact, and budgets.
Obviously, there's no definitive one-size-fits-all approach to implementing technology in any sector, much less the world of the international NGO that stretches from hip online platforms to how best to use dusty Nokia feature-phones.
Here are the principles I've come up with to date. I took these to Twitter in a lively discussion, and want to expound upon them a bit more:
- Build for sustainability. Minimize what you have to build yourself, and leverage existing platforms
This means giving strong preference to open source platforms, or at least to existing services that meet a set of criteria (the service meets your needs, you own your data, shared values, a track record...). For almost any service, someone, somewhere has already built a powerful framework that will be constantly updated and improved, and that bakes in thousands of features (security, translation, powerful content management, mobile interfaces, etc.) which will be effortless to turn on when you discover you need them. Focus your precious software development budget on the much smaller number of things that are custom to your work and don't exist yet. This greatly reduces the initial development costs as well as ongoing maintenance costs.
- Seriously, don't build it yourself.
Create pro-consumer mobile technology and open up a new market of multi-platform and platform-agnostic users who want the best devices.
The Washington Post ran a great article on the increasing problem of vendor lock-in with tablets and mobile devices. In simple language, it boils down the problem of why buying an app for one device doesn't give you access to that app anywhere else; if you switch from an iPhone to an Android phone, you'll have to re-buy your apps and your iTunes content. This is partly lock-in, but there's also a halo effect - you can transfer an app from one iPhone to a new iPhone, or content from your desktop iTunes to your iWhatever - and the more devices you have from the same vendor, the better the system works.
But this is a horrible direction to take, and why I rarely buy apps or content from locked-down stores like iTunes. My desktop computer runs Ubuntu Linux, my tablet Android, and my phone is an iPhone. The media server for our house is a Mac Mini, and I finally retired my hold-out Windows computer last year. I refuse to buy music that I can only listen to on one of those myriad devices any more than I'd buy a CD that only plays in my car, but not in my home, or food that I could eat in the kitchen, but not in the dining room or on a picnic.
By and large, I'm a good target demographic - some discretionary income, a gadget aficionado, and generally plugged in to fun new technologies - but my market is rarely well served.
Here are the video links for my presentations from Campus Party Europe:
GeekEconomy with Don Tapscott (Author, Speaker and Advisor on Media, Technology and Innovation) and Simon Hampton (Director Public Policy EU, Google)
My slides and notes here: joncamfield.com/blog/2012/08/scaling_social_innovation
Quick quiz. Which of these should not be protected as free speech?
[ ] A gun (you know, the kind you can hold and shoot)
[ ] Plans for a nuclear weapon
[ ] Political statements (lots and lots of them)
[ ] Detailed instructions on how to communicate privately
[ ] Detailed instructions on how to make an archival, digital copy of a DVD
The answer is either none or all of the above - we are in a world where free speech (in the form of computer code) can create real world objects and actions that are themselves regulated or outright illegal. But if the action is illegal, is the code that causes it also illegal? If so, the line gets very blurry very quickly. If not, we still have some fascinating problems to deal with, like printable guns. Regardless, we need to educate policy makers to understand this digital frontier and be prepared to defend free speech when this gets unpleasant. Spoiler: It's already unpleasant. Our world is defined by code, where programmed actions have very real, tangible effects.
Code of Protest
Civil disobedience can take some weird forms. Today the masked digital vigilantes of Anonymous act as a curious type of Internet immune system, reacting against gross infringements of cyber liberty, but their methods are not as new as you might think. In the late 90s, the Electronic Disturbance Theater (http://en.wikipedia.org/wiki/Electronic_Disturbance_Theater) was supporting the Zapatistas by flooding Mexican government sites with a rudimentary DDoS (Distributed Denial of Service) attack, which brings a webserver down by overloading it. This concept is at the heart of LOIC, Anonymous's "Low Orbit Ion Cannon" (http://en.wikipedia.org/wiki/Low_Orbit_Ion_Cannon). EDT's version, "Floodnet," had the nice touch of requesting webpages with names like "human rights" from the government sites, clogging the server with errors reading something like "404 - human rights not found." Asking for a webpage is pretty clearly something akin to shouting at a rally, or a "cyber sit-in" (http://angelingo.usc.edu/index.php/politics/cyber-sit-ins-grassroots-to-gigabytes/) - get enough people to do it, and it causes some level of annoyance - but it's still an act of speech.
Free speech and a dead-end for copy controls
Fortunately, CSS (the Content Scramble System used to encrypt DVDs) was not particularly well crafted, and was quickly and thoroughly broken with a chunk of code nicknamed DeCSS, written by a Norwegian teenager nicknamed "DVD Jon". This caused a slight bit of controversy. DVD Jon was accused of theft in Norway, and users in the States were threatened with fines and jail time under the DMCA for re-distributing it.
In a predictable story arc, the next chapter of this story is of course the Internet digerati of the day getting royally teed off and causing a ruckus. The source code of DeCSS was immediately turned into graphic art, secretly embedded in photos, turned into poems, and even a song (http://www.youtube.com/watch?v=GekuuNqAiQg) - a gallery of creative works using or containing the DeCSS code remains online: http://www.cs.cmu.edu/~dst/DeCSS/Gallery/ . DVD Jon won his case (http://news.bbc.co.uk/2/hi/technology/3341211.stm) and we all celebrated the somewhat obvious win for free speech and consumer power.
Private speech and munitions export controls
We can rewind even further back to the early 90s, when Philip Zimmermann published the entire source code of his powerful encryption tool, PGP, in a book (of the paper, box-shaped physical object type). Now, encryption this powerful was classified (until 1996) as a "munition" and subject to export controls, with the types of penalties you might expect for selling military equipment on the black market. Had PGP been released as a program, it would obviously have fallen into this categorization. As text in a book, however, it appeared to be protected as free speech. The stupidity of the distinction of course also spurred many to put this "illegal" code on t-shirts and in code snippets. Eventually, a series of court cases (Bernstein v. United States, Junger v. Daley) established that source code does, indeed, count as free speech.
Free speech and real munitions
Code is speech, code is reality.
In linguistics, you have the concept of "Illocutionary Acts" - acts which are embodied in language. There aren't many - no matter how I say that I'm going to go for an after-work run, the act of running can only be done by my whole body. Oaths are the best example of these acts - speaking the oath is making the oath, and that combination of idea and action is a powerful sentiment.
And every line of code can be just as powerful.
What follows are my speaking notes from my talk on the role of open source models in scaling social change. You can see this, plus Ashoka Fellow Gregor Hackmack's presentation on his own amazing scale, at http://live.campus-party.org/player/load/id/27aba4389df7558611f3f6d5967… .