Cyberpunk Standards

Photo by Alexander London on Unsplash

The future of technology requires a dramatic shift from the present to place ownership and control back in the hands of consumers.

We engage with technology in increasingly intimate ways, both intentionally and not. Our actions online are scrutinized, our conversations listened in on, our behaviors predicted, and all of this is done cavalierly to market products to us, with no safeguards or thought given to the risks and impact of having this data available about us.

We have always been at war with crypto

The dream of the 90s is alive

As we debate "responsible encryption," here is a long scroll of pullquotes from previous incarnations of the Crypto Wars. If you're concerned about this, donate to the EFF -- they've always been there, fighting back against this insanity.


"Opposing Clipper is an odd pairing of civil liberties activists and corporations. The activists worry that the government could have too much access to private exchanges. Companies have chafed at export restrictions that stop them from using the best encryption technologies in products they sell abroad. … Companies would rather include many different encryption technologies in the products they sell and don't want to be locked into government-approved hardware. They also point out that their customers overseas are unlikely to want to use the Clipper lock knowing that the U.S. government holds the keys."
-- https://www.washingtonpost.com/archive/business/1995/03/16/three-ways-t…

Entropy Story-time: From Claude Shannon to Equifax

Mix Two Colors / Pietro Jeng

There's a piece floating around that does a great, succinct job of summarizing Claude Shannon's contributions to our modern understanding of information. If you haven't read The bit bomb on Aeon, head over there. It'll make your brain happy with things like this:

"Shannon – mathematician, American, jazz fanatic, juggling enthusiast – is the founder of information theory, and the architect of our digital world. It was Shannon’s paper ‘A Mathematical Theory of Communication’ (1948) that introduced the bit, an objective measure of how much information a message contains."

The article digs deep into how easy it is to predict things - especially language. It ends up focusing on the power of pattern detection in being able to compress information:

"Shannon expanded this point by turning to a pulpy Raymond Chandler detective story […] He flipped to a random passage … then read out letter by letter to his wife, Betty. Her role was to guess each subsequent letter […] Betty’s job grew progressively easier as context accumulated […] a phrase beginning ‘a small oblong reading lamp on the’ is very likely to be followed by one of two letters: D, or Betty’s first guess, T (presumably for ‘table’). In a zero-redundancy language using our alphabet, Betty would have had only a 1-in-26 chance of guessing correctly; in our language, by contrast, her odds were closer to 1-in-2. "
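Shannon's point about redundancy is easy to see with any modern compressor, which exploits exactly the kind of predictability Betty did. A quick sketch in Python (the repeated phrases below are just a stand-in for "predictable English"):

```python
import os
import zlib

# English prose is redundant: a compressor, like Betty, exploits patterns
# to "guess" what comes next. (These repeated phrases are stand-in text.)
english = (b"a small oblong reading lamp on the desk. "
           b"a small oblong reading lamp on the table. ") * 50

# A zero-redundancy message of the same length: uniformly random bytes
# are unguessable, so compression buys essentially nothing.
random_bytes = os.urandom(len(english))

english_ratio = len(zlib.compress(english)) / len(english)
random_ratio = len(zlib.compress(random_bytes)) / len(random_bytes)

print(f"predictable text compresses to {english_ratio:.0%} of its size")
print(f"random bytes 'compress' to {random_ratio:.0%} of their size")
```

The gap between the two ratios is the redundancy Shannon measured: the more predictable the next letter, the fewer bits you actually need per symbol.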

Let's talk about PGP

I've been working on a new way to explain email encryption; I'd appreciate feedback on this approach. If you're looking to try email encryption out - buy me a beer (or let me buy you one) if we're in the same place, or check out the usable, in-browser work by Mailvelope.

New GPG Keys!

I am transitioning both my professional and personal GPG keys. This transition document (in full, below) and both updated keys are signed with both old and new keys for both personal and professional accounts to validate the transition.

In short:
[email protected] - new keyID 270C17F1
[email protected] - new keyID FDDB8C25

If this is all Greek to you, GPG (or PGP) is a way to encrypt your email so that only specific other people (who must also be using GPG) are able to read it. While we think of email like regular mail, with the privacy of something sealed in an envelope, the reality is that it's better compared to a postcard. If you're interested in getting started, I highly recommend EFF's excellent PGP guide, and Mailvelope is a super-easy browser plugin to help get you started with more secure webmail (it works great, for example, with Gmail).

What Good Are Secure Communications Tools if No One Uses Them?
Jon Tue, 05/17/2016 - 19:38

Cross-posted from my piece on Medium

It was the second day of digital security training, and I was losing the room. The journalists, documentarians, and media activists around the table were more intent on following their friends and colleagues via Facebook chat than dealing with the fidgety, hard-to-install, but super-secure communications tools I was trying to promote.
They had good reason — it was winter 2014, during the tense final days of Ukraine’s EuroMaidan protests, going on just across town from our training. The urgency of communication was just too much. Overnight, most of the trainees had chosen to uninstall the app we’d spent the better part of the previous day installing on a mix of Windows XP, Windows 7, Macs, and even Linux systems.

But then again, I had good reason to urge security. Protesters were being arrested because of insecure communications. People were worried about their own government, but also about the small number of companies controlling their telecommunications.

I thought I had understood their need — they wanted a way to have trusted, private communications that spanned from mobile to desktop, chat to voice.
But I had failed. I was pushing a collection of tools I knew to be the best in their class for security, developed transparently as open source, with constant attention not only to bugs but to the nuances of cryptography, and with careful, responsible implementation and monitoring of possible new flaws. The tools were also the only ones that combined these security features with both text and voice capabilities that could bridge desktop and mobile.

These activists required a tool that they could show to others and start using in minutes; not one that took a day of training and debugging just to install. Tools that aren’t used aren’t providing security.

A Recent History of Back Doored Encryption, in 4 links

TSA Keys, 3D-printed

This is partially a footnotes section from last week's Crypto Saves Lives post, but every week brings new stories, and this week was a doozy. So, let's recap the whole "backdoored crypto / secret golden keys can work" argument:


(1) We can protect private information

*Cough* OPM *Cough*

Update: "Security bloggers and researchers claim to have uncovered a publicly available database exposing the personal information of 191 million voters on the Internet. The information contains voters’ names, home addresses, voter IDs, phone numbers and date of birth, as well as political affiliations and a detailed voting history since 2000."

(2) Well, we are really good at protecting super-important crypto keys that only give good guys access.

So, those luggage locks with a "golden key," now required worldwide, that only trained TSA agents can pop open? Yeah, about that... The TSA's master key set was allowed to be photographed, and while that photo was quickly taken off the internet, the damage was done. Anyone can now 3D print completely functional TSA keys.

(3) Besides, adding a backdoor won't cause problems!


Of Tor and Condoms

A garlic flavored condom

I am far from the first to compare digital security practices to safer sex practices. Heck, you can even see a rap career blooming as Jillian York and Jacob Appelbaum suggest that it's time that we "talk about P-G-P" at re:publica.

Talking about software and trust gets both very boring and very depressing quickly. Let's instead move on to the juicy sex-ed part!

A quick disclaimer: First, apologies for the at-times male and/or heteronormative point of view; I'd welcome more inclusive language, especially around the HTTPS section. Second, I am unabashedly pro-Tor, a user of the Tor network, and am even lucky enough to get to collaborate with them on occasion. The garlic condom photo comes from The Stinking Rose.

Super-duper Unsafe Surfing

Using the Internet without any protection is a very bad idea. The SANS Institute's Internet Storm Center tracks "survival time" - the time a completely unprotected computer facing the raw Internet can survive before becoming compromised by a virus - in minutes. Not days, not even hours. This is so far off the charts that, in a safer-sex metaphor, using no protection is less like engaging in risky behavior and more akin to just injecting yourself with an STD.

Barely less unsafe surfing

Adding in a constantly-updated anti-virus tool and a firewall, and making sure that your operating system is up to date, is akin to being healthy. You have a basically operational immune system - congrats! You'll be fine if the person you're sleeping with has the common cold, but anything more serious than that and you're in trouble.

Using HTTPS - visiting websites which show up with a green lock icon - is also a good practice. You can even install some browser plugins like HTTPS Everywhere and CertPatrol that help you out.

HTTPS is kind of like birth control. You may successfully prevent *ahem* the unauthorized spread of your information, but you're still relying on a significant amount of trust in your partner (to have taken the pill, to withdraw), and there are things outside your knowledge that can go wrong - the pharmacist provided fake pills, or you have a withdrawal failure (please note this is digital security advice, and not at all good safer-sex advice - a quick visit to Wikipedia is a good start for learning about effective -- and ineffective -- birth control methods!). With SSL certificates, you are still trusting that the website has good practices to protect your information (insert here the constant litany of password-reset links you've had to deal with this year), there have been cases of stolen SSL certificates, and there are tools to help an attacker try to intercept your encrypted traffic.

Slightly Safer Surfing

With digital security, a lot like with safer sex, some methods can be combined for a greater effect, but layering others can be a horrible idea. Adding anti-virus tools, firewalls, system updates, and HTTPS on top of any other method here is a universally Good Thing.

Using a VPN is like using a condom provided by your partner for this encounter, and given to them by a source neither of you has any real trust in. Asking the manufacturer for information about exactly how it's made, or what its expiration date is, will often result in grand claims (but no hard evidence). Requests to see the factory floor and verify these claims are presumed to be jokes. The VPN-brand condom generally works, and is definitely fast and easy, but you're placing a lot of trust in a random company you found while searching the Internet - and probably also the cheapest one you found. On top of that, you're still trusting your partner not to have poked any holes in the condom.

Overall, it's still much better to be using the VPN than not, and if you trust your partner (i.e. the website or service you're going to), and you trust the VPN provider for whatever reason - perhaps a widely trusted company has done an independent audit of the VPN, or you or your workplace has set it up yourself - then for most situations you're pretty safe. Layering a VPN on top of the above tools is good, but layering VPNs on VPNs or on other networks is not dissimilar to layering condoms - it actually makes failure /more/ likely, in very weird (and, let's face it, awkward) ways.

Safer Surfing

Still, though, wouldn't it be better if you could rely even less on trust, and have that trust backed up with evidence that you yourself can look at?

Using Tor is like using a condom which you not only know has gone through extensive testing, you can even visit the factory floor, look at the business' finances, and talk with the engineers and factory staff. It's /still/ not 100% safe, but it is a heck of a lot safer, and you can verify each and every claim made about what it does and does not do.

And to be clear here, if you're logging in to a website over Tor, that website now knows who you are (you're no longer anonymous to them, and possibly to others watching you do this along the wire), and that website is storing your password and may fail to protect it at some point. That website can still turn out to be malicious and attack you, and very powerful adversaries can even specifically try to intercept traffic coming from a website into the super-secret Tor network, change it, and include an attack they know works well against out-of-date versions of the browser you're using. An out-of-date Tor browser is like an expired condom - it's best not to bet your life on it.

To really (over-)extend the analogy, the Tor-branded condom business happens to be heavily funded by a religious organization that is strongly against birth control (and indeed has an entire project that tries to undermine birth control methods, to the point of installing secret hole-punchers in condom factories). This same organization (it's large!) does have a different and vocal component that strongly supports safer sex, and not only funds giving away condoms, but also the production of them. It's not, seemingly, the most logical setup, but hey, we're talking religion, politics, and sex - logic doesn't always come into play here.

Like sex, there is no truly "safe" way to play on the Internet, and expecting abstinence from the Internet is unrealistic. So, be careful out there, adopt safer practices, and keep your wits about you. Good luck!


Do you trust your tools?

There's a budding conversation on "trust" over in the twitterverse. I began a draft post a while back that compared Tor (the amazing privacy and anti-censorship network) and all privacy-protecting software to condoms. More on that soon, but let's actually talk about how you might come to trust a software project, using Tor as an example. Tor has been in the news recently, and I've had a ton of people ask me how safe it is to use, so I figured one click-bait headline is as good as another for having an open and honest discussion about Tor.

First, let's be transparent. Tor - not unlike the Internet itself - did in fact start out as a project by the US Naval Research Laboratory, and does continue to receive funding by the US Government to support freedom of expression around the world, with targeted efforts to enable free speech and access to uncensored information in countries where Internet connections are heavily filtered.

So, can you trust Tor? How do you know that the NSA hasn't forced Tor into building a "back door" into the Tor software, like they did with RSA Security and many other pieces of software you use daily, or like what has historically happened to privacy-protecting services like Hushmail?

The answer is actually that you should not need to trust the organization behind Tor in order to be confident that the software is built to be safe. This is enabled by the fact that Tor is open source - meaning you can read every line of the code they use to build the software you install. Of course, even with open source software, you're trusting whoever is compiling it to do so on a secure system and without any extra malicious intent. The Tor Project answers this problem by using "deterministic builds", which let you check, independently, that the code posted publicly is the code you're running.

If you use Windows or Mac, both "closed source" operating systems, you are absolutely, 100% trusting that no one in the company, nor any government with significant sway over these companies, has snuck in code to allow remote spying. You have no way to inspect the code running your operating system, and every tool you use on top of it is vulnerable to being undermined by something as simple as a hack to the tiny piece of software that tells your computer how to talk with the keyboard, which could just as easily also store every password you have ever typed. You're also trusting your ISP, every website you log in to, and thousands of other intermediaries and companies - from the ones who provide SSL certificates (enabling the "green lock" of a secure website) to the manufacturers of your wifi router and cable modem - not to betray your trust by accident, under duress, or with malicious intent.

Of course, even back in the green pastures of open source, there is no "absolute" level of trust, no matter how much we'd like there to be. Rare is the user who actually checks the "signature" of the download file against the posted "signature" online to make sure the tool they're about to install is the intended one. And even rarer is the user who checks in on the deterministic build process (which is still fragile, so hard to guarantee even so). Even at this level, you are trusting the developers and others in the open source and security community to write solid code and check it for bugs. The Tor Project does an exceptional job at this, but as Heartbleed reminds us, huge, horrible bugs can go unseen, even in the open, for a long time. You're also trusting all the systems that the developers work on to not be compromised, to be running code that is also in more or less good condition, and to be using compilers that aren't doing funny things.
For what it's worth, this is hardly a new problem. In my unhumble opinion, I'd still rather have this more open model of shared trust in the open source world than rely on any single company, whose prime motive is to ship software features on time.
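That signature check is less mysterious than it sounds. Its simplest form is comparing a cryptographic digest of the file you downloaded against the one the project published; a real signature additionally proves *who* published that digest, but the tamper-detection idea is the same. A minimal Python sketch using SHA-256 (the "installer" bytes here are invented stand-ins, not a real file):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex-encoded SHA-256 digest, the form usually posted on a download page."""
    return hashlib.sha256(data).hexdigest()

# Stand-in bytes for a downloaded installer; real verification would read
# the file from disk and compare against the hash the project publishes.
original = b"tor-browser-installer (stand-in bytes, not a real file)"
published_hash = sha256_hex(original)

# If anyone flips even a few bits in transit...
tampered = original.replace(b"tor", b"t0r")

# ...the digest no longer matches the published one.
print(sha256_hex(original) == published_hash)   # the genuine file checks out
print(sha256_hex(tampered) == published_hash)   # the tampered one does not
```

The whole point of deterministic builds is to extend this idea one step further back: anyone can rebuild the software from the public source and check that their digest matches the one on the released binaries.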

So - can you trust Tor? I do. But saying that I "trust" Tor doesn't mean I have 100% faith that their software is bulletproof. All software has bugs, and particularly security software requires a lot of work on the part of the user to actually make it all work out as expected. It's time to talk about trust less as a binary and more as a pragmatic approach to decision making based on best practices, source availability, and organizational transparency.