Tools – and tool development – are not neutral. While many specific technologies in the Internet Freedom / digital rights and resilience space, from end-to-end encryption to anti-surveillance and anti-censorship tools, are also useful for less-than-savory activities, how you build the tools, and who you include in the design, shape the tool itself.
The community of people working to support human rights defenders and at-risk communities as they navigate the emerging risks of being online has expanded, diversified, and evolved significantly over the past decade. Many valuable community norms have hardened into de facto requirements, from the expectation that tools in this community use some form of open source license to the requirement that they go through a formal security audit before being rolled out to vulnerable populations. These reflect hard lessons learned about sustainability and responsibility in this space.
It’s time to start formalizing a more recent lesson: tool development that harbors abusers, rapists, and racists is not aligned with human rights.
To make tools that are more responsive to the threats faced by activists and marginalized communities around the world, we need to ensure this work is meaningfully inclusive. Not just in training, not just in outreach, not even just in usability testing, but in core development and ideation. Many tools are already doing this, but it should be the norm to build with, not for.
Choices around licensing, security audits, user engagement, and yes, even (especially) Codes of Conduct provide critical signals to communities that their threat models are accounted for and that their community is welcome. Open source (on top of its many other benefits) signals that development prioritizes long-term sustainability over profit and exclusion. Audits put user safety over speed of deployment. Any level of positive user engagement, from a welcoming “issue queue” to active outreach and feedback gathering, provides a clear sign of which communities the tool is focused on and responsive to.
We must also recognize that this will be a change, and that as a community we need to build in support and identify resources along the way to actively enable it, not simply mandate it. Failing to do so will turn it into a box-checking exercise rather than a meaningful contribution to high-quality work and to stronger, more diverse, and more inclusive tool creator/user teams. This means templates and tools like the Code of Conduct builder, programs like Internews’ BASICS, and direct monetary support from funders in this space to support and encourage this work long before making it a condition.
This certainly is not new, and I’m not the first to think it – the Code of Conduct builder alone has been working on this since 2015, and countless tool developers, funders, and digital safety trainers are already quietly tracking which tools have responsibly dealt with (or prepared to deal with) finding an abuser in their ranks, and which ones have papered it over. It is long past time to formalize this.
Some useful CoC resources
- Project Include
- Code of Conduct Builder: https://dev.codeofconduct.tools/
- Contributor Covenant
- ConfCodeOfConduct
- GitHub community “quality” checklist (see also: https://docs.github.com/en/communities/setting-up-your-project-for-healthy-contributions/creating-a-default-community-health-file ; https://docs.github.com/en/communities/setting-up-your-project-for-healthy-contributions/adding-a-code-of-conduct-to-your-project); a sketch of where these files typically live follows this list
- Netlify’s requirements for their free open source option
- Review some existing event and community CoCs
- Avoid “bare minimum”, “funny”, or overly simplistic CoCs. The much more detailed options above are that detailed for real-world reasons.
- Have at least a minimal response plan for handling reports
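As a concrete example of the GitHub community health file approach linked above, here is a rough sketch of where a Code of Conduct and related files typically live. The repository and organization names are hypothetical, and the exact layout is up to each project.

```text
# Per-repository: GitHub looks for community health files in the repo root,
# a .github/ directory, or docs/ (repo root shown here; names are hypothetical).
my-tool/
├── README.md
├── LICENSE
├── CODE_OF_CONDUCT.md    # e.g. adapted from the Contributor Covenant or the CoC builder
├── CONTRIBUTING.md       # how to file issues and propose changes
└── SECURITY.md           # how to report vulnerabilities privately

# Organization-wide defaults: a public repository named ".github" provides
# fallback community health files for any repo in the account without its own copy.
my-org/.github/
├── CODE_OF_CONDUCT.md
└── CONTRIBUTING.md
```

Wherever the files live, the document itself matters less than the response process behind it: a named contact, a private reporting channel, and a commitment to act on reports.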