“Software must become safer,” but how?

Our tagline reads “Software must become safer”, and for good reason: we feel very strongly about this. But it does lead to an obvious and fair question: “What exactly do you do to ensure that the software your teams produce is safe and secure?”

It’s an interesting question for at least two reasons. First, 100% safety and 100% security do not exist, so every real-world project involves trade-offs. Second, the outcomes we achieve depend not just on technology choices, but on all three elements of what is known as the “Golden Triangle”: People, Process, and Technology.

In this post, I’ll go into all three of them to explain how we make the software we build safe and secure.

A note on safety and security

Let’s first make sure we are all on the same page and define some basic terms:

  • A system is safe when it is protected from harm and things are unlikely to go wrong accidentally. An example is memory safety, an oft-discussed aspect of Rust, our technology of choice (see the sketch just below this list). Another example of safety: you can be sure that your car won’t suddenly accelerate when you turn up the volume of the radio.
  • A system is secure when it is protected from threats and actors that want to break, misuse, or abuse (parts of) the system. For example, a car is secure when protected from people who want to physically break into it, and software is secure when protected from hackers who want to push an update containing malicious code.
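
To make the memory-safety point concrete, below is a minimal sketch (a made-up example, not code from one of our projects) of the kind of bug the Rust compiler rejects outright, where C or C++ would compile it and fail at runtime:

```rust
fn main() {
    let _dangling;
    {
        let value = String::from("hello");
        _dangling = &value; // borrow of `value` starts here
    } // `value` is dropped here; reading `_dangling` past this point would be a use-after-free

    // Uncommenting the next line makes the borrow outlive `value`, and the
    // compiler rejects the program with "`value` does not live long enough":
    // println!("{_dangling}");
}
```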

Now, you can easily imagine a situation where a malicious actor abusing a system (due to a lack of security) also causes harm, making it unsafe. And anyone working in software knows that bugs (‘things accidentally going wrong’) in a system are vulnerabilities that can be - and too often are - exploited by hackers. In other words, safety and security overlap, and a system that you rely on should be both safe and secure.

Finally, it’s worth repeating that because there is no 100% safety or 100% security in real-life projects, there will always be trade-offs. In the end, proportionality is what matters: taking measures - regarding technology, people, and processes - that are proportional to the safety you need and the threats you anticipate, so that you reduce the chance of problems while staying within the project’s time and budget constraints.

Setting expectations

The first thing we do at the start of any project is to take stock of the safety requirements and the scope of the security question.

That safety is important from the start is usually very clear to everyone involved, but - like many people - we have worked on projects where there was pressure to “think about security later” or to “delay considering security until we have an MVP”. We understand firsthand where that pressure comes from: limited budgets, the urge to get to market fast, little feel for security, and so on.

We have also learned how to deal with this, and our solution is conceptually very simple: we talk to prospective clients about the importance of taking security seriously before we start working with them - before we even have a contract and start development. That way, expectations are managed: we know that our prospective client knows we are going to spend time on it, even if that client is a start-up “only building a first MVP” to test if they’re on to something.

We talk about the security ‘profile’ of the project (i.e. the level of potential risk involved in the intended usage) and the security requirements we see right from the beginning. We also agree up-front on what to do when, in terms of working on the security-critical features: we decide whether to involve our security advisors (a term from the MSDL, see Practices), who may not be part of the dev team, and whether the code will need to be audited by an external party.

To establish the security requirements, we may choose to create a threat model. If we do, then we prefer to do this in the first few weeks of the project. We don’t have a one-size-fits-all methodology, but we have experience with the methodology created and taught by Eleanor Saitta, Principal Consultant at Systems Structure Ltd.

Writing our software in Rust

Of ‘People, Processes, and Technology’, we consider the last to be the easiest choice: whenever feasible, we opt to build our software in Rust.

In short, we think Rust is an excellent choice for writing safe and secure software because it provides:

  • Memory safety, avoiding the memory bugs that account for roughly 70% of CVEs in large C and C++ code bases;
  • Thread safety, avoiding hard-to-debug data races (another possible source of safety and security problems);
  • A strong type system, preventing common programming mistakes while at the same time allowing for code that is easier for programmers to work with (illustrated in the sketch below).
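
To illustrate that last point, here is a small, hypothetical sketch of the style the type system encourages: modelling states as an enum, so that invalid states are unrepresentable and unhandled cases are compile errors rather than runtime surprises:

```rust
// Hypothetical connection state for some network service.
enum Connection {
    Disconnected,
    Connecting { attempt: u32 },
    Connected { session_id: u64 },
}

fn describe(conn: &Connection) -> String {
    // `match` must be exhaustive: if a new variant is added later, every
    // `match` like this one fails to compile until it handles the new case.
    match conn {
        Connection::Disconnected => "not connected".to_string(),
        Connection::Connecting { attempt } => format!("connecting (attempt {attempt})"),
        Connection::Connected { session_id } => format!("connected (session {session_id})"),
    }
}

fn main() {
    println!("{}", describe(&Connection::Connecting { attempt: 2 }));
}
```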

We are not the only ones who think so.

Practices we apply

Each project has a different software development process. Sometimes there are standards to adhere to that prescribe certain practices. For example, when we developed firmware (in Rust) for a medical device back in 2019, we put in place the practices prescribed by IEC 62304. However, for most of the projects we run, it’s down to our own judgment.

Given the great variety of our projects, no single set of practices can cover them all. However, there are several best practices we apply to every project:

  • We involve one of our security advisors. “Security advisor” is a role defined in the Microsoft Security Development Lifecycle (MSDL). It’s an obvious choice for us, as our development teams are usually small (2-3 people) and do not always include a dedicated security specialist.
  • We use CI to check adherence to our coding standards and to run tests and vulnerability checks, whether it’s for our web, systems, or embedded projects. Regarding testing, we make use of everything from unit tests, to integration tests, to fuzz testing; a sketch of a fuzz target follows this list. We can tell you more about fuzzing ntpd-rs in this article or this talk.
  • We apply the four-eyes principle to code written in serious projects: it can only be merged after it has been reviewed by another person.
  • We are selective in choosing dependencies, only working with ones that are well-maintained and widely adopted. (Read Marc’s blog for a more thorough discussion of using dependencies.)
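
To give an idea of what the fuzz-testing step looks like in practice, here is a minimal cargo-fuzz style target. It’s a generic sketch, not taken from ntpd-rs, and `parse_packet` is a hypothetical stand-in for whatever parser a project exposes; running it requires the libfuzzer-sys crate and a nightly toolchain:

```rust
// fuzz/fuzz_targets/parse_packet.rs (run with `cargo fuzz run parse_packet`)
#![no_main]

use libfuzzer_sys::fuzz_target;

// Hypothetical parser under test; in a real project this would be a
// function imported from the crate being fuzzed.
fn parse_packet(data: &[u8]) -> Result<(), ()> {
    if data.len() < 4 {
        return Err(()); // too short to contain a header
    }
    // ... real parsing logic would live here ...
    Ok(())
}

fuzz_target!(|data: &[u8]| {
    // The fuzzer feeds in arbitrary byte strings; any panic or crash in
    // `parse_packet` is reported as a finding, together with the input.
    let _ = parse_packet(data);
});
```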

For high-profile projects, we have our code audited by an external party. We usually work with Radically Open Security, who also performed the first audit of sudo-rs, as well as the ntpd-rs audit.

For open-source projects, like sudo-rs and our Rust implementations of NTP and PTP (project Pendulum), we also adhere to a coordinated vulnerability disclosure policy.

Working on education and awareness

That brings us to the final aspect of the Golden Triangle: us, the people doing the work. We think that, for a consultancy of our size to keep achieving good security outcomes, we need to:

  • Have in-depth security knowledge
  • Have solid security backgrounds
  • Keep educating ourselves
  • Continue to foster awareness

The knowledge and backgrounds are there. For example, our engineers David Venhoek, Marlon Baeten and Marc Schoolderman have academic backgrounds in security and/or formal methods.

To keep people educated, we organize yearly company-wide security sessions, usually in November, either providing in-house training or working with external consultants. On top of that, our security advisors transfer knowledge on the job, week in, week out.

We also stimulate taking external courses and attending conferences like Tectonics (dedicated to memory safety), CYSAT (dedicated to security in the space industry), IDNext, or the Open Source Summit. (The latter, for example, brought us the OpenSSF Scorecard.) And whenever we feel we need to, we invest time to dive into security-related topics, such as supply chain security in Rust.

To keep everyone on their toes and ‘aware’ of the importance of security, we follow global cyber security news and regularly share relevant items with each other. Awareness also follows from educational activities, such as our regular ‘lunch talks’: informal presentations immediately after lunch, open to all our employees, with both internal and external specialists presenting.

Conclusion

Of course, just like the safety and security of a product are never going to be perfect, neither are the things we do and the choices we make regarding ‘People, Processes, and Technology’.

We’re happy with the outcomes we have achieved so far, using Rust as a cornerstone technology, combined with carefully chosen lightweight practices. However, we will continue to evaluate and evolve our approach, because that is the only way we can keep building safer software.
