Cloud Native Security Con 2023: highlights
I had the chance to attend Cloud Native Security Con North America 2023, which ran Feb. 1 and 2 at the Seattle Convention Center.
This one was special: it was my first time attending a conference as a Program Committee member, so it felt like I contributed (even in a small fashion) to creating an opportunity for the open-source security community to gather and learn.
Security for cloud-native apps is a hot topic right now, especially after disclosures of the SolarWinds software supply chain attack, the Apache Log4j vulnerability and many other software security breaches, followed by an Executive Order from the U.S. government. The order sets a baseline of requirements for software to be considered secure, proof that the problem remains a big, as-yet-unaddressed concern for the whole industry.
The aforementioned attacks demonstrated two key flaws in software security:
- Lack of transparency in the software development process
- Limited focus on practices to build software that is more resistant to tampering by malicious actors
These problems have deep roots in the assumptions made about the integrity of the tools used to develop and ship code.
As a remarkable example, in his 1984 Turing Award lecture “Reflections on Trusting Trust”, Ken Thompson demonstrated how a C compiler could be modified to insert a backdoor into the programs it compiles, and to reproduce that modification in future versions of itself, leaving no trace in the source code. His conclusion?
“No amount of source-level verification or scrutiny will protect you from using untrusted code.”
In other words, you can’t trust that the entire software development process is secure only by verifying the integrity of source code.
Nearly 40 years after Thompson’s lecture, the software industry still suffers from the consequences of misplaced trust assumptions.
Considering how open source can lower the barriers to the adoption of AI & Data technologies, it’s crucial to understand the depth of the OSS community’s focus on security.
Here are four key takeaways that I extracted from this conference:
1. History repeats itself.
It was very interesting to realize that I’ve been in the tech industry long enough to start seeing repeating cycles. Multiple sessions discussed the concept of Zero Trust Architecture and the idea of enforcing granular security controls even at the kernel process level.
Listening to those sessions, I remembered that it was almost 10 years ago when I first learned how ZTA defies the assumption that a private network is a trusted network. Instead, ZTA proposes moving security controls down to the kernel level to achieve the right balance between context and control (aka “the Goldilocks zone”). Now it’s a bit different because the ideas and the technologies implementing them are no longer locked within vendors’ commercial offerings; they’re out in the open in the form of specs (like SPIFFE/SPIRE) or open-source projects (like eBPF).
2. Adoption of new standards takes time.
After the U.S. Executive Order was issued, NIST published a series of guidelines intended to provide a framework of security practices for software developers. Eventually, the software industry responded with a specification to guide adopters through several milestones as part of a maturity model, giving birth to SLSA (Supply-chain Levels for Software Artifacts).
SLSA provides guidelines to secure the path to production, ensuring that each artifact generated in the process can be verified and traced. Nevertheless, consensus remains elusive when it comes to important questions such as how to report vulnerabilities effectively.
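To make the verification idea concrete, here is a minimal sketch of how build provenance lets a consumer check that an artifact is the one a build actually attested to. The field names loosely follow the in-toto Statement and SLSA provenance formats, but the schema here is deliberately simplified and the builder URL is made up for illustration; real attestations are cryptographically signed and carry much richer predicates.

```python
import hashlib

artifact = b"example release artifact"

# Simplified provenance statement (illustrative only; real SLSA
# attestations are signed in-toto statements with fuller predicates).
provenance = {
    "_type": "https://in-toto.io/Statement/v0.1",
    "predicateType": "https://slsa.dev/provenance/v0.2",
    "subject": [
        {"name": "app.tar.gz",
         "digest": {"sha256": hashlib.sha256(artifact).hexdigest()}}
    ],
    "predicate": {"builder": {"id": "https://example.com/ci"}},
}

def matches(statement, blob):
    """Check that a blob is one of the subjects the provenance attests to."""
    digest = hashlib.sha256(blob).hexdigest()
    return any(s["digest"].get("sha256") == digest
               for s in statement["subject"])

print(matches(provenance, artifact))   # the genuine artifact verifies
print(matches(provenance, b"tampered"))  # a modified artifact does not
```

The point is the traceability property itself: any change to the artifact changes its digest, so a consumer can detect tampering anywhere between build and deployment.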
To address that shortcoming, several industry members have launched the OpenVEX specification, a standardized mechanism designed to help organizations assess and manage vulnerabilities more efficiently. OpenVEX provides a way to answer the question: “Which of the vulnerabilities in this report are actually exploitable in my particular environment?” While the outlook of OpenVEX looks promising, its adoption will probably take some time, mainly because it’s complementary to Software Bills of Materials (SBOMs), considered one of the key indicators of a secure software development process. As David Wheeler, director of open-source supply chain security at the Linux Foundation, recently admitted: “Trying to get the entire software industry to provide SBOMs is a huge change. We should not expect this to happen overnight. This is going to take time.”
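As a rough illustration of the question OpenVEX answers, the sketch below filters a VEX-style document down to the vulnerabilities a consumer still needs to act on. The field names follow the OpenVEX spec as I understand it, but treat the exact schema, the CVE-to-product mapping and the vendor name as assumptions for illustration rather than a reference implementation.

```python
import json

# Minimal VEX-style document (schema details are illustrative assumptions).
vex_doc = json.loads("""
{
  "@context": "https://openvex.dev/ns",
  "author": "Example Vendor",
  "statements": [
    {"vulnerability": "CVE-2021-44228",
     "products": ["pkg:maven/org.example/app@1.0.0"],
     "status": "not_affected",
     "justification": "vulnerable_code_not_in_execute_path"},
    {"vulnerability": "CVE-2023-0001",
     "products": ["pkg:maven/org.example/app@1.0.0"],
     "status": "affected"}
  ]
}
""")

def actionable(doc):
    """Return the vulnerabilities that still require attention."""
    return [s["vulnerability"] for s in doc["statements"]
            if s["status"] in ("affected", "under_investigation")]

print(actionable(vex_doc))  # → ['CVE-2023-0001']
```

This is why VEX complements an SBOM: the SBOM says which components you ship, while the VEX statements say which of their known vulnerabilities actually matter in your product, cutting through scanner noise.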
What’s more, the scope of SLSA covers only the process from code to artifact. A new collaboration coordinated by the Continuous Delivery Foundation seeks to augment SLSA to cover the entire artifact lifecycle, all the way through deployment and decommissioning.
3. The intersection of two big problems.
With regard to software security, the AI and Data communities face equivalent challenges with similar real-world implications. I’d say the common denominator between the two worlds is traceability. In fact, according to a survey of 313 ML practitioners (Serban, Blom 2020, 8), traceability, or the ability to trace the outcomes of production models back to their configuration and input data, is the family of software engineering practices with the lowest adoption so far. This mirrors the purpose of specs like SLSA and open-source projects like Sigstore, which are rapidly becoming a standard way to address the lack of transparency and verification for software artifacts.
Conversations I had during the event with individuals from the Open Source Security Foundation (OpenSSF); Continuous Delivery Foundation (CDF); and the Cloud Native Computing Foundation itself (CNCF) convinced me that our industry needs to agree on ways to address common challenges in the AI/Data and software engineering communities, extending current approaches about software security to include the artifacts ML and data pipelines consume and produce.
4. The road ahead.
Security is never done, and we should defy the assumption (long understood by the ML community) that software is long-lasting. During his keynote, OpenSSF GM Brian Behlendorf remarked, “Software should also come with an expiration date, especially because its security posture degrades with time.” Every effort to increase the adoption of standards for secure software development should account for the particular challenges the MLOps community faces — extending mature security practices to a new family of workloads.
There’s a steep but rewarding road ahead in the pursuit of more resilient systems, and we are excited to enable collaboration between different communities to the benefit of any project or organization looking to improve their security posture.