
In our webinar, 2024 Software Security and Compliance Predictions, featuring Russ Eling of OSS Consultants and Alex Rybak of Revenera, we review the 2023 trends, discuss the importance of automation in security, the impact of AI on code generation, and the shift toward specialized teams for managing security issues. The conversation covers the need for cybersecurity in financial reporting, the role of the OpenChain community, and the concept of security by design. Practical advice is given on improving security measures and integrating them into development processes.

So if you’re using or managing open source or third-party components, creating policies, overseeing your organization’s IP, responsible for asset management or security, or concerned about the quality of the software solutions you’re shipping to customers, then you’re in the right place. Our speakers on this webinar talk about some of the highest-priority trends they see coming in the year. They also take a few minutes up front to go back in time and review last year’s trends that they said to be on the lookout for.




Russ Eling is the founder of OSS Consultants, a business dedicated to helping organizations of all sizes manage their use of open source software. They help organizations with everything from scanning and audit services to building an entire open source program. OSS Consultants is an official OpenChain partner and offers services for both OpenChain ISO/IEC 5230 license compliance and ISO/IEC 18974 security assurance specification conformance. Before starting the company, Russ spent more than 20 years in several engineering roles at General Motors. He was responsible for designing and implementing a successful open source governance program at GM, which was among the first of its kind in 2013 and was once regarded as among the most comprehensive OSS programs in the automotive industry. As the open source compliance officer, Russ built the Open Source Program Office, responsible for reviewing software in every part used in every GM vehicle across the globe. Today, Russ and the rest of the team at OSS Consultants offer their recognized, industry-leading expertise to help companies of all sizes scan, audit, and report on their software, as well as helping them create efficient, comprehensive, and robust open source program offices.

Alex Rybak – As the senior director of product management at Revenera, Alex Rybak is responsible for Revenera’s product strategy. Alex also heads up Revenera’s open source program office and is a member of the internal cybersecurity and incident response team. Alex led Revenera’s efforts to obtain OpenChain legal and security certification, and he’s been in the open source software supply chain space for more than 18 years.

Transcript (note: this transcript has been edited from the original recording to help improve readability; if you would like the full recording, check out the links to the podcast or webinar above. Please excuse minor grammatical errors due to transcription.)

2023 Predictions Review

“Comprehensive tooling solutions would become the norm for software supply chain management”

Russ Eling – So we said last year that comprehensive tooling solutions would become the norm for software supply chain management. This prediction proved to be a bit of a wash, or a net zero, because best-of-breed tooling is still important for certain industries. You could sometimes spread the risk across multiple tools or multiple vendors, which might require multiple data points to trust the results according to your levels of risk tolerance. But as development teams continue to reduce the universe of repositories for managing dependencies, they look for more best-of-breed tools for their ecosystem rather than a one-size-fits-all.

Alex Rybak – We’re seeing this from our customer base as well. We saw some desire to consolidate and have single vendors that do lots of different things. We’ve also seen the exact opposite, where somebody is working in a particular ecosystem, npm, for example, or Ruby, and they look for tooling that is best equipped to handle that ecosystem. And in many cases, there are multiple redundant tools being used just to have multiple data points and be able to overlay results and make sure that there’s confidence in the results coming out of the tools.

So a bit of a wash on this one, but definitely something worth keeping an eye on: which vendors people are selecting and what key requirements they’re looking for.

“Shift from internal to external proof of compliance”

Alex Rybak – So a second one: the shift from internal to external proof of compliance. This is a trend we’ve been seeing for a few years now, and from Revenera’s side we have definitely seen more of it. Really what we’re talking about is an inflection point in SCA, away from things that were best practice, things that you were doing because your competitors did it, or things you were doing because at some point you got in trouble from a legal or security compliance perspective and beefed up your program to reduce the chances of encountering that again. The big shift that we’re seeing is external regulation really driving the next generation of compliance programs. So we see more requirements from industry, governments, and internal customers. If you happen to be in the middle of the supply chain and a customer of your customer is selling to the government or military or foreign governments, then they’re going to impose requirements upstream to all of their suppliers so that they are able to comply with the regulations they have to comply with.

So it’s not just you alone in the universe. You’ve got to look at whom you’re taking things from, whom you’re selling to, and then keep going downstream to understand who’s the ultimate end customer. We also see, as a result of this, new teams being established. So we’ve seen cross-functional tiger teams, if you will: there’s typically an engineering representative, legal, security, maybe release management, and maybe a risk person. Ultimately we’re seeing more burden shifted to this team as the front line from a corporate compliance perspective. We’ve seen this called software assurance, software excellence, or release excellence. And this creates a bit of a buffer for engineering, so it protects them from lots of disruptions. But ultimately there are representatives from all the stakeholders that benefit from an open source compliance program, and this team is on the front lines to facilitate all the work being done by all the teams that roll up into them.

Russ Eling – We’re seeing similar trends with some of our clients. The organizations they provide software to are driving a lot of this because of regulations. So it’s not just regulation itself; it’s also organizations impacted by those regulations that are driving the requirements further downstream.

“Consolidation of ideas and players in security space”

Alex Rybak – Okay, number three: consolidation of ideas and players in the security space. So this was a bit of a guess, a little crystal-ball moment last year. In 2022 we just saw an explosion of security vendors: lots of startups, lots of niche players trying to tackle different angles on security management and remediation, and shifting further left, with lots of different startups working on how to prevent issues from ever occurring rather than dealing with them as part of the development process. So there was quite a bit of M&A activity in 2023, certainly a lot more than in 2022.

The trend we really saw was large organizations acquiring security pieces. We also saw smaller players merging their offerings. But for the most part: Thales acquired Imperva, Cisco gobbled up Splunk, IBM acquired Polar Security, HPE acquired Axis Security. So we saw a lot of large organizations in the security context filling out their portfolio of capabilities by picking up smaller organizations that were really best of breed in a particular space. So something to keep an eye out for in 2024. The security platform keeps getting bigger and bigger, and new capabilities are required as new exploits and exploit vectors are discovered. So definitely a trend we expect to continue. But that’s how we would go back and score 2023: a lot of big ones acquiring little ones, and not so much merging among smaller security offerings.

“Clearer, more consistent SBOM requirements.”

Alex Rybak – So this one was both a win and a loss. We definitely saw CISA be very active in documenting and releasing best practices. They released lots of documentation in the middle of the year and into Q3 for various contexts: whether you’re a developer, a software buyer, or a producer, whatever your role may be, you know how to deal with generating SBOMs, consuming SBOMs, best practices, and various facets of security exploits. However, we had a big pause in June on the cyber executive order regulations. Everything was supposed to kick off June 11th, but the industry clearly wasn’t ready to comply, so the brakes were put on. We saw renewed interest in coming up with agreement on the software self-attestation form. So we’re at a point where, six months later, we’re seeing all of this reigniting and happening again.

So although we didn’t get to this in 2023, that’s something we definitely expect to happen early on, likely in late Q1, maybe early Q2, in 2024. So I would say it’s not that this didn’t occur; it just got deferred. And we fully expect this to materialize in 2024.

“The next phase of open source becomes more prominent”

Russ Eling – And then the last one we’ve got here: the next phase of open source becomes more prominent. I think we got this right. We definitely saw this over 2023. Companies are more knowledgeable now about open source than ever before. If you look at the GitHub stats for 2023, you’ve got 100 million-plus developers using GitHub, which was a 26% increase in all global developer accounts on GitHub. 98 million new projects were started on GitHub in 2023. Developers made 301 million total contributions to open source projects across GitHub. And the other interesting thing was that nearly 30% of Fortune 100 companies had OSPOs in 2023.

So many organizations are becoming aware that financial sponsorship of key open source projects or ecosystems is required to ensure not just their survival, but also their timely remediation for security events.  

Alex Rybak – I think that last point is really important to me. If you go back over the past few years, the real big flip has happened from companies that are just huge consumers of open source to ones realizing that they want to be more strategically involved. So you see an uptick in OSPOs, or open source program offices, and they’re typically responsible for strategizing where your organization is going to play in the open source space. You typically don’t need an OSPO if all you’re doing is using open source; OSPOs really get to more strategic use: contribution, perhaps financial sponsorship, where you really want to prop up certain ecosystems that either you rely on or you really believe in as an organization and want further development and innovation in the space.

So we’ve seen a big switch, especially coming on the heels of the Log4j issue, where people realized that there is a developer, or a handful, working on an infrastructure piece that you have all over your company. And if that person is not reachable, or busy with their full-time job, or happens to be on vacation and something happens, there’s really nobody else. So we’ve seen this mental shift: if you’re an organization and you rely on certain components as critical infrastructure pieces, you need to be invested in making sure that those people are supported and paid, and that there’s some redundancy there, because otherwise, if something happens, you’re at the mercy of their schedule and their availability to react. So, big shift there. And we definitely expect to see more of this going forward.

2024 Look Ahead 

We finally get legislation

Alex Rybak – So the first one is that we finally get to a point where there is actual legislation. There has been lots of discussion, lots of documents, lots of best practices, lots of regulatory conversations. We are expecting that there will be some laws enacted in 2024. When we talk about that, we’re really talking about an initial industry agreement on: what does it mean to sign off on software? If you are selling to the US government, to the EU, or to any sort of public sector, what is your responsibility, and more importantly, your accountability? If your executive signs an attestation that says we followed an industry best practice process and you can trust our software, what happens if they omitted something? What happens if they weren’t aware of something? What happens if somebody finds an issue downstream and you have to go back to the organization and get that fixed? So we have had this attestation form being circulated by CISA.

It’s had industry review and feedback, and it’s right at the cusp of being finalized for round one. We fully expect that once this goes live, there’ll be lots of feedback and more iterations and improvements to follow. Along with that, the same concept: initial agreement on how deep an SBOM needs to be. We talk about the columns of the SBOM spreadsheet, but we don’t really talk about the rows. Are first-level dependencies enough? Do we need transitive dependencies? Do we need to go beyond packages? What if you have snippets of code? What if you have other IP items: documents, images, and other things that maybe aren’t a security threat, but that certainly raise IP concerns? The definitions may vary by industry: what’s expected for regulated industries, what’s expected for non-regulated industries, how big your SBOM needs to be. And also, what are reasonable remediation timelines? Every company has SLAs with its customers. Well, what happens in the open source world, where there are no SLAs in place? What is a reasonable expectation if there’s an issue? How long do I need to wait as a software buyer for the supplier to go in and, first of all, give me an estimate, and second, actually fix the software?

So mitigation by you and remediation by the supplier kind of becomes a dance, and I expect some guidelines to come out of that. Also things around consolidation of tools, technologies, and formats. Over time, for example, I fully expect SBOMs to gobble up all the other license obligations. Today we do third-party notices, we do third-party source distributions. It becomes difficult because there are lots of documents to prepare and lots of people involved. Over time I expect there to be a single compliance envelope, if you will, where the SBOM schema will support all of these additional pieces of information so that all stakeholders are satisfied by it.

Also, a consideration of standard of care: what’s the level of depth of analysis required for certain industries? If it happens to be automotive, medical devices, or critical infrastructure, clearly that requires you to take a deeper look than if you have a direct-to-consumer widget that you’re selling. All those things have been discussed, but there really hasn’t been much written down as requirements in that space. So during the first nine months or so of adoption of all these things, we expect lots of discussion, iteration, and overall improvement in the space.

Russ Eling – Continuing on with the SBOM theme, I think 2022 and 2023 were more about learning how to generate SBOMs within an organization, which many found was like herding cats to pull all the information together, even for just the minimum set of elements. I think several companies started receiving early or initial SBOMs last year, at least that’s what we saw in our practice. Which was great, but it left many of us asking: what did you do with the SBOM you received? And oftentimes the answer was that they weren’t doing anything with it, because there wasn’t yet an internal process for what to do with the results beyond perhaps running it through an SBOM validator.

So I think the main focus was on being able to generate an SBOM, and things will have to change or mature a bit for 2024. We expect this to happen, especially for certain industries like automotive and medical devices, just due to the increased level of depth required. We’ll have to go beyond basic formatting validation, obviously, and progress to doing something more effective with the data we’re receiving. For example, what does security or license compliance risk look like when we combine SBOMs for a given product or platform? We’ll have to have internal processes defined to handle that.
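To make the idea of combining SBOMs concrete, here is a minimal sketch in Python. It assumes SBOMs have already been parsed into simple dicts whose shape loosely mirrors CycloneDX's "components" array; the product names, component fields, and helper function are illustrative, not a real SPDX/CycloneDX parser or any specific tool's API.

```python
# Illustrative sketch: combining per-product SBOMs into one platform view.
# Components are deduplicated by (name, version), while tracking which
# products each component was found in.

def merge_sboms(sboms):
    """Deduplicate components across SBOMs by (name, version)."""
    merged = {}
    for sbom in sboms:
        for comp in sbom.get("components", []):
            key = (comp["name"], comp.get("version", "UNKNOWN"))
            entry = merged.setdefault(key, {**comp, "found_in": set()})
            entry["found_in"].add(sbom.get("product", "unknown"))
    return merged

product_a = {"product": "app-a", "components": [
    {"name": "log4j-core", "version": "2.14.1", "license": "Apache-2.0"},
    {"name": "openssl", "version": "3.0.8", "license": "Apache-2.0"},
]}
product_b = {"product": "app-b", "components": [
    {"name": "log4j-core", "version": "2.14.1", "license": "Apache-2.0"},
]}

platform = merge_sboms([product_a, product_b])
# log4j-core 2.14.1 appears once in the merged view, but stays traceable
# to both products that shipped it:
print(platform[("log4j-core", "2.14.1")]["found_in"])
```

The traceability set is the point: when a vulnerability lands on one deduplicated component, the platform view immediately tells you every product that needs a remediation plan.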

Those new processes will also need to account for proactive monitoring for when new vulnerabilities are discovered in an SBOM. And what do we do when an issue is discovered in an internal SBOM versus an SBOM we received from a supplier? Legislation, plus SBOM experience and maturity, will drive more organizations to generate more SBOMs, and more frequently, potentially with every release, as opposed to receiving one initial SBOM for a given product. Eventually we’ll need larger processes and automation to handle this. I’m not sure we’ll get to SBOM automation at scale in 2024, since there are still several elements that require human intervention. Alex was talking about the columns and rows; let me talk about the elements for a moment. We’ve seen SBOMs that contained additional elements, like a repo ID, which technically meets the SPDX or CycloneDX formats but is outside of the defined SBOM minimum elements.

You could also have missing minimum elements, like a component without a version. Another common difficulty is an invalid license ID or license expression. All of these scenarios will still force some level of human intervention to resolve them. Overall, we’re expecting many organizations and industries to mature their SBOM processes, which will result in continued investment in resources to support SBOM efforts in 2024.
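The kinds of checks described above can be sketched as a small triage pass. This is a hypothetical example, not a real validator: the license allow-list below is a tiny subset of the SPDX license list chosen for illustration, and a real tool would validate against the full list plus proper schema validation.

```python
# Hypothetical pre-review pass over SBOM components: flag missing versions
# and license IDs outside a (deliberately tiny) subset of SPDX identifiers.

KNOWN_SPDX_IDS = {"MIT", "Apache-2.0", "BSD-3-Clause", "GPL-2.0-only", "LGPL-2.1-only"}

def review_components(components):
    """Return (component name, reason) pairs that need human review."""
    findings = []
    for comp in components:
        name = comp.get("name", "<unnamed>")
        if not comp.get("version"):
            findings.append((name, "missing version"))
        lic = comp.get("license")
        if lic and lic not in KNOWN_SPDX_IDS:
            findings.append((name, f"unrecognized license ID: {lic}"))
    return findings

components = [
    {"name": "zlib", "version": "1.3", "license": "Zlib"},  # valid SPDX ID, but not in our toy list
    {"name": "left-pad", "license": "MIT"},                  # missing version
]
for name, reason in review_components(components):
    print(f"{name}: {reason}")
```

Checks like these are exactly the step that still routes findings to a human: the tool can flag "unrecognized license ID," but deciding what that component's license actually is remains manual work.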

Alex Rybak – You just jogged my memory on a couple of other things. We’re definitely seeing another topic around concern of IP leakage from sharing SBOMs. Everybody is still sitting around going: okay, if I send you my SBOM, am I not giving you a roadmap on how to hack my product or attack my cloud offering? There’s definitely lots of discussion and debate on whether, today, we should be sending an SBOM with an NDA, to kind of protect ourselves.

Well, clearly you can’t do that at scale if you’ve got thousands of customers. So do we build a portal where there are entitlements built in and access control lists, and you get to see real-time SBOMs, but you don’t get to see other products’ SBOMs that you haven’t bought? All these things are being discussed and debated across our customers. The other topic is: do you really need a full SBOM if you are buying software from somebody, or do you need more of a red, yellow, green indication for the entire product? In reality, if you are a software buyer, you’re not going to fix these things yourself. If you find a security issue in an upstream piece of code that you purchased, you’re going to go back to the vendor, give them the information, and expect an update. So do you need to see all 2,000 items, or do you need to see that there are X items that are red, which is enough to push back and say, no, this does not meet our policy?
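A red/yellow/green rollup like the one described could look something like this toy sketch. The thresholds and severity labels are assumptions for illustration; there is no standard mapping, and a real program would align these with its own vulnerability policy.

```python
# Toy rollup from per-component vulnerability severities to a single
# red/yellow/green product indicator, per the discussion above.
# Severity labels and thresholds are assumptions, not a standard.

def product_status(components):
    severities = {s for c in components for s in c.get("vuln_severities", [])}
    if "critical" in severities or "high" in severities:
        return "red"
    if "medium" in severities:
        return "yellow"
    return "green"

components = [
    {"name": "requests", "vuln_severities": []},
    {"name": "pyyaml", "vuln_severities": ["medium"]},
]
print(product_status(components))  # yellow
```

The buyer then negotiates over one indicator instead of 2,000 rows, which is the trade-off being debated: less detail shared, but enough signal to accept or push back.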

It’s really about how much data needs to be shared, how deep it needs to go, and what happens when this is at scale and you’re working with somebody who has a cloud offering and their build happens ten times a day. Are you expecting to receive a refresh ten times a day, or do you do it every Friday? Once a month? All of these things are still yet to be crystallized as best practices. I think it’s going to be a really big year in this space: getting to scale, working through all these use cases, and really making sure you get value from the SBOM, not just a checkbox that says yes, we received one and now we feel secure, when you haven’t really parsed through it.

Exponential growth in AI generated code

Russ Eling – I recently heard a term for a new kind of pandemic that’s sweeping the development community. A client last week said his team had been infected with AI fever. The expression is pretty consistent with the recent GitHub report we were referring to earlier: they saw more than twice the number of generative AI projects by June of 2023 than in all of 2022. So obviously, we expect that trend and growth to continue in 2024. What can we do to prepare for this additional growth? At a high level, ensure you have some level of controls in place and that they’re updated as needed. Use SCA (software composition analysis) scanning products or managed services to detect the use, and partial use, of reused code. Policy is important, as is promoting good behaviors. I don’t know if it’s realistic anymore to have a policy that says no generative AI at all, but I’m sure the companies that do have valid reasons for it. Let me touch on some of the considerations for managing use of generative AI.

Managing license compliance with AI-generated code is newish ground, and it’s presenting challenges not fully considered by some organizations. For example, some, or many, AI solutions for developers don’t currently disclose information about the origins and licenses of the source code they produce as output. So even if these AI systems were trained using only permissively licensed open source, which is an argument I’ve heard, there are still licensing obligations that need to be met, even if that only means reproducing a copyright notice or a license file. How would the average user be able to comply with the licensing obligations for source code in an AI’s output if they don’t know who the copyright holders are or what the license terms are? Interestingly, or perhaps correspondingly, we’ve had a marked increase in the number of requests for us to assist with detecting use of AI-generated code over the last several months. From our perspective, we treat AI-generated code similarly to how we would treat something that came from Stack Overflow or any other copy-paste source.

This generally means that we’re running a scan and looking for snippets or other evidence of open source that might require attribution or other licensing obligations. This also carries over to some of the SBOM concerns we spoke about: if the provenance and licensing of source code generated by an AI system are unknown, users won’t be able to generate an accurate SBOM, which in turn leads to a potential inability to track security vulnerabilities. So as organizations adopt AI technology, I think we have to carefully assess the code that’s being pulled in, particularly in safety-critical and regulated contexts.
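To give a feel for what snippet-level scanning means, here is a highly simplified sketch: normalize source lines and hash overlapping k-line windows, then intersect with an index of known open source fingerprints. Real SCA snippet matching (e.g. winnowing-style fingerprinting over tokens) is far more robust to renaming and reformatting; this only illustrates the basic idea, and the code samples are made up.

```python
# Simplified snippet-detection sketch: hash overlapping k-line windows of
# whitespace-normalized code and look for overlap with known fingerprints.

import hashlib

def fingerprints(source, k=3):
    """Hashes of every k-line window of non-blank, stripped lines."""
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    return {
        hashlib.sha256("\n".join(lines[i:i + k]).encode()).hexdigest()
        for i in range(max(0, len(lines) - k + 1))
    }

# Hypothetical "known open source" fragment and a candidate file that
# embeds it (e.g. pasted by a developer or emitted by an AI assistant):
known_oss = "for i in range(n):\n    total += i\nreturn total\n"
candidate = "def f(n):\n    total = 0\n    for i in range(n):\n        total += i\n    return total\n"

index = fingerprints(known_oss)
overlap = fingerprints(candidate) & index
print(len(overlap) > 0)  # True: the known fragment appears inside the candidate
```

Any non-empty overlap flags the file for human review, which matches the workflow above: the scan surfaces evidence, and a person resolves attribution and licensing.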

Alex Rybak – I’ve got lots of thoughts in this space. If we backtrack a couple of decades, we had our first speed bump with Salesforce. Right? I’m not going to put my financial data in the cloud; my company will cease to exist because everyone’s going to steal my data. Then, go back 20 years or so, we were seeing policies that said: we absolutely forbid use of open source at our company. We’re 100% proprietary. We’re not going to use any open source. And then you’re at a huge disadvantage compared to your peers or your competitors, because they are leveraging open source. So we’re on a third speed bump now, which is AI. I wholeheartedly agree that this concept of “you are prevented from using it” is not sustainable. Per GitHub’s survey, 92% of developers are using AI coding tools both inside and outside of work. If you are a developer and you are ignoring the fact that there’s AI tooling out there, you are at a disadvantage relative to your peers. It’s just a new tool. It’s something you need to learn and adopt, but adopt properly.

So to me, all the focus needs to be on educating and making sure development teams are set up for success. Don’t try to strong-arm teams into not using it, because they will just find a way around it. It’s not about preventing it; it’s about working with legal and security on guardrails for how to do it properly, training people on prompting, and making sure that you’re not leaking any proprietary information by asking an AI client to improve your proprietary code and putting your code in as a prompt entry. Things like that. So don’t do silly things, but the way you don’t do silly things is to educate your teams to understand what those things are and how to work around them. Again, today, if you’re not leveraging this, your competitors are. You’re falling behind; your time to market is going to suffer. You need to embrace the technology. If you’re doing SCA scans, great. If you’re not, you need to. And you can’t just look at manifest files. You’ve got to look at snippets, at fragments of code, because if you have AI-generated code in your proprietary code, like I said, you’ve got to get to the owners of that code to be able to deal with attribution requirements. But what about open source projects? You may not be using AI directly, but you’re using open source projects which likely had AI-generated code contributed to them. So one way or another, it’s leaking into your code base, and you really need to have policies and think through how you’re going to deal with it. I kind of split it into two buckets: there are things like models and training data that you need to consider. These typically aren’t distributed; they’re being used to generate output, so you need an approach for that. And then there’s ultimately the generated code that goes directly into your code stream. That requires a different perspective and a different way of handling it.

So if you haven’t had a conversation between engineering, legal, and security, please do. Understand their concerns and where they see risk, and come up with a combined approach on how to mitigate it, but allow your developers to still use the technology so that you can continue innovating and getting to market quickly.

Shift left and automation becomes a required solution capability

Russ Eling – It feels like we haven’t heard the term shift left in a while. I think there might be a little bit of developer fatigue around the term, possibly also because there’s been an increased focus on automating. Most often there’s way too much code that we’re working with, and the majority of modern developers are already overburdened with tasks. So it becomes almost impossible to expect developers to balance all the responsibilities the industry is demanding. I mean, let’s face it, no one has room for additional or unnecessary manual processes.

So automation, or maybe “automating left,” is definitely something we think we’ll see more of in 2024, with the ultimate goal of moving from early indication of security concerns to prevention of those concerns. But it needs to be manageable, so you have to balance the right amount of risk. Broadly, for those that may be unfamiliar, the shift-left approach aims to discover and resolve bugs or compliance issues as early in the development cycle as possible. This generally increases software quality and decreases time spent later in the pipeline correcting errors. Unfortunately, in many cases, some organizations have simply taken old security testing methods or license compliance scanners that ran late in the process and stuffed them earlier into the development cycle, which can sometimes be more of an interruption to the workflow than a way to make security or compliance processes more efficient and effective. Whatever your stance might be on shift left, you can’t deny that we’re using a lot of software in our products and services.

So our position is that you should still be looking for what we call big rocks, like security vulnerabilities or compliance issues, early. Ideally that would be automated, and we expect that to grow in 2024. But for those that aren’t there, or can’t get there yet, it can also run manually, or even as a hybrid approach if need be. Adding automation early in the process is important, and it’ll become even more critical with the trending growth in software use. The amount of code is eventually just going to be too big to identify everything manually prior to a release. That said, most of the tooling in the marketplace loses some level of granularity once you integrate with the build system for automation. So there might be some trade-offs to consider, which also ties into what we said on the SBOM and AI predictions. There’s still a need for a human to manually review certain elements or certain scenarios, whether that’s an unexpected element in an SBOM that fails an automated process, or a snippet of copyrighted code generated by AI that requires review.

So perhaps a blended or hybrid approach is more appropriate, where you have automation combined with manual review when necessary. We’ve been taking this kind of hybrid approach recently with a few clients, where you use the less granular scanning automation along with a periodic forensic-level scan and audit that might require more manual or human effort. So you might not capture every new snippet with each automated scan after that, but that might mitigate enough of the risk to enable the efficiencies gained. Another approach might be to use either two different tools, or perhaps one tool with two different profile deployments for scanning: one in the build pipeline that’s less granular, or with a less granular configuration, covering the majority of code like packages and dependencies; and another that a dedicated team like an OSPO might use for a forensic-level scan and audit that detects snippets and other licensing and copyright information.
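The two-profile deployment described above might be sketched as configuration like the following. Every key and value here is a made-up illustration; real SCA tools each have their own configuration formats and option names.

```python
# Hypothetical two-profile configuration for the hybrid approach:
# a fast pipeline scan on every build, plus a deep periodic audit.

SCAN_PROFILES = {
    "pipeline": {              # runs on every build; fast, less granular
        "detect": ["packages", "declared_dependencies"],
        "snippet_matching": False,
        "fail_build_on": ["critical_vuln", "denied_license"],
    },
    "forensic": {              # periodic OSPO-driven audit; slow, deep
        "detect": ["packages", "declared_dependencies", "snippets", "copyrights"],
        "snippet_matching": True,
        "fail_build_on": [],   # findings go to human review instead
    },
}

def profile_for(context):
    """Pick a scan profile for a run context (illustrative helper)."""
    return SCAN_PROFILES["forensic" if context == "audit" else "pipeline"]

print(profile_for("ci")["snippet_matching"])  # False
```

The design point is that the pipeline profile is allowed to block builds because its findings are cheap and unambiguous, while the forensic profile never blocks: its snippet and copyright findings need a human to interpret.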

Alex Rybak – When we talk to customers or prospects, we hear three things. We hear that they have fewer people available, either in general because they have unfortunately had layoffs, or just because people can’t be bothered; they’re fully burdened. They expect more automation. And we’ve shifted from “do you have APIs? do you have a way to plug into my pipeline?” to “if you don’t, the rest doesn’t matter; that’s the only deployment model supported by management.” So definitely a big push for automation and injecting SCA earlier into the dev lifecycle. I was listening to the GitHub annual meeting, and for Mike Hanley, who’s their CSO, essentially the theme was that we’re really redefining shift left as prevention. So we’re not going to try to catch things early; we’re going to do whatever we can at the point of code injection to ensure that the developer either doesn’t make a mistake or is immediately alerted to it if they do, and can course-correct before checking in code.

So we’re not expecting our pipeline to catch things. I mean, we are, but not as the primary piece of data. We’re expecting to identify and redirect developers right there, so we avoid checking in anything that’s bad. I think the point Russ made about multiple layers is spot on. We saw a lot of our customers over-automate, to where they had this false sense of: well, we’re doing everything early, we’re scanning, we’re getting clean reports. But those scans didn’t catch everything. So then later on, a customer comes back and says, hey, we did our own scans and we found an issue that didn’t pop up as part of your automated scanning. So you need both. You want every one of your teams to be plugged into the build, doing this automated scanning along with security static and dynamic testing. But you also need a back-end process that goes deeper.

Perhaps it’s at a reduced frequency, but you do need a human involved in the process at some point to make sure that you’re not missing things. And again, this varies by team and by industry. You really need to understand how much risk you’re willing to carry, and how much you’re willing to hand off to automation versus having an OSPO, or external assistance or services, really looking at this and making sure that you are catching everything you need to catch. So what I really see happening in 2024 is a lot more interest in shift left as an initial deployment, then measuring miss rates, and, depending on how critical that is and how it affects your sense of comfort and your risk tolerance, perhaps adding a more manual process on the tail end to make sure that you’re catching everything.  

Continued shift of ownership and accountability

Alex Rybak – So we definitely continue to see this. We see developers more and more burdened with running security tools, running SCA tools, understanding how to remediate security issues, and conducting training. So we’re seeing a smaller and smaller percentage of daily work time dedicated to innovation. These teams really need support. One thing we started seeing last year and continue to see is the formation of new teams: software excellence, software assurance, or release excellence. These are cross-functional teams in which every stakeholder role is represented, and they essentially act as a shield for engineering, so engineering doesn’t get interrupted every time an issue comes up. If you are a software supplier and you’re selling to thousands of customers, each of those customers has their own risk management program. They’re likely going to scan the software you sell them. What happens if 500 of them come back with an issue? Your developers will never get out of tech debt and security bug fixing. 

So we see almost the equivalent of a customer-facing engineering team being set up in the compliance world. As mentioned, about 30% of the Fortune 100 companies have formed OSPOs. Now open source is being done strategically, not just ad hoc and reactively. We have an OSPO/SBOM maturity assessment up on our website, so if you haven’t taken it, go ahead; you’ll get some great insights by answering just a few questions. But the results tend to be biased towards the low end. Either companies didn’t have a strategy and process in place, or they are still figuring out what the appropriate investment is on their end. Do we need a single-person OSPO? Do we need everybody represented? Do we need to go outside the company for help? That’s a place where Russ’s firm helps out, so I’ll let him talk about managing those. But we definitely see a need for someone to own the process. It can’t just be left up to every single product team operating independently and hoping to achieve a common outcome. 

Russ Eling – Let me give a little background first, and then I’ll briefly touch on potential options for companies. Organizations are using a lot more software in their products and services, and it’s become impossible to expect developers to balance all the responsibilities that have been placed on them over the last several years. I’ve said for many years that we shouldn’t expect developers to become security and license compliance experts, especially on top of everything else they’re expected to manage and deliver. And it certainly doesn’t make sense to put your best or most productive developer on chasing license compliance issues, which happens in some companies. Clearly, neither security nor license compliance obligations are going away anytime soon; they only seem to be increasing.  

In order to reduce the burden on development or engineering teams, we think companies will need to consider moving this responsibility to specialized teams, like an OSPO or another specialized team structure. This can be really effective, but sometimes challenging to get off the ground, since much of what we’ve been discussing here is not really taught in academia. For organizations that aren’t sure how to do this or want to start right away, OSS Consultants offers a managed OSPO service that can jump in immediately with experts and other resources, such as tooling. This allows you time to staff and train an internal team while still addressing the critical functions. We also do this longer term for companies that aren’t necessarily interested in a dedicated internal team, and for some that aren’t large enough to warrant one. 

Then there’s also OpenChain, for those that want to get started on their own. OpenChain is a global community of organizations collaborating to create trust in the open source supply chain. OpenChain does this through ISO 5230, which is the international standard for open source license compliance, as well as ISO 18974, which is the industry standard for open source security assurance.  

Alex Rybak – The other thing I want to bring up is that some SEC regulations came into force on December 18th, 2023. Essentially, what they say is that if you are a publicly traded company, cybersecurity impact is now a required disclosure as part of your 10-K, the annual financial report you have to publish.  

In the past you would say things like, “we have a new CEO” or “we gained/lost a customer who’s responsible for 15% of our overall revenue, so clearly there’s risk with that one deal: if they go away, earnings go down; if we sign up a new customer, earnings go up.” Those things typically move stock prices. Well, now cybersecurity is another element that needs to be considered. Did you have a breach where your supply chain was impacted for two months? Do you have a case where you weren’t able to sell software, or you had to recall software and spend three to six months remediating it, which meant you couldn’t get a new product to market? These things also impact stock prices. The SEC has rules around 10-Ks, the annual report. They also have rules around 8-Ks. An 8-K is filed when a discrete material event occurs. For example, we just hired a new executive, or we gained/lost a major customer. Or perhaps there was a significant or material financial impact to the company due to a cyber event (and that word “significant” isn’t really well defined yet). 

And if that occurs, the company has four business days to file a report. So for that to happen, first of all, someone has to be paying attention. Someone has to identify that we hit the threshold for a significant cyber event. The second thing that has to happen is that engineering and finance somehow need to get together with security, coordinating, to define to the market what this means. You don’t want to spill all the gory details of how you were compromised, but you need to have a statement that is understandable by investors so they can make informed decisions. Otherwise, you’re non-compliant with SEC rules. Today, perhaps some companies have this, but in general engineering doesn’t really talk to finance much. Maybe at the end of the year, when they’re putting up their financial numbers for how well they did. But typically that’s not a conversation that occurs on an ongoing basis. So as part of this ownership and accountability shift, make sure, if you’re a publicly traded company, that your finance people, or whoever is responsible for public-company compliance, are in the loop on these conversations, and that there’s some way for them to reach you and for you to reach them with information.  
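The four-business-day filing window can be sketched as a small date calculation. This simplified version skips weekends only; a real compliance calendar would also account for federal holidays, and counsel, not code, decides when the materiality clock actually starts.

```python
from datetime import date, timedelta

def filing_deadline(determined: date, business_days: int = 4) -> date:
    """Date N business days after the materiality determination.

    Simplification: treats Mon-Fri as business days and ignores
    federal holidays, which a real compliance calendar must include.
    """
    d = determined
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # 0-4 = Monday through Friday
            remaining -= 1
    return d
```

For example, a determination made on a Wednesday gives a deadline the following Tuesday, since the weekend doesn't count toward the four days.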

We saw a couple of cases last year with MGM and Clorox. Both had material cyber events. Both ended up costing hundreds of millions of dollars on the bottom line, and we saw stock prices drop. In Clorox’s case, it took a long time for them to unravel their supply chain and the backlog of goods because their manufacturing facilities were impacted. In MGM’s case, eight casinos in Vegas and others around the country were impacted. Companies are losing hundreds of millions of dollars of revenue as a result of these security events. We had examples of real hard dollars lost as a consequence of either not having the steps in place, lacking communication, or ultimately just not knowing what to do when this occurs and how to go fix it. So it’s definitely a very expensive problem to deal with. Make sure that line of communication is open in your organization. 

Security-by-design / by-default becomes the de-facto approach

Russ Eling – Historically, organizations have relied on addressing vulnerabilities discovered after customers have already begun using products, which then requires users to apply the updates on their own. Only by implementing secure-by-design methods can we break that loop of producing and then applying security fixes. Products that are secure by design are those where the security of the customer is a core business goal and not just a technical feature. Secure-by-design products generally start with that goal before development even starts. Tying this in with the first topic, legislation encourages or even mandates that manufacturers implement security throughout a product’s lifecycle in order to prevent the introduction of vulnerable products into the market. And we’ll go a step further and say we think security by design will become the de-facto approach in 2024. 

This type of approach is used in the development of new technology products and the maintenance of current ones, and it can significantly improve customers’ security posture. It reduces, but doesn’t eliminate, the likelihood of compromise. Secure-by-design principles not only improve customers’ security posture, they also reduce long-term maintenance and patching expenses. Unless you’re a startup, you’re likely dealing with lots of legacy code, so you might find yourself having to bolt on security as opposed to designing it in, at least for the legacy portion of that code. The last thing I’ll say about this topic is that the best time to build security in was when the product was first developed. The next best time is right now. The sooner you do it, the better protected you are. 

Alex Rybak – This is a super obvious thing and an incredibly difficult thing to do. The way I look at this is, typically when products are shipped, they’re shipped with the default configuration. So you have default ports, you have default passwords, you have secure transport either turned on or turned off, and then this product gets installed by the customer. Then you have the call that says, okay, now let’s harden the product that’s been installed.  

So the whole concept here is that the product should be shipped as hardened as possible. We want everything on; we want the most conservative deployment possible. Then you work with the customer on the inverse: do we need to back some of this off? The reason is that this way you can’t forget to do it. If anything, you’re over-protecting, and over time you back off what’s not required, versus forgetting to change the default password or default port that gets hacked, or forgetting to turn SSL on, so suddenly you’ve got unencrypted traffic running between the product and the client. I’ve had the privilege of working on a product that’s more than ten years old, with lots of different developers touching it, as well as bringing a new product to market from the ground up. 
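A secure-by-default shipping configuration can be sketched as a settings object whose defaults are already hardened, so deployment starts conservative and any relaxation is a deliberate act. This is an illustrative sketch, not any vendor’s actual configuration; every field name here is an assumption.

```python
from dataclasses import dataclass, field
import secrets

@dataclass(frozen=True)
class ShipConfig:
    """Hypothetical product config that ships hardened by default:
    encryption on, no guessable credentials, minimal exposed surface.
    Customers relax settings deliberately rather than hardening later."""
    tls_enabled: bool = True                    # encrypted transport by default
    default_admin_password: str = ""            # empty: no shipped password; a
                                                # unique one is forced at setup
    setup_token: str = field(                   # one-time, per-install secret
        default_factory=lambda: secrets.token_urlsafe(32))
    open_ports: tuple = (443,)                  # only the TLS port is exposed
    audit_logging: bool = True                  # on by default, opt-out
```

Because the dataclass is frozen and every default is the conservative choice, an installer that does nothing at all still ends up with TLS on, auditing on, a single exposed port, and no reusable default credential.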

It’s clearly a lot easier to do this when you have a brand new product being brought up. You start with threat modeling, you think through all possible exploit scenarios, you architect properly, you put controls in place. This is all great when you have the luxury of not having coded anything yet; you can do all the right things. Well, what happens when a trend like this comes up and you’re ten years in, with millions of lines of code across a bunch of files? How do you do this? The way that we attacked it was: work with the security team. They’re the experts, so let them drive it. Understand how your product works and where it’s deployed. What kind of data does it capture? Is there privacy information? Is there financial data? What industry are you in? Do you have any regulations you have to comply with? All of that needs to be factored in. 

Start with threat modeling. Threat modeling is a great way, one that doesn’t require coding, to take a holistic look from 10,000 feet at all the possible exploits and all the threats in the product. Do we secure data at rest? Do we secure data in transit? Do we have proper protection around endpoints? If we’re hosting this in the cloud, is our data center secure? If customers are installing it, is their data center secure? This is a great way to start to identify areas of concern. At that point you will have to build a remediation plan and start bolting on bits and pieces. This really has to be part of the team’s culture. It can’t be “okay, we built the product, now let’s make it secure.” That doesn’t work. Number one, no one ever has time to stop and make it secure afterwards. Number two, you just forget to do things, and your customers shouldn’t be the ones catching these things downstream. This is a key element in both the US cyber executive order and the EU Cyber Resilience Act; they talk about security by design and security by default. It definitely requires developer education. It’s a mind shift from the way a lot of developers were originally trained, and it’s not something that historically has been taught in academia. We are seeing a lot more of it now; in fact, my son’s high school has a cybersecurity class. There’s definitely a lot more emphasis on everybody thinking in a security context. But if you have an existing legacy product, that was not the mindset when it was originally developed, so you really have to go back, take a look, and assess where the biggest risk is.  
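One common way to run the kind of holistic, code-free pass described above is a STRIDE-style review: cross every asset with each threat category to seed a backlog of questions. The asset names and prompt wording below are illustrative assumptions, not the speakers’ actual process.

```python
# Illustrative STRIDE-style threat-model seed: pair each product asset with
# each threat category so nothing is overlooked in the first review pass.

STRIDE = {
    "Spoofing": "Can an attacker impersonate a user or service?",
    "Tampering": "Can data be modified at rest or in transit?",
    "Repudiation": "Can actions be performed without an audit trail?",
    "Information disclosure": "Can private or financial data leak?",
    "Denial of service": "Can the component be made unavailable?",
    "Elevation of privilege": "Can a user gain rights they shouldn't have?",
}

def threat_model(assets):
    """Cross every asset with every STRIDE category to seed a review backlog."""
    return [(asset, category, question)
            for asset in assets
            for category, question in STRIDE.items()]
```

Each resulting (asset, category, question) row becomes a line item to answer, accept, or turn into a remediation task, which is exactly the plan-then-bolt-on approach described for legacy products.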

Another thing to consider beyond developer education is that this will likely impact velocity. When you take on this project, your developers are going to be pulled off of coding to do something else, even if it’s just consulting. So you have to work with product management to make room in the roadmaps. Communicate this with your customers as well, so that they realize there will be a slowdown in delivery, or perhaps a feature may get kicked to the next release. But ultimately this definitely pays off, because it really helps minimize future disruption by getting it right the first time (or the second time, when you get a chance to put this into practice after releasing the product).  

To summarize what our predictions for 2024 are: 

  • Finally getting to legislation
  • Exponential growth in AI-generated code
  • Shifting left, with more preventative measures becoming a required solution capability
  • Shift of ownership and accountability to both engineering and specialized teams
  • Security-by-design becoming the de facto methodology