How to Find and Fix Your End-of-Life OSS - A Free Tool by HeroDevs

Episode Overview
See how top engineering teams are using HeroDevs’ new free open source tool to uncover unsupported frameworks before auditors or attackers do. HeroDevs End-of-Life Dataset (EOL DS) is a free tool that uncovers unsupported open-source components across your tech stack, before they disrupt your roadmap, security posture, or compliance audit.
Transcript

All right. Welcome, everybody. We're going to get started in about a minute and a half. We're going to give everybody a little bit of time to join before we jump in. All right, for everybody new that's joined so far, we're gonna get started in a little bit under a minute here. We're gonna give everybody just a little time to join. All righty, we are at the two-minute mark, so we're gonna go ahead and get things kicked off here. Thank you, everybody, for joining today's webinar. A couple of quick notes, and then I'm gonna share my screen and we're gonna go through a brief presentation. So this webinar is gonna have three parts. The first part is, like I alluded to, a brief presentation. We'll set the stage a little bit and provide some context about end-of-life data and about the motivation for building systems that help you identify and prioritize remediation. Then we'll jump into the meat of what we have to talk about here, which is going to be focused primarily on the actual tool and the data set, showing a demonstration of how you can use it and how to get set up. And then the last part of this is going to be a Q&A section. So as we go through everything here, please feel free to post questions, post comments. Rest assured, we will get to those in the latter half of our time here. So don't be shy if anything comes up. And with that being said, we're going to go ahead and move right in. So I'll do a brief introduction. My name is Isaac Wuest. I'm a product manager here at HeroDevs. I've been doing product management for almost ten years at this point, with about the last three years particularly focused on developer tooling, data products, and open-source software. So I'm very excited to roll up my sleeves and get into the technical elements of what we're looking at when it comes to end-of-life software here. So with that being said, why is HeroDevs in the business of trying to help build end-of-life data sets and scanners and such? Well, let me just briefly talk about the industry and why we care about end-of-life software. I imagine everybody here has some reason you are here. In certain cases, it might be that one of your SCA tools, one of your security systems, an endpoint scanner, has flagged end-of-life open-source software before, and that causes a type of internal churn or spin where people have to react, they have to respond. If there's a deadline, they have to try and make migrations quickly, etc. In other cases, what I hear quite often is that many of the new compliance frameworks are talking about end-of-life software, but in implicit ways. That means you may have gotten some auditor or pen tester flagging some sort of open-source end-of-life framework that exists in your systems. A good example is PCI section 6.2, which focuses quite a bit on the software you're running needing to have timely patches from the original manufacturers of that software. Now, it doesn't say end of life, but it's implicit, because end-of-life software, of course, won't get those timely patches from the original manufacturer, to use the somewhat dated language in the PCI compliance rules there. Now, this is where we noticed a really big hole in the end-of-life open-source software space, which is this last part right here: there is no comprehensive end-of-life data set out there. So if we unpack that just a little bit more, what you're going to see is that there are a few data sources. Many of you on the call here may already know about endoflife.date.
There are also direct upstream maintainer websites where they'll publish their support timelines. Now, the problem with this, let's take endoflife.date as an example, is that they have about 800 open-source packages that represent about 6,000 versions across those packages. Now, that's focusing only on open-source software that is explicitly marked as maintainer-attested end of life. That's a situation where the maintainer or the community around that framework said, yes, this is end of life on this date. The problem is, what we noticed when we looked at the state of open-source end-of-life software was that over 99 percent of all end-of-life software out there is abandoned and never actually maintainer-attested. The last industry estimate for the number of open-source versions in existence across every ecosystem is in excess of about 60 million, and it's growing at a faster and faster rate. So what that means is you have this extremely long tail of packages for which you just don't have an explicit attestation, an explicit declaration, about their end-of-life state. And that has often been causing a lot of heartburn when it comes to auditors, pen testers, open-source security tooling systems, SCA systems, endpoint scanners, et cetera, trying to get their arms around the risk implicit in that category that is end-of-life open-source software. So that is where HeroDevs stepped in. We also understand a lot about the end-of-life state of large frameworks, your Angulars, your Vues, your Springs, et cetera. This is our bread and butter. However, when we were working with our own customer set, we noticed that many of the additional packages associated with those large frameworks or libraries were also end of life. And auditors, pen testers, and others were starting to notice this, but companies themselves had an identification problem, meaning they had a challenge actually figuring out what is end of life and then prioritizing remediation steps, which resulted in a lot of companies in more regulated industries getting pinged unexpectedly and having these fire drills. So we decided to step into that space over the last year and say, well, let's build a free end-of-life data set. Up to this point, we have built a heuristic-powered database of more than 10 million package versions. And we're going to talk about what those heuristics look like in the context of a demo that I'm going to give here in just a minute. We took that really powerful, comprehensive dataset and we paired it with a delivery mechanism we call the CLI, or the CLI scanner. What I'm going to show you today is a way to manually run scans, but it's built in such a way that you can generate automated scans within a CI process as well. We'll talk a little bit about that further down the list here. All right, so the final thing I'm gonna mention before we jump into the demo here, and I'm gonna share my screen, is that if you're familiar with a lot of security tooling, there's a very classic model where step one is often referred to as identification. So when it comes to risk, whether it's security risk or technical risk, a lot of these tools are trying to first flag what is risky. That's identification. The next step in that model is usually prioritization, right? So you've flagged a bunch of stuff; now we have to rank it. And the final step is taking action, or remediation.
So the tool that I'm about to show you here takes you through those steps of identification, prioritization, et cetera. So I want you to have that mental model as you're seeing what we have here. Okay, so I'm going to go ahead and share my screen and make sure everything is going according to plan here. Okay, perfect. All right, so we're in my terminal right now. I alluded to the fact that there's a CLI. What you haven't seen on this screen is that I've already downloaded the CLI onto my machine. So the first thing I'm going to do, in order to initiate a scan, in order to actually start the process of identifying what is likely or definitely end of life, is jump into a directory on my machine called code. This directory contains a handful of projects. I want to make sure I get the right one. Here we go. I just listed it out; I'm going to go ahead and jump into my Nuxt 2 demo project. All right, so this project here is quite literally, if I brought up VS Code or another IDE, I could open it and take a look at the files, and there are some manifest files in here. So what I'm going to do is initiate a scan using this command right here. All right, so what is happening? Well, first, the system is looking inside this directory, trying to find any manifest files. These are going to be your package-lock.json files; these could be POM files; these could be any number of manifests. Then, under the hood, it's using cdxgen. cdxgen is a ubiquitous open-source tool to generate SBOMs. And it's doing all of that on my computer here; that's all on-device. Once that SBOM is generated, it does something called trimming that SBOM down. It takes a bunch of the data we don't need, trims it to a list of just the packages that are displayed in that SBOM, sends it to HeroDevs, and we return a scan. So that's what's happening in the background. You'll notice at the top line here that we got a summary of what was presented: we have about 1,500 packages; about 214 are marked end of life; zero upcoming; about 1,200 that are not end of life; 155 with an unknown status; and 12 packages for which our own NES remediations are available, should you need an interim solution prior to being able to make that migration or that upgrade work. All right, so from this point we have a couple of options. We often talk about one as the technical route. This is gonna be more interesting for those of you who are thinking about how to scale this across dozens of projects or applications in your ecosystem. Remember, this is the manual scan we're looking at, though, of course, we could set this up in a CI process. That technical route allows me to actually save the JSON output, which contains all the details that we're gonna go over today. The visual route, of course, is this link that was generated on the fly. So the first thing I'm going to do is go ahead and open this link, and I'm going to pull a window into my main screen here and make sure everybody can see what's going on. So what we're looking at here is a detailed view of the scan we just ran. On the left-hand side, you're going to see the number of packages scanned, how many are end of life, how many are not, and how many are unknown. Now, the next thing I want to bring your attention to is this top-line insight section.
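As a rough sketch of the local scan flow just described, here is what the manifest-to-SBOM step might look like if you scripted it yourself with cdxgen. This is not the CLI's actual source; the HeroDevs-specific piece (the API call that returns the scan) is omitted, and the file name is illustrative:

```ts
// Minimal sketch of the on-device SBOM step described above, not the CLI's
// actual implementation. Assumes cdxgen is installed:
//   npm install -g @cyclonedx/cdxgen
import { execSync } from "node:child_process";
import { readFileSync } from "node:fs";

// 1) Generate a CycloneDX SBOM from the manifests in the current directory.
execSync("cdxgen -o bom.json .", { stdio: "inherit" });

// 2) Read it back; components is the package list a scan summary is built from.
const bom = JSON.parse(readFileSync("bom.json", "utf8"));
console.log(`${(bom.components ?? []).length} packages found in the SBOM`);
```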
So remember step one I talked about in this whole arc of dealing with the risk of end-of-life software. That first step is identification, and that's, of course, what we've done with the scan. The next step is prioritization. Not everything marked end of life is equally risky or dangerous. And the reality is that while it might be nice to get this number, this 214, down to zero, there is going to be a case of diminishing returns. This is, of course, true for any kind of security-oriented division in a company: that last 5 or 10 percent toward being perfectly secure takes 90 percent more effort. So as a result, we need a way to prioritize, and that's what these insights here are designed to do. You'll see on the left-hand side we have highest-risk EOL packages. If I go ahead and click on this, it's going to filter to those 12 packages that are currently end of life with active vulnerabilities. This is a great place to start when it comes to identifying what is in fact riskiest. So let's take a look at braces, for example. Braces was in our manifest, meaning I'm using version 2.3.2, and it is end of life. And of course, you might be wondering, well, what do you mean by end of life? How do we know that is the case? I can go ahead and hover over this end-of-life column and I get the specific reasons. We're going to see this in the JSON output here as well. But remember how at the beginning I mentioned those maintainer-attested end-of-life states? Well, there's that long tail of packages that are end of life and ought to be treated as such, but don't have an explicit maintainer attestation. They were abandoned, the maintainers moved on, et cetera. What we do in that case is run a set of advanced heuristics that analyze metadata about that package and say, hey, this package is end of life, and here are the reasons. In this case, we have an unpatched CVE in the version and an unpatched CVE in the version's release line. Importantly, these are all enumerated in our documentation, and we provide you this information to, one, show you our homework, so that if you disagree or you want to go research it, you can do that. But the second thing is that you can index your systems on whichever reasons you care about the most. So let's say, taking it back to compliance, you're subject to PCI compliance and you're really concerned about violating section 6.2. Well, there are a handful of EOL reasons that directly map to section 6.2. In fact, the ones we're looking at here are examples that would put you in direct jeopardy of violating PCI compliance if you're subject to it. So of course, the point is that you can take this information and say, only show me packages that are end of life for these reasons, and that's going to help you prioritize which ones are most concerning. Now, once we've gone through that process, we can go ahead and see the next supported version I need to jump to or migrate to. And then, of course, we can see the actual CVEs affecting that specific package, so we can see the ID and the score here. These are all ways to find the overlap of the packages that represent potential compliance risk and also security risk and say, this is where we want to focus our remediation efforts. That's particularly important when you have some larger number identified as end of life. All right, so let's keep moving through.
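To make that reason-based filtering concrete, here is a minimal sketch of how you might slice the exported scan JSON yourself. The field names and reason codes are illustrative assumptions, not the report's actual schema:

```ts
// Hypothetical report shape for illustration; check the HeroDevs docs for the
// real field names and EOL reason codes.
interface ScannedPackage {
  purl: string;                          // e.g. "pkg:npm/braces@2.3.2"
  isEol: boolean;
  eolReasons: string[];                  // e.g. "unpatched-cve-in-version"
  cves: { id: string; cvss: number }[];  // active CVEs on this version
}

// Reasons mapped to our compliance concern (e.g. PCI section 6.2) -- assumed codes.
const reasonsOfInterest = new Set([
  "unpatched-cve-in-version",
  "unpatched-cve-in-release-line",
]);

// The "highest risk" view from the demo: EOL + active vulnerabilities, sorted
// by worst CVSS score so the riskiest packages come first.
const highestRisk = (report: ScannedPackage[]) =>
  report
    .filter(p => p.isEol && p.cves.length > 0)
    .filter(p => p.eolReasons.some(r => reasonsOfInterest.has(r)))
    .sort((a, b) =>
      Math.max(...b.cves.map(c => c.cvss)) - Math.max(...a.cves.map(c => c.cvss)));
```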
Back in the report, of course, there's a section where we're focusing more on challenging end-of-life migrations. Now, there's a lot that goes into the concept of a difficult migration. Especially for those of you who have been through the process, whether it was upgrading Vue or Spring or whatever the case was, you know that how far out you are in major versions, how far out you are in days, and also the nature of how you're using that specific package make a tremendous difference in how difficult it's going to be to migrate. So what we're doing here is trying to flag for you the ones that are going to be potentially more work than others, so you can factor that into your remediation plan. Remember: identification, prioritizing, and then figuring out what remediations actually need to happen. All right, now I'm gonna go ahead and clear these filters. A few other things that I'll draw your attention to here. Firstly, you do have an export JSON. So we can go ahead and export the same JSON report that we would get in the context of the CLI, this view right here, and this has a detailed breakdown of everything that you just looked at. I know this is likely a little bit challenging to see on my screen, but it has all those values listed directly there. This JSON export is where we really get the kind of scaling effect we're looking for, where we can plug our tool into your different CI processes, consolidate this data into a BI tool, and actually give you that single view, or pane of glass, regarding the open-source software that might put you at risk, whether it's compliance risk, security risk, et cetera. All right, let's jump back over here for a moment. All right, so that's the export JSON. Now, we're about to move into Q&A. A couple of other things I am going to point out for you. You'll see in the right-hand section we have this HeroDevs NES available column. These are the actual products, the actual open-source packages, that we offer direct support for. Of course, these are designed to be a stopgap: should you need some support from us, we can cover those for you. If you see something that is not in this list but you'd like to know if we can support it, you simply click down here, and that's going to take you to a form you can fill out to make that request. All right. Okay, perfect. So I'm going to go ahead and move back the view. Let's make sure this is all configured well. Okay, thank you all so much. That was the demo. So we're going to move into the Q&A section of this webinar. I'm going to give everybody a little bit of time to get those questions in. Let's see here. I have a couple of people on my end helping me out with questions, so I'm going to look in the chat and see what we have. Okay, so we have a question: does this scan infrastructure as well as projects? Okay, so just to make a point of clarification, what I'm guessing you're asking here, Linda, is that when we say projects, you have the literal package-lock.json file that would be sitting inside of a project. These are the applications that your engineering teams are building and deploying to your own customers or users or whatever the case is. That is what our tool focuses on right now. Now, we do plan to support more infrastructure-oriented scanning, where we're looking at, for example, the operating systems that are running on your servers.
Are you on an outdated version of Linux or CentOS or something like that? But that is not currently supported; right now we're focusing on those applications. This in particular is because when pen testers or auditors come in and flag something, they tend to flag, as the first point of remediation or priority, those frameworks and technologies that are exposed in production to your own users. Okay, we have another question coming in here: how do we determine the end-of-life status for a package? Thank you, Taylor. That is a fantastic question. We could get really heady and deep into this; I'll give the top-line overview, and then for anybody that wants to dig a little deeper, we have some great documentation published on our doc site that gives you a more comprehensive understanding of how we go about that. So there are a couple of ways that we determine end of life. First, we have a number of upstream data sources where we pull in information about open-source packages, and we consolidate that information. In some cases, we have that maintainer attestation. Now, that's a very slim number, but we capture that information, and of course, if it's maintainer-attested, we display that there. However, for everything else, what we do is run a series of calculations; we call them heuristics. So for example, let's say we see a package version that is five majors away from the most recent major. It's been seven years since the last time it was released. There's an active critical CVE that was never patched on it. All of those pieces of metadata about that specific package version are extremely powerful indicators that the maintainer has abandoned that release line, maybe that entire package. At that point, we call it end of life. We score it on a mathematical weighted scale and then produce that Boolean value, that true or false: it is end of life or it is not. In this case, that would be considered end of life. And then we provide those end-of-life reasons in the list there. So the point is we're always going to show you our homework, how we got to our conclusion. The secret sauce, the reason we're able to give you that Boolean value, is the actual weighting that exists on the back end. We're experts in the end-of-life space, and we feel very confident saying this is end of life and you should treat it as such. All right, let's see here. We have a few more questions coming in. Ooh, we have a good question from Dave here: which CVSS scoring version does the tool use to measure CVSS vulnerability scores? Okay, so this is a very deep cut into the CVSS scoring system. There are a couple of different scoring models, just to give everyone some context around this question, that exist with CVEs. So you have a CVE. The CVSS score is the score you saw when I hovered over that vulnerability column, that 7.2 score. Now, there are two different ways to answer this question. What I want to say first is that that score has actually changed over time. If you go back far enough in scoring versions, initially it wasn't a number; it was just a single word, which would say high, medium, low, et cetera. If we don't have a number and that CVE uses the old scoring mechanism, we'll simply show you the value associated with it. So that's the first way to answer that question.
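Stepping back to the end-of-life determination described a moment ago: here is a minimal sketch of what a weighted heuristic like that could look like. The signals come straight from the example above, but the weights and threshold are purely illustrative; the real model is HeroDevs' own:

```ts
// Illustrative only: the actual weighting is HeroDevs' proprietary model.
interface VersionSignals {
  maintainerAttestedEol: boolean; // explicit attestation short-circuits the math
  majorsBehindLatest: number;     // e.g. 5 majors behind the most recent major
  yearsSinceLastRelease: number;  // e.g. 7 years since the last release
  hasUnpatchedCriticalCve: boolean;
}

function isLikelyEol(s: VersionSignals): boolean {
  if (s.maintainerAttestedEol) return true;
  const score =
    Math.min(s.majorsBehindLatest, 5) * 0.10 +    // hypothetical weights
    Math.min(s.yearsSinceLastRelease, 10) * 0.05 +
    (s.hasUnpatchedCriticalCve ? 0.40 : 0);
  return score >= 0.6; // hypothetical threshold for the Boolean EOL verdict
}

// The example from the answer above scores 0.50 + 0.35 + 0.40 = 1.25 -> EOL.
console.log(isLikelyEol({
  maintainerAttestedEol: false,
  majorsBehindLatest: 5,
  yearsSinceLastRelease: 7,
  hasUnpatchedCriticalCve: true,
}));
```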
Now, back to Dave's CVSS question, there's another way to answer it, which is that while you have an official body, NIST and others, that actually does the scoring and produces the number we use today for CVEs, there are other bodies too. For example, GitHub will sometimes produce its own CVSS score. So when you go to ask, well, what risk level is this CVE, you might have two different scores. It doesn't happen super frequently, but it does occur. In that case, we always default to the governing body, NIST or whoever it is, rather than an individual organization doing scoring, simply because we have a higher sense of reliability there: the governing bodies doing the scoring have a more holistic view, rather than an individual reporter or CNA like GitHub offering a score. All right, let's see what else we got here. Another one from Linda: what data do you collect when running a scan? This is a really good question. I alluded a little bit to what that data is during the presentation. Now, importantly, we're going to be publishing documentation in the next week that gives you the literal code snippet that ends up getting sent to our system. But this is a point of concern and question for any organization that wants to be really thoughtful about security, so I'll answer this in the context of those concerns. There are two ways, of course, that you can get that scan information. The first way, which I showed you all here, was that I scanned a directory on my machine, an SBOM was generated, and then that SBOM was trimmed, anonymized, and shared with us. Another way is to hand us an SBOM directly. If you already have an SBOM, there's a flag that you can pass to say, just scan this SBOM, don't worry about generating one. That's fine. Now, the SBOM generation, the trimming, the anonymization, that all happens on-device. And whether or not you give us an SBOM, we're still going to trim it before it gets sent to HeroDevs' APIs to return an actual end-of-life report. What that means is the literal data we collect that gets sent to us is, one, anonymous. We don't currently have a user login or account process, so we don't know who's running these scans; we don't know what organization or what individuals are running them. And then what gets sent to us is quite literally a list, in a JSON format, of purls. Those purls, package URLs, are essentially IDs for specific open-source software versions. That's how our system knows what to return. Again, we'll have some documentation linked that shows you the literal code, but you'll see that it has that list of purls that get sent to us. And that is the data that we collect. In addition, though, we do have some click tracking in the UI, which of course is just for making sure there aren't any bugs and that the tool is getting used properly. But that isn't collecting any data about your systems, et cetera. All right. One thing I do want to mention for folks looking at the chat: Kevin just jumped in. Kevin is the engineering manager on the end-of-life side of things, so if I misspeak about some of the ways things operate technically, Kevin is going to be an authority on how that works. All right. I see we have a question from John: how long do we have to review the web report before it expires? A fantastic question.
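Before the next answer, here is a rough sketch of the "trim and anonymize" step just described: reducing a CycloneDX SBOM to a bare purl list. The payload shape shown is an assumption for illustration; the forthcoming documentation will show the literal snippet:

```ts
// Sketch of trimming a CycloneDX SBOM down to just the purl list. The payload
// shape here is assumed for illustration, not the documented wire format.
import { readFileSync } from "node:fs";

const bom = JSON.parse(readFileSync("bom.json", "utf8"));
const payload = {
  purls: (bom.components ?? [])
    .map((c: { purl?: string }) => c.purl)
    .filter((p: string | undefined): p is string => Boolean(p)),
};

// e.g. { "purls": ["pkg:npm/braces@2.3.2", ...] } -- no file paths, no author
// or organization identifiers, just version IDs for the lookup.
console.log(JSON.stringify(payload, null, 2));
```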
So, John: that web report is sitting there right now in perpetuity. It is available; it does not expire. Now, you'll notice that the web report has a GUID in the URL. If you looked at my URL bar there, that is unique and generated on the fly. In that web report, once again, there's zero identifiable information, so the web report is anonymous in that sense. Now, I do want to be clear: over the coming months, we are working on user accounts and logins. All of these reports will ultimately be behind an account creation flow and will be secured in that way. And at that point, you will have the ability to delete or remove any given reports, scans, et cetera, that you no longer need. Okay, so let's see here. We have another question coming in. This is from Perrine. Perrine asks: your example has a lot of EOL packages; what is the best way to prioritize them? This is a fantastic question. There are a number of ways to prioritize end-of-life packages. Now, tucked in this question, and I hope I'm not reading in too much here, Perrine, is that my example does have a lot of end-of-life packages. However, if you start scanning a very standard, what you'd expect as an enterprise-level, project or application, you'll actually see that there is a long tail of end-of-life packages that get surfaced. The first reactions we got when building this data set and scanning were, of course, holy smokes, this is a lot of end-of-life stuff. And it makes the prioritization question far more salient: how do I decide what to tackle first? So there are a couple of angles that we recommend as best practice, and again, we have some documentation, and we're going to be releasing some example charting for how to do this in the coming weeks. But there are a few ways to do prioritization. Step one is to understand the risk inherent in the actual open-source version you're using. We think of risk in two different ways. First, there's security risk. This is what most people think of when they think about end-of-life software. They think, well, it's a security risk, because if CVEs pop up or whatever the case is, it's not going to get patches, it's not going to be secure, et cetera. The first step is to prioritize based on what is riskiest: show me the CVEs that are most impactful, that might be critical. Most of the time, your SCA tools are already trying to identify this for you; if you use a Snyk or a Mend or a Black Duck, they're also trying to give you this information. The second form of risk is what we refer to as technical risk. This is very similar to tech debt, which is the idea that, okay, maybe this given package version is end of life; however, it's particularly technically risky. The reason for that might be that it's a large framework, which means it's deeply embedded in my application. It's one thing if some very specific, niche open-source package I pulled in, a Font Awesome or the like, is end of life. It's like, okay, but there's no real technical risk; I can just go upgrade it. If it's a font or whatever the case might be, I'm simply not concerned about the risk of having to suddenly migrate it should some critical CVE pop up. As opposed to, you know, I'm on Vue 2.7, right? If that had a vulnerability pop up, it's already end of life, and my team is going to have a tremendous amount of work required to actually make that migration to a supported version occur. So: security risk, technical risk.
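One way to operationalize those two lenses is sketched below, with made-up weights. This is a back-of-the-napkin triage heuristic, not a HeroDevs recommendation:

```ts
// Combine the two risk lenses into a single triage score. Weights are
// illustrative assumptions; tune them to your own risk appetite.
interface Finding {
  purl: string;
  maxCvss: number;         // security risk: worst active CVE score, 0 if none
  majorsBehind: number;    // technical risk proxy: distance from a supported major
  deeplyEmbedded: boolean; // core framework (e.g. Vue) vs. a leaf utility
}

const priority = (f: Finding) =>
  f.maxCvss * 10 +               // security risk dominates
  f.majorsBehind * 2 +           // harder migrations deserve earlier planning
  (f.deeplyEmbedded ? 15 : 0);   // embedded frameworks over leaf packages

const triage = (findings: Finding[]) =>
  [...findings].sort((a, b) => priority(b) - priority(a));
```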
Now, continuing with prioritization, the final layer, the one inside of that which I alluded to, is understanding the impact or relevance of the package and whether it is a direct or a transitive dependency. Anybody on the call that's a developer probably knows the hardship or the pain of trying to remediate a single package that has some sort of problem. Most of the packages in that scan I showed in the demo aren't even direct dependencies. That means they are dependencies of the direct dependencies that that Nuxt 2 application pulled in. Now, the problem with that is that from a prioritization or even a remediation perspective, your options for what you can do about a transitive are limited by the direct dependency that's pulling it in. Meaning, if a transitive has a CVE on it, but its version range is bound to the direct dependency, then I can't even go in and make that upgrade without likely upgrading the direct dependency itself. And that has a cascading effect throughout your application, and the work goes from a simple migration, a simple upgrade, to a much more sizable project. So what often happens is a security team or an auditor will come in and say, hey, there's a vulnerability on this package, you need to fix it. And it looks small. An engineer goes in, takes a look, and goes, oh, shoot. Well, it's a transitive of a transitive of this direct. Do we actually have to? I don't think it really applies. Sometimes there's paperwork that has to be filled out to say, we're not going to worry about this. Other times, if they do have to worry about it, now it's a much bigger deal. Now we're having conversations with engineering managers and security teams and product managers. Does the roadmap have to shift? All of that churn, all of that effort, can be preempted if this information is available upfront. So what we're doing with our tool here is giving you not just the full perspective of everything that's end of life, but, in the coming weeks, also more information about transitive versus direct dependencies, production dependencies versus dev dependencies, et cetera, which we recommend you factor in at the start, right when those scans are running automatically, to get a sense of, well, the migration path for package XYZ isn't just go to the next major; it's actually more involved, because we now see it's all connected: this is a transitive, this is a whatever. Again, there's a lot more there that we're going to be building on, but that's the next big push in terms of improving our dataset to help you make those prioritization decisions. All right, let me take a look at what we have next. What types of SBOMs does your system support? This is a great question. So there are a couple of different SBOM formats, two of which are the most ubiquitous. And actually, let me pause for a second, because I know different people in different departments may or may not be as familiar with the concept of an SBOM. An SBOM is a Software Bill of Materials. These are industry-standard formats, essentially lists that are generated off of manifest files, as you saw in the demo that I presented. So there are a few different formats that are rather ubiquitous. The one we focus on and use the most is called a CycloneDX-formatted SBOM.
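Going back to the transitive-dependency point for a moment: the version-range constraint is easy to check mechanically. Here is a small sketch using the semver package; the braces example mirrors the demo, under the assumption that a hypothetical direct dependency pins it to the 2.x line:

```ts
// Sketch of the transitive constraint described above: if the patched version
// of a transitive falls outside the range its direct dependency declares, you
// cannot bump the transitive without upgrading the direct dependency too.
import { satisfies } from "semver"; // npm install semver

function canBumpTransitive(patchedVersion: string, rangeFromDirectDep: string): boolean {
  return satisfies(patchedVersion, rangeFromDirectDep);
}

// braces 2.3.2 is EOL with a known CVE; suppose the fix lands in 3.0.3 and a
// direct dependency declares "braces": "^2.3.0" (a hypothetical pin).
console.log(canBumpTransitive("3.0.3", "^2.3.0")); // false -> the direct dep must move first
```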
Back to SBOM formats: there's another type of format called SPDX, arguably a little more ubiquitous, though in some cases a little more dated than CycloneDX. Our system will accept either; those are the two that our system supports, to answer your question directly, Taylor. A few notes on this, though. While we support CycloneDX and SPDX, many other systems that you have are likely generating, or can generate, these SBOMs. Many of the systems I alluded to, the Snyks and the Mends, et cetera, these are the software composition analysis tools out there, also generate SBOMs. So the point of our tool is that you can use it in conjunction with those tools I just mentioned and just have it ingest the SBOMs those tools generate. Or, if you don't have one easily available, like you already saw in the demo, our system will just generate one for you. You can also save that generated SBOM on your machine locally; if you do, what you are saving is a CycloneDX-formatted SBOM. All right, so we have another question here from Matt. Matt asks: I already know my leadership team wants to know if we can integrate this with our SCA tool. Yes, so I alluded a little bit to this a minute ago. While we are working on a few partnerships with SCA tools, we view our data set here as really helpful for folks in the SCA space, and there are some ways that you can take our data and collate it with, or append it to, your SCA tooling. Now, I do want to be really clear: right now, we do not have any pre-formatted or pre-built integrations with SCA tooling. However, we do have best practices for how to use what we have in conjunction with the data that your SCA tool is giving you. So let me give you two examples of how we recommend doing that. The first one is the actual data analysis component. Let's say you're scanning ten different projects inside of your organization, and those ten projects all have different levels of relevance and risk in terms of how much your organization cares about and invests in them. And all of those scans are producing these JSON files with all that output. So the best practice is to take that JSON and load it into the BI tool that your organization may be using. Some of you may be using Looker, Domo, Snowflake, Power BI, Metabase; there's a long list of standard BI tooling available where you can view these in the context of a single pane of glass. You can view everything, all those ten projects and everything that's end of life, and create filters similar to the insights filters you saw in the demo. Then you do the same thing with your SCA tool, appending the additional context from your SCA tool to that same data set. The second way that we recommend doing this is by looking inside your SCA tool and viewing the rules engine, or standards engine. Different SCA tools have different names for these. However, when you set up that scanner, that software composition analysis scanner, it will have a way to create standards or rules. These are like checks. So when a build runs in the CI process, it's going to say, well, if there's a package that has a critical CVE, we're going to create a rule that flags it, that alerts somebody, that sends a message, and the whole organization, the engineers, are going to have to respond to security folks, et cetera. What we recommend is taking those rules and using our EOL reason data to map onto those rules inside of that same BI tool.
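As a sketch of that first recommendation, here is one way to flatten several scan JSONs into BI-ready rows. The project names and the report shape are illustrative assumptions:

```ts
// Merge several per-project scan exports into one flat table for a BI tool.
// File names and the report's field names are assumptions for illustration.
import { readFileSync, writeFileSync } from "node:fs";

const projects = ["checkout", "auth-service", "reporting"]; // hypothetical projects
const rows = projects.flatMap(project => {
  const scan = JSON.parse(readFileSync(`scans/${project}.json`, "utf8"));
  return (scan.packages ?? []).map((p: { purl: string; status: string }) => ({
    project,
    purl: p.purl,
    status: p.status, // e.g. "EOL" | "OK" | "UNKNOWN"
  }));
});

// One consolidated file to load into Looker, Power BI, Metabase, etc.
writeFileSync("eol-rollup.json", JSON.stringify(rows, null, 2));
```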
So what does that rules mapping mean practically? Well, many organizations would have a rule, for example, that they don't want applications to be built in the CI process if they have a CVE with a CVSS score greater than, I'm going to pick a really high number, 9.5, a critical CVE. We can do the same thing using our end-of-life reasons, but from the end-of-life side of things: if we're looking at an end-of-life package that also has a critical CVE, we want to flag that as well. So the point here is we're creating alignment between the standards, even if we don't have a direct technical integration yet. All right, the next question, I think, we have from Ethan: is there a way of having the report downloaded on disk instead of having it published online? This is a great question. The answer right now is no, there is not a way to do that. However, we are working over the next couple of sprints to add some additional functionality that gives you this type of control and these options. Obviously, we deliver that JSON payload, which is the end-of-life report, to your machine; from there, of course, you can download it, just like you said, on disk and have it available there. Right now, the publishing online is just an out-of-the-box piece of functionality. In the next couple of sprints, we're going to be adding some functionality that gives you more control over whether or not that gets published online. And again, once we have user accounts in place, you'll have significantly more control as to where this data is or isn't available to view. All right, let's take a look here. Awesome. Glad to see Perrine says that this looks great and thinks it's gonna help their team prioritize, since they've been struggling with some of these issues. Yeah, absolutely, absolutely. One of our goals with this data set is to reduce the amount of spin that happens inside of organizations when they get flagged by that compliance audit or that pen test or whatever the case is. That flagging causes all this churn where now engineers have to go do research, and engineering managers have to go do research, and then they have to figure out: do we have to do the migration now? What are the migration options? All of that additional work can be preempted with proactive scanning and flagging configured to align with whatever compliance rule you're subject to. So the point is we want to save everybody time so they don't have to worry about that, by being aware upfront and being able to take action upfront. A lot of those audit cycles happen annually or every six months or whatever the case is, and many of you who have lived through it know, right? Oh, well, when March comes up, just get ready; we're going to have to react to something. We want to reduce how often you might be dreading that annual check or scan or pen test. All right, let's take a look here. All right, we're coming up on, let's see here. So we have another question from Linda: does this tool show paths to migrate, or dependencies that could affect migration? Ah, so in terms of actually flagging and saying, hey, there are dependencies that are going to affect any kind of migration: we don't have that data present yet, but we are moving in this direction, and we are actually working right now on adding data about dependencies that would affect migration paths.
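To show what that rule alignment from earlier can look like in practice, here is a minimal CI-gate sketch that fails a build when a package is both end of life and carries a critical CVE. The report shape is assumed, and the 9.5 threshold mirrors the example above:

```ts
// Sketch of a CI gate aligned with the rules-engine example above. The scan
// report's field names are assumptions; adapt them to the real export.
interface ReportRow { purl: string; isEol: boolean; maxCvss: number }

function ciGate(report: ReportRow[], cvssThreshold = 9.5): void {
  const violations = report.filter(r => r.isEol && r.maxCvss >= cvssThreshold);
  for (const v of violations) {
    console.error(`BLOCKED: ${v.purl} is EOL with CVSS ${v.maxCvss}`);
  }
  if (violations.length > 0) process.exit(1); // non-zero exit fails the CI step
}
```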
Coming back to Linda's question, I'm going to split it up into a couple of different components that are relevant, I think, to the underlying question: can you tell us what the migration path is, what dependency changes would be required, et cetera, whenever we're looking at something that's end of life that we might need to take action on. As many of you know, large frameworks often publish migration guides. Vue and Angular and a lot of these other large frameworks out there are going to publish: here's the way to get from this version to this version. You jump from, say, 3.1 to 4.2 to then 5.0, or whatever the case is. Now, we're looking into first simply adding some links to those publicly available migration guides. We haven't decided yet if it makes sense to digitize that migration-path-jumping process or simply make sure that we're publishing the right path for you to research and find that information. So that's something that we're actively investigating. Feel free, in the chat here, if you have requests or you'd like to see certain bits of data added; I am the product manager on this specific product, so you're doing me quite the favor if you have questions or things that you would like added to it. That feedback is all welcome. But the point here is that we will be adding, in the future, information about where you can learn about migrations. Now, there's a second part of this question we could talk about briefly, which is those dependencies that rely on each other. If you have a direct dependency of a project, let's take Vue, Vue 2.7 is end of life, to keep using that example, then Vue pulls in a bunch of transitives, right? In addition, Vue also has the concept of a peer dependency. Vuetify, for example, is a peer dependency, meaning it's not pulled in by Vue, but most of the time, if you're using Vue, you're probably gonna use Vuetify, and they work very closely together. Now, all of these connections, transitive, peer, et cetera, create these dependent versioning schemas where, well, Vue 2.7 only works with Vuetify X.X and also only works with Vue Router and all the other additional Vue components and package utilities that get pulled in as well. So we are going to be adding data around those transitive dependencies inside of that JSON structure and inside of the web report that I showed you here. This is going to allow you to know upfront that of those 200 end-of-life packages, 80 percent of them are transitive and are constrained by the other 20 percent. Once you know that, you can start actually seeing what would be required to upgrade, fix, resolve, whatever the case is, remediate that given end-of-life transitive dependency. All right, let's see what else we have here. Okay, perfect. One of the things I do want to call out here in the chat: Jared mentioned, for anyone that may have joined a little bit late, that there is going to be a recording of this webinar generally available within a few hours once this concludes. So if you missed the original demo, if you missed any of the slide presentation, that is all going to be published and available. We're also going to be sharing, before we conclude here, a couple of links to getting started. The tool, everything you saw today, is free to use for everyone on this call and is also available right now. We have documentation.
We have a landing page on our marketing site that provides a little bit more information and context, something that you can share with anyone else in your organization who may not have been able to attend but might be a stakeholder in actually getting this data or in approving the setup of this type of scanning. All right, I'm going to give everybody just another minute to see if there are any other questions before we officially wrap up here. Thank you all for the engagement so far. This has been fantastic; these are very good questions, and I'm excited to hear how using this tool goes for several of you. All right, perfect. Well, I think we are clear to go ahead and wrap up. So thank you, everybody, so much. I really appreciate your time here today. I'm going to go ahead and just put this final slide up; we're going to post these links in the chat below. There are two ways to get started using this tool today. That first link up there, like I said, is going to take you to the landing page on our marketing site. The second link is a link to our documentation. That documentation is going to be your first step into actually using the tool and then understanding how to interpret your results and how to get set up. For any of you on this call who are interested in getting some assistance setting this up in a CI process, we do have documentation on that, and we are also more than happy to meet one-on-one with folks on this call and help get any of you set up. We can get the automated scans running and triggering and make sure that your data is configured and flowing into the correct systems that you need it to move into. All right, let's see here. Yes. Yes, Sonia, there will be a recording sent out. And whether or not you can share it internally: yes, you should be able to. I see Jared hopping in there. So if you need some other leaders inside of your company to take a look at this, to get approval from, you can just send this recording to them and they'll be able to view it. Awesome. Well, thank you, everyone. If you have any questions, you are more than welcome to send us a message through our support channels. There's also a support link on the tool itself, through which you can add any additional questions or reach out directly to me. I hope you all have a fantastic rest of your day and rest of your week.

HOSTS
Isaac Wuest
DATE
October 7, 2025
DURATION
1 hour