Hacker News
[dupe] CrowdStrike CEO: "defect in a single content update for Windows" (twitter.com/george_kurtz)
27 points by sgammon on July 19, 2024 | 34 comments


https://infosec.exchange/@littlealex/112813425122476301

> Too funny: In 2010 McAfee caused a global IT meltdown due to a faulty update. The CTO at that time was George Kurtz. Now he is CEO of #crowdstrike > https://www.zdnet.com/article/defective-mcafee-update-causes...


If you know anything about how MCAF was organized, or about what GK does at Crowdstrike, the idea that he was in some way involved with MCAF's AV or with the installers/updaters at either company is especially funny.

MCAF was a conglomerate of security products driven by sales of their AV suite, which might as well have been developed on a different planet for all that it mattered to the "security people". Same with SYMC.


He was the CTO, what do you mean he wasn't in any way involved?


People on HN have a very, very strange idea of what a "CTO" does. Kurtz was the CEO of Foundstone when MCAF bought him. Foundstone was not small. The most important skill most CTOs have is identifying themselves as "the CTO" on calls with customer prospects.


I'm trying to imagine anyone on the CrowdStrike board saying, "As a critical infrastructure single-point-of-failure, we gotta fortify ourselves with McAfee culture, from the top, down!"

In their defense, maybe they don't care about the service they claim to provide, and are just looking at it as a money machine black box.


Their stock is down only 9%; barely an event for a tech stock. He didn't even apologize.


What he said at the start of the CNBC interview sounded like an apology to me. The first thing that came to mind was: that's great, but at the same time it might be a problem in a lawsuit (IANAL).


Move fast, break things.


And move on to the next cushy job paying millions.


Is this the part where someone says the rest of us are working too damn hard?

Because on days like today, I do wonder….


The problem isn't working too hard. It's working for others. We should be starting companies and then not selling them to FAANG.


If that can break a kernel driver and lead to a blue screen, then the code in the kernel driver is shit, and that means the product is shit and a liability.


... and security vendors themselves become the clear and present danger if they're able to push untested kernel drivers directly into production systems, bypassing testing. A significant portion of these production systems are 'life-critical' (i.e. people die if these systems fail).


> (i.e. people die if these systems fail).

People definitely died today. 911 was down for entire states, dozens of hospitals cancelled surgeries, pharmacies couldn't serve customers, etc.


100% agree.

I'm starting to think these vendors are a higher risk and cost than not having them.


> Kurtz @ CrowdStrike: This is not a security incident

Of course it is a security incident. What planet is this guy on??


It's a win for security if your computer can't do anything!


Failure to secure availability is a loss.


Why was there no gradual rollout of the update?


Security companies are always cowboys.

Gradual rollout? No no, we need the ability to respond to attacks and vulnerabilities fast.

Limiting the service's access and power, like we do for every other service? No no, we need to run as root and access every single user's SSH private keys and browser cookies. How else would we check they're encrypted, in line with your IT policy?

Secure boot? You'll have to bypass it for us so our 'security' kernel module can load, go into the BIOS and install this special key of ours.

Strict code reviews? We consider this bash script run as root to be 'configuration' rather than code.

Installing all software updates? No no, although we need to roll out our changes immediately, we don't support a new LTS Ubuntu release until it's been out for 6 months....


Rushed to finish enough Jira points by the end of the sprint, because in a corporation those points matter more than a sensible periodic slowdown.


As a QE, I wonder what the QA process looks like at this company. Someone put their stamp of approval on this release. Or did they?



There's no part where he apologises


I’m comfortable with him holding apologies until after service is restored.

A lot of IT staff, including myself, woke up to this and are focused on triage and restoration.

Once the noise has died down globally, I’ll expect an apology, among other things.


I don’t think their crisis management team has formulated an apology in corporate speak yet, but he was on NBC (remotely) saying “we apologize”.

I’d rather a commitment to a thorough and public accounting of what went wrong. Not just something locked behind the portal login. They owe the world answers about why this wasn’t caught in testing.


Is it wrong to expect an apology? (Not trying to be rhetorical; I'd genuinely like to hear more expert opinions.)


These people were talking about this 4 years ago:

https://www.reddit.com/r/crowdstrike/comments/ie8wos/sensors...

...but honestly these types of bugs have been inherent in software since day 1. We have also had canary deployment models for ages (see the sketch at the end of this comment), so for this to happen says something about the IT administrators of the companies that were impacted.

I don't think CrowdStrike bears much of the fault here. I recall a similar thing happening with Norton in the early 2000s, and many others since then.
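
To make the canary point above concrete, here is roughly what IT shops have been doing forever: split hosts into rings and only widen the rollout once the first ring has soaked and stayed healthy. A minimal Python sketch; the ring names, host lists, soak time, and health check are all invented for illustration and not tied to any vendor's tooling:

    # Hypothetical staged rollout over host "rings"; every call below is a
    # stand-in for the real deployment and monitoring plumbing.
    import time

    RINGS = {
        "canary": ["host-001", "host-002"],                  # a handful of expendable boxes
        "early":  [f"host-{i:03d}" for i in range(3, 50)],
        "fleet":  [f"host-{i:03d}" for i in range(50, 1000)],
    }

    def push_update(host: str, version: str) -> None:
        print(f"pushing {version} to {host}")                # stand-in for the real deploy call

    def healthy(host: str) -> bool:
        return True                                          # stand-in for a real health probe

    def staged_rollout(version: str, soak_minutes: int = 60) -> None:
        for ring_name, hosts in RINGS.items():               # canary -> early -> fleet
            for host in hosts:
                push_update(host, version)
            time.sleep(soak_minutes * 60)                    # let the ring soak
            if not all(healthy(h) for h in hosts):
                raise RuntimeError(f"{ring_name} ring unhealthy, halting rollout of {version}")

The point isn't the code, it's that a gate between rings exists at all.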


Check out this Reddit comment (not mine)

https://www.reddit.com/r/crowdstrike/comments/1e6vmkf/bsod_e...

Quote: "Multiple sensor versions apparently. I checked we haven't received a sensor update since the 13th so it must be something else they're updating to cause it. So much for our Sensor Update Policies avoiding things like this..."

Edit to add: Based on the Reddit comment and this thread, https://news.ycombinator.com/item?id=41004103, I would put this on CrowdStrike doing something the customer could not avoid (though CrowdStrike itself could have avoided it). But maybe there are some customer settings that could have prevented this.


IMO the fault lies 100% with CrowdStrike. The software doesn't only run on mission-critical systems, but also on systems where an automatic update is okay and even wanted, and where the operators may simply not have the capacity to run tests beforehand. Many people trust CrowdStrike, and yeah, sure, in a perfect world everyone would test before updating, but in reality (as we now see) that is not always the case. Not because people are actively sabotaging themselves, but because their priorities lie elsewhere; that's exactly why they buy high-quality software and trust its automatic updates not to cause a total blackout.

I install software -> PC crashes and can't recover itself -> it's the software's fault. Sure, I could have prevented it, but that doesn't change who's at fault.


Crowdstrike bears the responsibility for the effects their product has on the world. Firms have the responsibility to use canary deployment and other practices to mitigate the potential harms third party products might cause.

Crowdstrike deployed a flawed update resulting in widespread harm. They are responsible for that harm. Companies failing to mitigate that harm through responsible preventive practices are also at fault.

Nothing will change. The people in charge of purchasing and deploying enterprise scale kabuki security software like this aren't interested in accountability or real world efficacy, it's entirely about crafting a narrative sufficient to remain employed. The game isn't security or practicality - box checkers gotta check boxes.


Should CrowdStrike themselves not follow a canary deployment model?
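
Even something as crude as the sketch below would count: offer the content update to a small fraction of endpoints, watch crash telemetry, and only widen the rollout if it stays quiet. The stage fractions, soak time, threshold, and every function here are made up for illustration and say nothing about CrowdStrike's actual pipeline:

    # Hypothetical vendor-side gate: widen a content-update rollout only while
    # crash telemetry from hosts that already received it stays below a threshold.
    import time

    STAGES = [0.001, 0.01, 0.1, 0.5, 1.0]           # fraction of the install base per step

    def publish(version: str, fraction: float) -> None:
        print(f"offering {version} to {fraction:.1%} of hosts")   # stand-in for the update channel/CDN

    def crash_rate(version: str) -> float:
        return 0.0                                   # stand-in for querying BSOD/crash telemetry

    def rollout(version: str, soak_seconds: int = 3600, max_crash_rate: float = 0.001) -> None:
        for fraction in STAGES:
            publish(version, fraction)
            time.sleep(soak_seconds)                 # let telemetry accumulate
            if crash_rate(version) > max_crash_rate:
                print(f"halting rollout of {version}: crash rate above {max_crash_rate:.2%}")
                return
        print(f"{version} fully rolled out")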


Crowdstrike released a change that should have been caught by automated testing. That does require an explanation, I think, and a change to prevent recurrence.
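
Even a dead-simple pre-release gate would presumably have tripped here: parse the content file, sanity-check its structure, then boot a throwaway test VM with it installed before publishing. A hedged sketch of the structural half; the file header, record layout, and zero-byte check are invented for illustration and don't reflect the real file format:

    # Hypothetical pre-release check for a binary content update.
    import struct
    import sys

    MAGIC = b"CSUP"   # invented header bytes, purely for illustration

    def validate_content_file(path: str) -> None:
        with open(path, "rb") as f:
            blob = f.read()
        if not blob.startswith(MAGIC):
            raise ValueError("bad magic: file is corrupt or truncated")
        if len(blob) < 8:
            raise ValueError("file too short to hold a record count")
        (count,) = struct.unpack_from("<I", blob, 4)
        payload = blob[8:]
        if count == 0 or payload.count(0) == len(payload):
            raise ValueError("no usable records (empty or all zero bytes)")

    def smoke_test(path: str) -> None:
        validate_content_file(path)
        # A real pipeline would go further: install the file on a disposable
        # Windows VM, reboot it, and fail the release if the VM never comes back.
        print(f"{path}: basic structural checks passed")

    if __name__ == "__main__":
        smoke_test(sys.argv[1])

The VM half matters more than the parsing half, but either one is cheap relative to what happened today.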


"CrowdStrike is actively working with customers impacted by a defect found in a single content update for Windows hosts. Mac and Linux hosts are not impacted"



