More interestingly, Cavium (now Marvell) also designed and manufactured the HSMs used by the top cloud providers (such as AWS, GCP, possibly Azure too) to hold their most critical private keys:
Ayup. We use AWS CloudHSM to hold our private signing keys for deploying field upgrades to our hardware. And when we break the CI scripts I see Cavium in the AWS logs.
Now I gotta take this to our security team and figure out what to do.
I'd be surprised if you get anything more than generic statements about how they take security very seriously and they are open to suggestions, but avoid addressing the mentioned concerns directly (and this applies to all cloud providers out there, not just AWS).
I'm sure a few others here would like to see their response as well.
We've had other issues with our CloudHSM instance, especially with the PKCS#1 v1.5 deprecation on January 1. And their support has been pretty dismal. Not expecting much from them at this point.
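For anyone hitting the same deprecation: migrating RSA signatures from PKCS#1 v1.5 padding to PSS is usually a small code change on the client side. A minimal sketch using the Python `cryptography` package (a locally generated key stands in here; with CloudHSM the private key would stay in the device and you'd invoke it through PKCS#11 instead):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in key for illustration; in production the key lives in the HSM.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"firmware-image-digest"

# Deprecated scheme: deterministic PKCS#1 v1.5 padding.
sig_v15 = key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Replacement: randomized PSS padding with MGF1.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
sig_pss = key.sign(message, pss, hashes.SHA256())

# verify() raises InvalidSignature on failure, returns None on success.
key.public_key().verify(sig_pss, message, pss, hashes.SHA256())
print("PSS signature verified")
```

The verifier has to switch padding schemes in lockstep, which is what makes this kind of deprecation painful across a fleet of deployed devices.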
AWS support is pretty fucking terrible generally. We’re a very high rolling enterprise customer and it’s pretty obvious that some of their shit is being managed by two guys in a shed somewhere who don’t talk to each other.
As someone who is deciding between AWS, Google, and Azure - could you give an outline of some of the Azure pain points? Are there any blogs or other articles that outline what your concerns would be?
I'm pretty aware of how painful it can be to configure AWS well: IAM roles, the overly large ecosystem we won't need, and the unmitigated complexity of configuring it all. It's not comforting to think Azure is worse yet.
I work on and off with both, AWS may be more feature complete in some areas but Azure is frankly easier to work with for me, I can actually get support on issues I have from Microsoft. And while I've generally only done so from the large enterprise account perspective, Microsoft is way more open to feature requests/enhancements than Amazon is. I don't have any experience with GCP so I can't speak on that.
Imagine doing a job interview where they ask if you know AWS. Sure, I know AWS, and you explain what you built with Greengrass, Lambdas, RDS, etc., and then get rejected for not knowing AWS lol
Wouldn't such a backdoor invalidate all the promises made by external audits (e.g. https://cloud.google.com/security/compliance/offerings)?
And more importantly, wouldn't it violate the Safe Harbor agreement with the EU, or whatever sham Safe Harbor was replaced with?
As you say, a sham: as long as the Patriot Act is still effectively in force, everyone else is still trying really hard to look the other way (especially while the war is still ongoing!), ignoring the CJEU, which has no choice but to strike down one agreement after another, since they automatically violate the EU Charter of Fundamental Rights: https://en.wikipedia.org/wiki/Max_Schrems#Schrems_I
The Intel Management Engine always runs as long as the motherboard is receiving power, even when the computer is turned off. This issue can be mitigated with the deployment of a hardware device which is able to disconnect mains power.
Intel's main competitor AMD has incorporated the equivalent AMD Secure Technology (formerly called Platform Security Processor) in virtually all of its post-2013 CPUs.
I think Ylian Saint-Hilaire hasn’t been with Intel for about a year now, after some layoffs. As a result the software ecosystem around AMT/vPro is lagging these days.
Hardware-wise nothing changed; it's just even harder for the actual owner of the hardware to use the legitimate management features, while presumably easier for whoever could illegitimately abuse them.
But being able to request it and having a built-in backdoor for anyone with a key are different things. It has happened before that the Chinese government figured out network equipment backdoors that were put in for the US government. All your company secrets are there for the taking for anyone with the resources to figure out that backdoor. Especially now that people know it exists. Shouldn't this at least start the clock on expiring this hardware?
Considering the scale of Amazon and Google, and their involvement with US government agencies, I think it is fair to suspect that there is a lot we don't know about...
Is there anyone here who actually thought cloud provider HSMs were secure against the provider itself or whatever nation state(s) have jurisdiction over it?
It would never occur to me to even suspect that. I assume that anything I do in the cloud is absolutely transparent to the cloud provider unless it's running homomorphic encryption, which is still too slow and limited to do much that is useful.
I would trust them to be secure against the average "hacker" though, so they do serve some purpose. If your threat model includes nation states then you should not be trusting cloud providers at all.
Lots of people believe that. They truthfully believe you can get to the level of AWS, MS, Google, Facebook, or Apple while standing up to the nations that host those companies. I've walked into government employees in the hallways of tiny ISPs; I see no reason at all to believe that larger companies are any different, except where easier backdoors have been installed.
The really concerning part is STILL believing that after the Snowden scandal, after everybody has seen the slides explaining in detail how the NSA sends an FBI team to gather data from (as of 2013) Microsoft, Yahoo, Google, Facebook, PalTalk, YouTube, Skype, AOL, and Apple (with Dropbox being planned).
Also how Yahoo first refused but was forced to comply by the Foreign Intelligence Surveillance Court of Review.
(Note that supposedly, "the companies prefer installing their own monitoring capabilities to their networks and servers, instead of allowing the FBI to plug in government-controlled equipment.")
I don’t know how many believe it and how much is willful ignorance. The big cloud providers make big mistakes but how many trust their organizations to do better against a nation state level actor?
The underlying architectures of our systems are not secure and much of the abstractions built on top of them make that insecurity worse, not better.
For nation state level issues, the solution likely isn’t technical, that is a game of whack-a-mole, it will take a nation deciding that digital intrusions are as or more dangerous than physical ones and to draw a line in the sand. The issue is every nation is doing it and doesn’t want to cut off their own access.
> Lavabit is an open-source encrypted webmail service, founded in 2004. The service suspended its operations on August 8, 2013 after the U.S. Federal Government ordered it to turn over its Secure Sockets Layer (SSL) private keys, in order to allow the government to spy on Edward Snowden's email
> He also wrote that in addition to being denied a hearing about the warrant to obtain Lavabit's user information, he was held in contempt of court. The appellate court denied his appeal due to no objection, however, he wrote that because there had been no hearing, no objection could have been raised. His contempt of court charge was also upheld on the ground that it was not disputed; similarly, he was unable to dispute the charge because there had been no hearing to do it in.
At my Fortune 250, our threat model apparently includes -- rather conveniently and coincidentally -- everything! Well, everything they make an off-the-shelf product for, anyway. It makes new purchasing decisions easy:
"Does your product make any thing, in any way, more secure?"
"Uh... Yes?"
"You son of a bitch. We're in. Roll it out everywhere. Now."
This reminds me of our own security team, who as far as I can tell do nothing but run POCs of new security tools. And then maybe once a year they actually buy one, generating a ton of work (for others) to replace the very similar tool they bought last year. Seems like a good gig.
Occasionally security products turn into malware delivery platforms as well, because they run highly privileged, are sometimes more shoddily developed than what they're protecting, and have fewer eyeballs on them than the vanilla operating system.
And then when there is a security issue, you ask them to share the log files from all their spyware, and suddenly half the stuff you need isn't there because "we did not get that module."
Ahh, I've been there. I'm sure no concern is given for usability of the result.
Welding your vault shut may make it harder for thieves to break in, but if your business model requires making deposits and withdrawals, it's somewhat less helpful.
Luckily, all but a tiny portion of security products have a door you didn't know about before, which you can open if you ask support nicely enough. So you can still get your stuff out after you weld the vault shut.
I’m not privy to those discussions, but it certainly doesn’t feel like they’re happening. We implement every security “best practice,” for every project, no matter how big or small. We have committees to review, but not to assess scope, only to make sure everything is applied to everything. Also, we have multiple overlapping security products on the corporate desktop image. It feels EXACTLY like no one has ever tried to gauge what a compromise might cost.
It's interesting to consider the people who, with the very same set of facts, come to completely opposite conclusions about security.
For instance, Amazon has a staff of thousands or tens of thousands. To me, that means they can't possibly have a good grasp on internal security, that there's no way to know if and when data has been accessed improperly, et cetera. To others, the fact that they're a mega-huge company means they have security people, security processes and procedures, and they are therefore even more secure than smaller companies.
For one of the two groups, the generalized uncertainty of the small company is greater than the generalized uncertainty of the large. For the other, the size of the large makes certain things inevitable, where the security of smaller companies obviously depends on which companies we're talking about and the people involved. More often than not, people want to generalize about small companies but wouldn't apply the same criteria to larger companies like Amazon.
There's a huge emotional component in this, which I think salespeople excel at exploiting.
It fascinates me, even though it's a never-ending source of frustration.
Even if you trust your nation state 100% having a backdoor means it has already fallen into the wrong hands. That's because 'nation state' is not synonymous with 'people running the nation state'.
Literally hosed. There's a funny jargon term, "rubber-hose cryptanalysis," for the method where you beat someone with a rubber hose until they give you the key. It's 100% effective against all forms of cryptography, including even post-quantum algorithms.
You would be surprised: for some percentage of people this would not work. Some even like it. Some have a death wish and want to be a martyr. Some people blow themselves up to further a cause. Also, under heavy stress, memorized keys sometimes cannot be recalled at all.
It's probably slightly less effective than threatening to kill family members but probably more than threat of jail time.
Either way, you require someone alive and with mental awareness. The mind-reading tools found in science fiction haven't been developed yet.
It doesn't matter, something will be found that will coerce them into talking. Nobody is an island. Everyone has a breaking point, if it's not rubber hoses, it's socks full of rocks, or it's bottles of mineral water, or any number of methods. Don't think for a second that someone hasn't thought of a better way to get information out of somebody else.
Terrorists are generally highly altruistic, not psychopaths.
It’s a lot easier to blow yourself up (or to spread an ideology which encourages it) for a cause that you believe is helping people, in particular _your_ people.
The terrorists that blow themselves up and that blow other people up are usually misguided brainwashed angry young men. It's nothing to do with ideology, everything to do with power. Or did you think blowing up schools full of girls is something people genuinely believe helps their people, to give just one example?
Ordinary people just want to be left alone. Old guys wishing for more power will use anything to get it, including sacrificing the younger generations.
No, it's something that a bunch of old guys with issues told them helps their people.
Beliefs stop being beliefs when they are no longer about yourself but about how other people should live. Especially when those other people loudly protest against how you think they should be living. Killing them is just murder, not the spreading of ideas.
But hey, those human rights are just for decoration anyway.
The old men persuade the would-be suicide bomber that educating women will liberate and liberalize them, and that this is counter to the interests of those who prefer the traditional order of society. Are they even lying?
You're deeply mistaken if you think there aren't men who genuinely prefer the traditional order of women being subjugated by men.
1. Not everybody shares your values.
2. People who don't share your values are not necessarily brainwashed.
3. People may do things that are irrational under your system of values, but rational under their own.
And BTW, there is not a single fighting force in the world that doesn't have old men persuading young men to sign up and risk throwing away their lives. There's not a whole lot of difference between regular soldiers persuaded to join a forlorn hope or banzai charge against a defended position and a suicide bomber or kamikaze.
That's actually not true. It can do nothing about M-of-N cryptography. (That's when a key is broken up into N parts such that at least M of them, with M less than N, are required to decrypt.) It doesn't matter how many rubber hoses you have: one person can fully divulge or give access to their share and the key is still safe.
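The M-of-N scheme the parent describes is Shamir's secret sharing, and a toy version fits in a few lines. This is a sketch over a prime field, not production code (a real deployment would use a vetted library and handle byte encoding):

```python
import random

# Toy Shamir secret sharing: any m of n shares reconstruct the secret;
# m-1 shares reveal nothing about it.
P = 2**127 - 1  # a Mersenne prime, large enough for a 16-byte secret

def make_shares(secret: int, m: int, n: int):
    # Random polynomial of degree m-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(m - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the polynomial at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

secret = 123456789
shares = make_shares(secret, m=3, n=5)
assert reconstruct(shares[:3]) == secret   # any 3 of the 5 suffice
assert reconstruct(shares[1:4]) == secret
```

With the shares held by different people in different jurisdictions, no single coerced holder can hand over the secret, which is exactly the property being argued about here.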
I always giggle a little when really smart people forget thugs exist and do what they’re told. If that includes breaking the knees of M people to get what they’re after, then M pairs of knees are gonna get destroyed.
This isn’t hard to understand, but it’s easy to forget our civilization hangs by a thread more often than any of us care to admit.
I don't remember the provenance of the quip, but somewhere at a DEF CON or a HOPE, I heard, "The point of cryptography is to force the government to torture you."
They're perfectly ok with that, and depending on where you live this may happen in more or less overt ways. If the government wants your information, they will get your information. Your very best outcome is to simply rot in detention until you cough up your keys.
Now that I think about it, I'm pretty sure it was a session about root zone security, and Adam Langley was in the room. I was thinking, damn, kinda sucks to be the guy that holds Google's private keys. They want someone's information, so they let you rot...
You know there are other ways to have a video and send it to people than YouTube, right? You can just email a link from dropbox or gdrive, or an attachment, or send a WhatsApp/Telegram/etc. message, send a letter with a USB drive, etc.
Bob, Jon, and Tom have pieces of the key. Bob and Jon are in the US, get arrested, and are commanded by a court to give up their pieces. Tom is the holdout. The US will issue an international arrest warrant, and now Tom can never safely fly again, or the plane will be diverted to the nearest US-friendly airport, where he will be extradited. So, yeah, "safe" is very situational here.
This probably works if each person has a cyanide+happy drug pill or a grenade and is willing to sacrifice themselves and the rubber-hoser(s). I think that requires a rare level of devotion. This process must also disable a simple and fragile signalling device to let the others know what's coming.
This would not work well, because you can't do it in a secret manner. Overuse of rubber-hose cryptanalysis will become known, and there will be public backlash.
Seems like the NSA is threatening everyone of arrest (=state-organized violence) if they don’t secretly give them keys, and Snowden revealed it, and there is no public backlash.
I mean, in the end everything is people, just like Logan Roy said in Succession. Cryptography and software protections are the same. It's a great quote that is very true:
> "Oh, yes... The law? The law is people. And people is politics. And I can handle the people."
Addendum: if your threat model includes any nation state that has significant ties to the nation state that hosts your physical or transit infrastructure, you're hosed.
The US authorities can make the same orders that they made with LavaBit (i.e. ordering them to produce a backdoored build and replace yours with it), and they can make them secretly. Given that Signal by design requires you to use it with auto-update enabled (and, notably, goes to some effort to take down ways of using it without auto-update), and has no real verification of those auto-updated builds, I would consider it foolish to rely on the secrecy of Signal if your threat model includes the US authorities or anyone who might be able to call in a favour with them.
Signal started keeping sensitive user data in the cloud a while ago. All the information they previously bragged about not being able to turn over because they didn't collect it in the first place: well, they collect it now. Name, photo, phone number, and, worst of all, a list of all your contacts, stored forever.
It's not stored very securely either. I wouldn't doubt that three letter agencies have an attack that lets them access the data, but even if they didn't they can just brute force a pin to get whatever they need.
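To put numbers on how weak a short PIN is against offline brute force (assuming an attacker gets past the server-side rate limiting; the KDF and all parameters below are illustrative stand-ins, not Signal's actual scheme):

```python
import hashlib, time

# Hypothetical stored value: a 4-digit PIN stretched with PBKDF2.
salt = b"example-salt"
iterations = 1_000
stored = hashlib.pbkdf2_hmac("sha256", b"1234", salt, iterations)

# Offline search: the entire keyspace is only 10,000 candidates,
# so key stretching alone cannot save a short PIN.
start = time.time()
for pin in (f"{i:04d}" for i in range(10_000)):
    if hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, iterations) == stored:
        break
print(f"recovered PIN {pin} in {time.time() - start:.2f}s")
```

Even with an expensive memory-hard KDF, multiplying per-guess cost by a keyspace of 10^4 still lands at trivial totals; that's why the security of such schemes hinges entirely on the rate-limiting hardware, which is exactly the part being questioned here.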
I mean, there's a reason that the government was involved with setting up the first cell networks. No assumptions need to be involved. They ARE all compromised.
You’re missing the point. It was designed to be transparent to interception efforts up front, so you can’t tell if you’re being surveilled, lawfully or not.
I think there’s such a thing as plausible deniability here. We didn’t know for certain so we weren’t culpable, but now that it’s public record, we really have to do something about it or risk liability with our customer data.
You don't need to think about this in a binary fashion. You can split your trust across multiple entities. Different clouds, different countries, or a mix of cloud and data centers you own.
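The simplest form of that split is an N-of-N XOR scheme, where every holder must cooperate to recover the key and any subset of shares is indistinguishable from random noise (for a threshold variant where only M of N are needed, you'd use Shamir's scheme instead). A sketch:

```python
import secrets

def split(key: bytes, n: int):
    """Split key into n shares; XORing all n recovers the key,
    and any n-1 shares are uniformly random on their own."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine(shares):
    out = shares[0]
    for s in shares[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

key = secrets.token_bytes(32)
parts = split(key, 3)              # e.g. one share per provider/country
assert combine(parts) == key
assert combine(parts[:2]) != key   # any two alone reveal nothing
```

Store one share per cloud, country, or on-premises vault, and no single provider, or the jurisdiction it sits in, can reconstruct the key by itself.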
This breeds the familiar scenario where a group will start saying the link between the two is so clear that there must be a connection. Then you’ll get another group calling the first group conspiracy theorists, and say it’s just a coincidence of probability.
Narrative control and information modeling is so powerful it’s scary.
That's not how this works. Plenty of conspiracy theories are just that: idiots pretending they have special knowledge, or believing that behind everything that doesn't quite mesh with their worldview there is someone pulling invisible strings. Those people have a mental issue. The big trick is to be able to tell the two apart, not to categorically assume that because some conspiracy theories with a whole bunch of evidence behind them turned out to be true, all of them, even those with no evidence at all, must be true as well. That's just faulty logic.
Now get yourself some half-decent psyops and contaminate the first group with supporting voices that emphasize weaker evidence, use poor logic, name-drop socially questionable sources, and go out of their way to sound ridiculous.
I find the levels bizarre. Chromebooks are highly exposed to physical attack. Keys in the cloud are not nearly as exposed. Yet people seem okay with level 1 for chromebooks but apparently want level 3 in the cloud?
I’d rather see a level 1 or level 2 auditable cloud solution, with at least source available.
This is so weird. The idea of an adversary covertly walking off with an IBM Mainframe or covertly bringing an electronics lab, a microscope, logic analyzers, glitching hardware, etc to the aforementioned mainframe is rather strange. Whereas someone doing that to a phone or a laptop or a game console is very likely.
If I wanted to store an important long term key in a secure facility, I would worry, first and foremost, about software attacks, attacks doable over a network, malicious firmware attacks, and maybe passively observed side channel attacks. Physical attacks would be a rather distant second.
The adversary will show up and badge in just like everyone else. They might have worked there for 20 years, or they might be an outside repair person or external consultant.
They will definitely fit in. They're supposed to be there.
It will be the most normal thing in the world. And you may never know their real purpose.
Sure. But the attacker needs to actually get in, which is considerably harder than getting into a hotel room. But more relevantly, the kinds of countermeasures that get you from level 1 to a higher level don’t seem likely to help at all: if someone evil-maids or otherwise fully compromises a machine hosting a FIPS 140-2 level 4 HSM, they likely get the unrestricted ability to perform cryptographic operations using keys protected by that HSM, but they get this by using the HSM’s normal API. If they can convince the HSM to export its keys to another HSM (oops) or to otherwise leak the key material, they get the key material. But this doesn’t seem like it has much to do with physical attacks against the HSM.
Now if someone evil-maid attacks the HSM itself, that’s a different story. Any good HSM should resist this, especially one found in a portable device. And this is because you can steal an entire important corporate laptop or other portable device without necessarily raising a quick alarm, whereas I have trouble imagining someone walking off with the HSM out of an IBM mainframe or with an AWS HSM without the loss being noticed immediately.
(To be fair, in the mainframe case, some crusty corporations seem to have a remarkable ability to fail to notice obvious crypto problems like their public facing certificates expiring. But a loss of an entire HSM from a secure large cloud datacenter will, at the very least, immediately trigger “elevated failure rates” or whatever they like to call it…)
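The distinction drawn above, that a host compromise yields the *use* of keys through the HSM's normal API rather than the keys themselves, can be sketched with a toy non-exportable-key object. This is purely an illustration; real HSMs enforce the property in hardware, not in a language runtime:

```python
import hmac, hashlib, secrets

class ToyHSM:
    """Illustration only: the key never leaves the object.
    Callers can sign with it, but there is no export path."""
    def __init__(self):
        self.__key = secrets.token_bytes(32)  # private, no getter exposed

    def sign(self, data: bytes) -> bytes:
        return hmac.new(self.__key, data, hashlib.sha256).digest()

    def verify(self, data: bytes, sig: bytes) -> bool:
        return hmac.compare_digest(self.sign(data), sig)

hsm = ToyHSM()
sig = hsm.sign(b"release-v1.2.3")    # a compromised host can still do this
assert hsm.verify(b"release-v1.2.3", sig)
assert not hasattr(hsm, "key")       # ...but finds no export API to call
```

So a level-4 enclosure stops key *extraction*, yet an attacker who owns the host can still mint valid signatures at will, which for many threat models is just as bad.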
Wiping for no reason: that could well be a difference between the firmware's view of the world versus your view, and I guess they just decided to err on the side of caution?
And low power alarms may well be a variation on that theme. Glitching the power supply has been a tool in the arsenal of reverse engineers for a long time so that sort of sensitivity may well make sense. Voltage spikes and drops can be very short, short enough for you not to see them on a DVM but on a memory scope with a trigger value set much lower than you might expect they'd show up with alarming regularity in some hardware that I've worked on. And that explained some pretty weird instability issues. Good power is rare enough that really sensitive hardware usually has power conditioning circuitry right up close to the consumer.
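The DVM-versus-scope point is easy to show with numbers: a multimeter averages over a long window, so a sub-millisecond brownout vanishes into the mean, while a per-sample trigger catches it. A toy simulation (the rail voltage, sample rate, and thresholds are all made up for illustration):

```python
# Simulate a 3.3 V rail sampled at 1 MHz with a 200 µs brownout to 2.5 V.
rate = 1_000_000
samples = [3.3] * rate                   # one second of clean rail
samples[500_000:500_200] = [2.5] * 200   # the 200 µs glitch

# "DVM" view: average over a 100 ms window that contains the glitch.
window = samples[450_000:550_000]
dvm_reading = sum(window) / len(window)

# "Scope trigger" view: compare every sample to a threshold.
glitches = sum(1 for v in samples if v < 3.0)

print(f"DVM reads {dvm_reading:.4f} V")        # ~3.2984 V, looks healthy
print(f"trigger fired on {glitches} samples")  # 200 samples below threshold
```

The averaged reading moves by under 2 mV, far inside normal tolerance, while the threshold trigger fires on all 200 glitched samples, which is why an HSM's brownout sensor can keep tripping on power that a handheld meter swears is fine.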
> Wiping for no reason: that could well be a difference between the firmware's view of the world versus your view, and I guess they just decided to err on the side of caution?
No. I said I've been in touch with technical support, and the manuals, docs, and their support are clear: it should not be wiping, and it has a backup battery too.
We've spent hours and hours testing, to validate the issue, and cause.
They likely have a firmware bug or a bad board design. And we've seen this with cards from different batches, bought years apart.
Their support is incompetent, and I say that with 30+ years of dealing with, and providing, tech support. They fail to read tickets, and even spend (supposedly) weeks running tests while ignoring vital data in the tickets and conveyed in support calls.
They. Are. Incompetent.
In terms of "issues with power": no. Not across dozens of servers, in different datacentres, and even with the card at rest, out of the server, on battery.
Understand, their job is to provide stability. HSM cards are useless if they randomly wipe while in use, under power, "just cause."
I find it weird that you're playing devil's advocate here, describing how hard this is. This is an enterprise-grade card, and people have been making reliable, safe HSMs for decades.
Hehe, ok! Clear case of faulty product then. Thanks for the extra context.
I'm not so much playing devil's advocate as I'm aware of how hard making such devices is, and the difference between 'user error' and 'incompetent staff/faulty product' can be hard to establish from a comment.
HSMs are mainly for compliance, where a customer needs to check a regulatory box because some rule says you must use an HSM. The more standard it is, the easier it is to demonstrate to the auditor that you've checked the box.
I'm not saying you are wrong, but I could make a website which claims some cloud provider uses my hardware too. Their website is irrelevant. Do we have a Google (or AWS/...) page regarding this?
> Note: Currently, all Cloud HSM devices are manufactured by Marvell (formerly Cavium). "Cavium" and "HSM manufacturer" are currently interchangeable in this topic.
https://www.prnewswire.com/news-releases/caviums-liquidsecur...