
> unless it's open source and the servers decentralized, you are always trusting SOMEONE

Specifically, open-source and self-hostable. Open source doesn't save you if people can't run their own servers, because you never know whether what's in the public repo is the exact same thing that's running on the cloud servers.



You can, by having an attestation of the signed software components built up from the secure boot process, having the client device validate that this attestation corresponds to the known public version of each component, and randomizing client connections across the infrastructure.
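A minimal sketch of the client-side half of that check, with all names and values hypothetical: the client compares the measurements the server attests to against known-good hashes published for each released component (in a real system these would come from a public transparency log, and the attestation itself would be signed).

```python
import hashlib

# Published measurements of each signed boot-chain component.
# In practice these would be fetched from a public transparency log,
# not hard-coded; the values here are illustrative only.
KNOWN_GOOD = {
    "bootloader": hashlib.sha256(b"bootloader-v1").hexdigest(),
    "kernel": hashlib.sha256(b"kernel-v1").hexdigest(),
    "system-image": hashlib.sha256(b"system-image-v1").hexdigest(),
}

def validate_attestation(attested: dict) -> bool:
    """Accept the server only if every component it attests to
    matches a publicly known, auditable measurement."""
    return all(attested.get(name) == digest
               for name, digest in KNOWN_GOOD.items())

# A server attesting the published components passes...
assert validate_attestation(dict(KNOWN_GOOD))
# ...and one running a modified kernel fails the check.
tampered = dict(KNOWN_GOOD, kernel=hashlib.sha256(b"kernel-evil").hexdigest())
assert not validate_attestation(tampered)
```

The randomized-connection part matters because a provider can't easily serve a tampered image to one targeted user if clients land on arbitrary nodes and all nodes must pass the same public check.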

Setting aside the obvious "open source software isn't perfectly secure" attack scenarios, this would require a non-targeted hardware attack, in which the entire infrastructure would need to misreport the software or misrepresent the chain of custody.

I believe this is one of the protections Apple is attempting to implement here.


Usually this is done the other way around: servers verifying client devices using a chip the manufacturer put in them and fully trusts. They can trust it because it's virtually impossible for you (the user) to modify the behavior of this chip. However, you can't put such a chip in Apple's servers. So if you don't trust Apple, this improves the trust by... 0%.

Their hardware says it's been attested. Has it? Who knows? They control the hardware, so they can just make the server attest to whatever they want, even if it's not true. It'd be trivial to just use a fake hash for the system volume data. You didn't build the attestation chip. You will never find out.

Happy to be proven wrong here, but at first glance the whole idea seems like a sham. This is security theater. It does nothing.


If it were all a lie, Apple would lose enormous amounts of money to class-action lawsuits and regulatory penalties.

> It’d be trivial to just use a fake hash

You have to go deeper to support this. Apple is publishing source code to firmware and bootloader, and the software above that is available to researchers.

The volume hash is computed way up in the stack, subject to the chain of trust from these components.
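That chain-of-trust point can be sketched as a measured-boot style hash chain (a toy model, not Apple's actual scheme): each stage extends a running measurement with the next component before handing off, so the final value, which covers the volume hash, depends on every component below it.

```python
import hashlib

def extend(register: bytes, component: bytes) -> bytes:
    """Fold the next component's hash into the running measurement,
    in the style of a TPM PCR-extend operation."""
    return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

def measure(components) -> bytes:
    reg = b"\x00" * 32  # the register resets at power-on
    for c in components:
        reg = extend(reg, c)
    return reg

good = measure([b"firmware", b"bootloader", b"kernel", b"system-volume"])
# Swapping out any lower layer changes the final measurement, so a
# fake volume hash can't be reported without also faking every layer below.
bad = measure([b"firmware", b"evil-bootloader", b"kernel", b"system-volume"])
assert good != bad
```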

Are you suggesting that Apple will actually use totally different firmware and bootloaders, just to be able to run different system images that report fake hashes, and do so perfectly so differences between actual execution environment and attested environment cannot be detected, all while none of the executives, architects, developers, or operators involved in the sham ever leaks? And the nefarious use of the data is never noticed?

At some point this crosses over into “maybe I’m just a software simulation and the entire world and everyone in it are just constructs” territory.


I don't know if they will. It is highly unlikely. But theoretically, it is possible, and very well within their technical capabilities to do so.

It's also not as complicated as you make it sound here. Because Apple controls the hardware, and thus also the data passing into attestation, they can freely attest whatever they want - no need to truly run the whole stack.


It is as complicated as I make it sound. Technically, it's trivial, of course.

But operationally it is incredibly complicated to deliver and operate this kind of false attestation at massive scale.


Usually, attestation systems rely on neither side alone having everything needed to compute a result that matches the attestation requirements, and thus require that both a server-side and a client-side secret be involved in the attestation process.
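As a sketch of that two-secret idea (hypothetical names, not any vendor's real protocol): the verifier contributes a fresh nonce and the attester contributes its device secret, so neither side alone can precompute a valid response, and old responses can't be replayed.

```python
import hashlib
import hmac
import secrets

# Attester's side: a secret provisioned into the device at manufacture.
device_secret = secrets.token_bytes(32)

# Verifier's side: a fresh challenge, so captured responses can't be replayed.
nonce = secrets.token_bytes(16)

# The attester binds its secret to the verifier's challenge.
response = hmac.new(device_secret, nonce, hashlib.sha256).digest()

# The verifier, holding (or able to derive) the same secret,
# recomputes the expected value and compares in constant time.
expected = hmac.new(device_secret, nonce, hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)
```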

The big issue with Apple is that their attestation infrastructure is wholly private to them; you can't self-host it. (Android is a bit similar, in that applications using Google's attestation system have the same limitation, but there you can in theory set up your own.)


Attestation requires a root of trust, i.e. if data hashes are involved in the computation, you have to be able to trust that the hardware is actually using the real data here. Apple has this for your device, because they built it. You don't have it for their server, making the whole thing meaningless. The maximum information you can get out of this is "Apple trusts Apple".

Under the assumption that Apple is telling the truth about what the server hardware is doing, this could protect against unauthorized modifications to the server software by third parties.

If however, we assume Apple itself is untrustworthy (such as, because the US government secretly ordered them to run a different system image with their spyware installed) then this will not help you at all to detect that.


Attestation of software signed by who?

If apple holds the signing keys for the servers, can they not change the code at any time?


> exact same thing that's running on the cloud servers

What runs on the servers isn't actually very important. Why? Because even if you could somehow know with 100% certainty that what a server runs is the same code you can see, any provider is still subject to all kinds of court orders.

What matters is the client code. If you can audit the client code (or better yet, build your own compatible client based on API specs) then you know for sure what the server side sees. If everything is encrypted locally with keys only you control, it doesn't matter what runs on the server.
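A toy illustration of that last point (a one-time pad, chosen only because it fits in a few lines of stdlib Python; a real client would use an authenticated cipher such as AES-GCM): if the key never leaves the device, the server only ever handles ciphertext.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # One-time-pad XOR: the key must be as long as the data
    # and must never be reused.
    return bytes(a ^ b for a, b in zip(data, key))

message = b"my private note"
key = secrets.token_bytes(len(message))   # stays on the client device
ciphertext = xor_bytes(message, key)      # all the server ever receives

assert xor_bytes(ciphertext, key) == message  # only the key holder can recover it
```

Under this model, what the server runs is irrelevant to confidentiality: it can be subpoenaed, compromised, or malicious, and it still holds nothing readable.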


But in this use case of AI in the cloud, I suppose it's not possible to send encrypted data that only you hold the keys to, as that makes the data useless and no AI processing can happen in the cloud. So the whole point of AI in the cloud vs. AI on device goes away.


This is what the “attestation” bit is supposed to take care of—if it works, which I’m assuming it will, because they’re open sourcing it for security auditing.



