I don't quite understand the issue with public error enums. Distinguishing variants is very useful if some causes are recoverable or, when writing webservers, could be translated into different status codes. Often both are useful: something representing internal details for logging, and a public interface.

I agree. Is he really trying to say that e.g. errors for `std::fs::read()` should not distinguish between "file not found" and "permission denied"? It's quite common to want to react to those programmatically.
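
For example, matching on the `ErrorKind` (a minimal sketch; `read_config` and the fallback behavior are made up for illustration):

    use std::fs;
    use std::io::ErrorKind;

    fn read_config(path: &str) -> Vec<u8> {
        match fs::read(path) {
            Ok(bytes) => bytes,
            // Missing file is recoverable: fall back to an empty config.
            Err(e) if e.kind() == ErrorKind::NotFound => Vec::new(),
            // Permission problems are not recoverable here; surface them.
            Err(e) if e.kind() == ErrorKind::PermissionDenied => {
                panic!("cannot read {path}: {e}")
            }
            Err(e) => panic!("unexpected error reading {path}: {e}"),
        }
    }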

IMO Rust should provide something like thiserror for libraries, and also something like anyhow for applications. Maybe we can't design a perfect error library yet, but we can do waaay better than nothing. Something that covers 99% of uses would still be very useful, and there's plenty of precedent for that in the standard library.
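
Something along these lines (a sketch assuming the thiserror and anyhow crates; the names are illustrative):

    // Library side: a structured, non-exhaustive error type.
    #[derive(Debug, thiserror::Error)]
    #[non_exhaustive]
    pub enum FetchError {
        #[error("network failure")]
        Network(#[from] std::io::Error),
        #[error("invalid response: {0}")]
        InvalidResponse(String),
    }

    // Application side: anyhow erases the concrete type and adds context.
    fn main() -> anyhow::Result<()> {
        use anyhow::Context;
        let bytes = std::fs::read("config.toml").context("loading config")?;
        println!("read {} bytes", bytes.len());
        Ok(())
    }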


I doubt epage is suggesting that. And note that in that case, the thing distinguishing the cause is not `std::io::Error`, but `std::io::ErrorKind`. The latter is not the error type, but something that forms a part of the I/O error type.

It's very rare that `pub enum Error { ... }` is something I'd put into the public API of a library. epage is absolutely correct that it is an extensibility hazard. But having a subordinate "kind" error enum is totally fine (assuming you mark it `#[non_exhaustive]`).
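
I.e. the pattern looks roughly like this (a sketch, not any particular library's API):

    // The error type itself stays opaque, so fields (source, context,
    // backtrace, ...) can be added later without breaking anyone.
    #[derive(Debug)]
    pub struct Error {
        kind: ErrorKind,
    }

    impl Error {
        pub fn kind(&self) -> ErrorKind {
            self.kind
        }
    }

    // Callers can match on this, but `#[non_exhaustive]` forces a
    // wildcard arm, so adding variants isn't a breaking change.
    #[non_exhaustive]
    #[derive(Debug, Clone, Copy, PartialEq, Eq)]
    pub enum ErrorKind {
        NotFound,
        PermissionDenied,
        Other,
    }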


It's not uncommon to have it on the error itself, rather than on a details/kind auxiliary type. The AWS SDK does it (nested, even) [0][1], as do diesel [2] and password_hash [3].

[0] https://docs.rs/aws-smithy-runtime-api/1.9.3/aws_smithy_runt... [1] https://docs.rs/aws-sdk-s3/1.119.0/aws_sdk_s3/operation/get_... [2] https://docs.rs/diesel/2.3.5/diesel/result/enum.Error.html [3] https://docs.rs/password-hash/0.5.0/password_hash/errors/enu...


Why is it an extensibility hazard (assuming you mark the `pub enum Error` as non-exhaustive)?

I mean I don't see the difference between having the non-exhaustive enum at the top level vs in a subordinate 'kind'.


imo https://x.com/SadlyItsBradley/status/2001227141300494550 is a better demo than their own project page

Shipping base64 in JSON instead of a multipart POST is very bad for stream-processing. In theory one could stream-process JSON and base64... but only the JSON keys that come before the blob would be available at the point where you need to make decisions about what to do with the data.

Still, at least it's an option to put base64 inline inside the JSON. With binary, this is not an option and you must send it separately in all cases, even small binary...

You can still stream the base64 separately and reference it inside the JSON somehow, like an attachment. The base64 string is much more versatile.


Even with binary, you can store a binary inline inside of another one if it is a structured format with a "raw binary data" type, such as DER. (In my opinion, DER is better in other ways too, and (with my nonstandard key/value list type added) it is a superset of the data model of JSON.)

Using base64 means that you must encode and decode it, while binary data directly makes that unnecessary. (This is true whether or not it is compressed (and/or encrypted); if it is compressed then you must decompress it, but that is independent of whether or not you must decode base64.)


> Still, at least it's an option to put base64 inline inside the JSON. With binary, this is not an option and you must send it separately in all cases, even small binary...

There's nothing special about "text" or binary here. You can absolutely put binary inside other binary; you use a symbol that doesn't appear inside the binary, much like you do for text.

You use a divider, like " is for JSON, and a prearranged way to keep that symbol from appearing inside the inner binary (the same approach that works for text works here).

What do you think a zip file is? They're not storing compressed binary data as text, I can tell you that.


This reminds me that I just learned the other day that .a files are Unix ar archives, whose headers are plain text (and if all the bundled files are textual, there's no binary data in the bundle). I thought .a was just for static libraries for the longest time, and had no idea that it was actually an old archive format.

It may amuse you to learn that tar headers are designed as straight up text tables with fixed-width columns, marred only by the fact that modern implementations pad with 0s instead of spaces. The numbers are encoded as octal digits!
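
You can see it by decoding the size field of a header block; e.g. (a sketch, ignoring the leading-space padding some old implementations used):

    // The size field of a ustar header: 12 bytes at offset 124,
    // ASCII octal digits, NUL- or space-terminated.
    fn tar_size(header: &[u8; 512]) -> u64 {
        header[124..136]
            .iter()
            .take_while(|b| (b'0'..=b'7').contains(*b))
            .fold(0, |n, b| n * 8 + u64::from(b - b'0'))
    }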

Binary usually means arbitrary byte sequences so you can't choose a single delimiting character. The usual approaches are storing the length somewhere or picking a sufficiently long random sequence that it's vanishingly unlikely to occur in the payload.
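
Length-prefixing is the simple one; e.g. (a minimal sketch: 4-byte big-endian length, then the payload):

    use std::io::{self, Read, Write};

    // The payload may contain any bytes; no delimiter or escaping needed.
    fn write_frame<W: Write>(w: &mut W, payload: &[u8]) -> io::Result<()> {
        let len = u32::try_from(payload.len())
            .map_err(|_| io::Error::new(io::ErrorKind::InvalidInput, "frame too large"))?;
        w.write_all(&len.to_be_bytes())?;
        w.write_all(payload)
    }

    fn read_frame<R: Read>(r: &mut R) -> io::Result<Vec<u8>> {
        let mut len = [0u8; 4];
        r.read_exact(&mut len)?;
        let mut buf = vec![0u8; u32::from_be_bytes(len) as usize];
        r.read_exact(&mut buf)?;
        Ok(buf)
    }

The random-boundary approach is what multipart/form-data does with its boundary string.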

I don't get why using a binary protocol doesn't allow handling strings. What's the limitation?

Couldn't they ship pre-compromised, storing the RNG seed and private key at the factory?

It won't be as easy as that because you can generate a private key multiple times and notice it's the same.

However, yes, very limited entropy in the private key is much harder to detect, especially because on this kind of device you can't see the private key directly.


Devil’s advocate: How do they map that data to a user when you are buying through a maze of resellers?

They don't; they try against all the keys. There are at most a few billion of them.

see Dual_EC_DRBG


> All of these are speculative ideas, but at this point they’ve been circulating a bunch so should be pretty robust.

> looks at TCP congestion control literature

> closes tab

Eh, there are a few easy things one can try. Make sure to use a non-ancient kernel on the sender side (to get the necessary features), then enable BBR and TCP_NOTSENT_LOWAT (https://blog.cloudflare.com/http-2-prioritization-with-nginx...) to avoid buffering more than what's in flight, and then start dropping websocket frames when the socket says it's full.
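
On Linux that looks roughly like this (a sketch assuming the libc crate; the 16 KiB threshold is an arbitrary example, and the bbr module must be available):

    use std::net::TcpStream;
    use std::os::fd::AsRawFd;

    fn tune(sock: &TcpStream) -> std::io::Result<()> {
        unsafe {
            // Switch congestion control to BBR.
            let cc = b"bbr";
            if libc::setsockopt(
                sock.as_raw_fd(),
                libc::IPPROTO_TCP,
                libc::TCP_CONGESTION,
                cc.as_ptr().cast(),
                cc.len() as libc::socklen_t,
            ) != 0
            {
                return Err(std::io::Error::last_os_error());
            }
            // Don't queue more than 16 KiB beyond what's in flight.
            let lowat: libc::c_int = 16 * 1024;
            if libc::setsockopt(
                sock.as_raw_fd(),
                libc::IPPROTO_TCP,
                libc::TCP_NOTSENT_LOWAT,
                (&lowat as *const libc::c_int).cast(),
                std::mem::size_of::<libc::c_int>() as libc::socklen_t,
            ) != 0
            {
                return Err(std::io::Error::last_os_error());
            }
        }
        Ok(())
    }

After that, a non-blocking write returning WouldBlock is the signal to drop the next frame instead of queueing it.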

Also, with tighter integration with the H.264 encoder loop one could tell it which frames weren't sent and account for that in P-frame generation. But I guess that wasn't available with that stack.


NSA is pushing for PQ algos.

> They're just breaking shit to make the world a worse place.

Well, it's the people who want to MITM that started it; a lot of effort has been spent on a Red Queen's race ever since. If you humans would coordinate to stay in high-trust equilibria instead of slipping into lower ones, you could avoid spending a lot on security.


Copying between GPUs is a thing; that's how integrated/discrete GPU switching works. So if the drivers provide full Vulkan support, then rendering on the Nvidia GPU and copying to another GPU with outputs could work. And it's an ARM CPU, so to run most games you need emulation (Wine+FEX), but Valve has been polishing that for their Steam Frame... so maybe?

People have gotten games to run on a DGX Spark, which is somewhat similar (GB10 instead of GH200).


It's not possible to correctly implement any cryptographic algorithm in any high-level language with an optimizing backend where timing is not considered an observable/preserved property. Currently this includes anything backed by LLVM or GCC. There's a proposal to introduce such guarantees through a new builtin in LLVM (https://github.com/llvm/llvm-project/pull/166702), though even those could still be broken by post-build optimizers, e.g. for wasm.
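
The canonical illustration: a comparison written to be constant-time, which nothing in the language's semantics prevents the optimizer from turning back into an early-exit loop that leaks the match length (illustrative sketch):

    // Intended to take the same time regardless of where inputs differ.
    // The compiler is free to rewrite this into a short-circuiting
    // comparison, because timing is not a property it must preserve.
    fn ct_eq(a: &[u8], b: &[u8]) -> bool {
        if a.len() != b.len() {
            return false;
        }
        let mut diff = 0u8;
        for (x, y) in a.iter().zip(b.iter()) {
            diff |= x ^ y;
        }
        diff == 0
    }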

