Even after 40 years of battle-hardening, buffer overflow and double-free vulnerabilities have been discovered in it recently, exactly the classes of bugs Rust protects against. The sudoedit one was pretty bad. https://www.sudo.ws/security/advisories/
It's funny that converting the first example to the second is a common thing compilers do: static single assignment [0], which makes various optimizations easier to reason about.
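A toy illustration of the renaming idea, written as plain Python just to show the shape of the transformation (real compilers do this on an IR, with phi nodes at control-flow joins):

```python
# Sketch of SSA renaming: every reassignment of a variable becomes a
# definition of a fresh name, so each name is assigned exactly once
# and the data flow between values is explicit.

# Non-SSA form: `total` is reassigned twice.
def non_ssa():
    total = 1
    total = total + 2
    total = total * 3
    return total

# SSA form: each assignment gets its own name.
def ssa():
    total0 = 1
    total1 = total0 + 2
    total2 = total1 * 3
    return total2

assert non_ssa() == ssa() == 9
```

Once each name has exactly one definition, optimizations like constant propagation and dead-code elimination become simple lookups instead of dataflow puzzles.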
I've had LLMs cite me bullshit many times: links that don't exist, while claiming they do. One even cited a very realistic git commit log entry for a feature that never existed.
I've had issues in a database with billions of rows where the PKs were UUIDs. The indices on the PK, and the foreign keys from other tables pointing to it, were pretty big, so big that they didn't all fit in memory. For example, we had an index on (customer_id, document_id), both UUIDv4. The DB didn't have UUID support, so they were stored as strings: just 1 billion rows took ~30 GiB of memory for the PK index, 60 GiB for the composite indices, etc. Native UUID support, or storing them as raw bytes, might have halved that, but eventually it would have become too big anyway.
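A quick back-of-envelope (key payload only, ignoring B-tree page overhead, pointers, and fill factor) shows where the string-vs-bytes gap comes from:

```python
import uuid

u = uuid.uuid4()

# Canonical text form: 32 hex digits + 4 hyphens = 36 bytes per key.
assert len(str(u)) == 36

# Raw binary form: 16 bytes per key.
assert len(u.bytes) == 16

# Rough size of the key material alone for 1 billion PK entries:
billion = 1_000_000_000
gib = 1024 ** 3
print(f"as text:  {36 * billion / gib:.1f} GiB")  # ~33.5 GiB
print(f"as bytes: {16 * billion / gib:.1f} GiB")  # ~14.9 GiB
```

That lines up with the ~30 GiB figure above, and with the observation that bytes would roughly halve it without changing the fundamental growth problem.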
If you needed to look up, say, the 100 most recent documents, that would require ~100+ disk seeks at random locations just to read the index, due to the random nature of UUIDv4. If the keys were sequential, or even just semi-sequential, that would cut it down to just a few seeks, and those index pages would be more likely to be cached since most hits go to recent rows. Having keys roughly ordered by time would also help with e.g. partitioning: with no partitioning, as the table grows, every lookup still has to traverse a B-tree full of entries from 5 years ago; with partitioning by year or year-month it only has to look at a small subset of that, which could easily fit in memory.
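A minimal sketch of a semi-sequential key, similar in spirit to UUIDv7: a millisecond timestamp prefix followed by random bits, so byte-wise key order roughly follows creation order. (The function name and layout here are illustrative, not any particular library's API.)

```python
import os
import time

def time_ordered_id() -> bytes:
    # 48-bit big-endian millisecond timestamp + 80 random bits = 16 bytes.
    # Keys created later compare greater byte-wise, so inserts and
    # "most recent N" scans stay clustered in one region of the B-tree.
    ms = int(time.time() * 1000)
    return ms.to_bytes(6, "big") + os.urandom(10)

a = time_ordered_id()
time.sleep(0.01)  # ensure a later millisecond timestamp
b = time_ordered_id()
assert a < b  # byte-wise order follows creation order
```

With keys like this, the "100 most recent documents" query touches a handful of adjacent, likely-cached index pages instead of 100 random ones.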
I had similar issues: RAID 1 on two HDDs, and the server would randomly reboot and be slow because it was resyncing the array. And you have to pay extra to get new rather than refurbished drives.
It’s great for throwaway machines, e.g. CI. But don’t rely on them
In Hetzner's defence, it happened twice on RAID 1 setups on one of our servers, and after dropping a ticket basically saying "look, this is the second time, can you give us a drive that isn't a dinosaur please" they did put a brand new one in.
These days I'd take their Ampere VPS servers over the dedicated ones though, the performance and reliability is way better (mostly just due to it being brand new hardware).
This. The main trick, outside of just bigger hardware, is smart batching. E.g. if one user asks why the sky is blue and another asks what to make for dinner, both queries go through the same transformer layers and the same model weights, so they can be answered concurrently for very little extra GPU time. There are also ways to continuously batch requests together so they don't have to be issued at the same time.
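The batching idea can be sketched in a few lines. This is a pure-Python stand-in for a GPU matmul (the weights and "embeddings" are toy values): one call over a batch reads the shared weights once and serves every request row.

```python
# Toy sketch of batched inference: two independent "requests" pass
# through the same weights in one call, amortizing the cost of
# loading the weights across all rows in the batch.

WEIGHTS = [[0.5, -1.0], [2.0, 0.25]]  # shared "model" weights (2x2)

def layer(batch):
    # batch: list of input vectors; one pass over WEIGHTS serves all rows
    return [[sum(w * x for w, x in zip(row, vec)) for row in WEIGHTS]
            for vec in batch]

req_a = [1.0, 2.0]  # "why is the sky blue" (toy embedding)
req_b = [3.0, 4.0]  # "what to make for dinner" (toy embedding)

# One batched call gives the same answers as two separate calls.
assert layer([req_a, req_b]) == layer([req_a]) + layer([req_b])
```

Continuous batching extends this by letting new requests join the batch at each decoding step instead of waiting for the current batch to finish.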