
Does having to maintain SlateDB as a consistent singleton (even with write fencing) make this as operationally tricky as a third-party DB?

It’s not great UX from that angle. I am currently working on coordination (through S3, not node-to-node communication), so that you can just spawn instances without thinking about it.
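
For illustration only (this is not SlateDB's actual protocol), a rough sketch of how S3 conditional writes can act as a fencing mechanism without any node-to-node communication. The bucket/key names are made up, and it assumes boto3 against a bucket that supports If-None-Match conditional PUTs:

    # Hypothetical sketch: a new writer claims an epoch with a conditional PUT.
    # If another writer already claimed that epoch, the PUT fails and we back off.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    BUCKET = "my-slatedb-bucket"  # made-up name

    def try_claim_epoch(epoch: int) -> bool:
        try:
            s3.put_object(
                Bucket=BUCKET,
                Key=f"fencing/epoch-{epoch:020d}",
                Body=b"",
                IfNoneMatch="*",  # only succeeds if no object exists at this key
            )
            return True  # we own this epoch; older writers can be fenced off
        except ClientError as e:
            if e.response["Error"]["Code"] in ("PreconditionFailed", "ConditionalRequestConflict"):
                return False  # another writer got there first
            raise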

Is this possibly an intentional reference to GNU/Linux, or unrelated?


Within the book itself the clacks system has its own technical protocol which is briefly touched upon. The "overhead" is essentially packet or request metadata.

From the L-Space wiki, GNU is a piece of metadata that means:

    G: Send the message on to the next Clacks Tower.
    N: Do not log the message.
    U: At the end of the line, return the message.

And yes, it is almost certainly a reference to GNU as in "GNU's Not Unix". =)

https://wiki.lspace.org/GNU_Terry_Pratchett


Quite intentional.


It's Terry Pratchett, so of course it's an intentional reference.


TCP performance gets quite poor over long distances. CDNs are very helpful if you're trying to make your site work well far away from your servers.


I think the bottleneck is rarely the lack of a CDN. Think about it: my server sits in Germany, my target audience is in the US, and my latency to the west coast is 150 ms. I can see that being a big deal in a competitive online game, but for website load performance it's less than the blink of an eye. The real bottleneck is usually a poorly configured page or some bloated JS.
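
For a rough sense of how that 150 ms compounds on a cold connection, a back-of-the-envelope calculation (assuming TLS 1.2 and no connection reuse; the figures are only illustrative):

    rtt_ms = 150                 # Germany -> US west coast, per the example above
    round_trips = 1 + 2 + 1      # TCP handshake + TLS 1.2 handshake + HTTP request
    print(round_trips * rtt_ms)  # ~600 ms before the first byte of the response
    # TCP slow start adds further round trips for larger responses, which is
    # exactly the cost a nearby CDN edge (or TLS 1.3 / connection reuse) avoids.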


Quite surprising. It does seem like you can get an HTTPS download with

    aws s3 cp --no-sign-request s3://download.opencontent.netflix.com/sparks/creative-commons-attribution-4-intl-public-license.txt .
Which hits the path-style bucket route at: https://s3.amazonaws.com/download.opencontent.netflix.com/sp...

"aws s3 ls" similarly requests: https://s3.amazonaws.com/download.opencontent.netflix.com?li...



Index Size

    Biscuit 277.09 MB
    Trigram 86 MB
    B-Tree  43 MB
So essentially you trade space for speed.


I think a lot of the original Temporal/Cadence authors were motivated by working on event-driven systems with retries. Those systems exhibited complex failure scenarios that they could not reasonably account for without slapping on more supervisor systems. Durable execution gives you a consistent viewpoint for reasoning about failures.

I agree that determinism/idempotency and the complexities of these systems are a tough pill to swallow. They certainly need to be suited to the task.
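
As a toy illustration of the idempotency requirement (generic Python, not Temporal's or Cadence's actual API): a step's side effect is keyed so that a retry or replay returns the recorded result instead of running the effect again.

    # Toy sketch: record each completed step so replays/retries don't redo side effects.
    # A real durable-execution engine persists this history; a dict stands in here.
    completed: dict[tuple[str, str], object] = {}

    def run_step(workflow_id: str, step_name: str, fn):
        key = (workflow_id, step_name)
        if key in completed:        # on retry/replay, return the recorded result
            return completed[key]
        result = fn()               # side effect runs once per (workflow, step)
        completed[key] = result     # a real system persists this before acking
        return result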


Great article for demystifying durable execution: https://lucumr.pocoo.org/2025/11/3/absurd-workflows/


I believe you are making use of gVisor’s userspace TCP implementation. I’m not sure if there is something similar in Rust that would be as easy to set up as this.


There isn't something as mature as gVisor afaik. https://github.com/smoltcp-rs/smoltcp implements many of the same abstractions as gVisor.


"Reply in the tone of Wikipedia" has worked pretty well for me


> > You can force an fsync after each messsage [sic] with always, this will slow down the throughput to a few hundred msg/s.

Is the performance warning in the NATS docs possible to improve on? Couldn't you still run fsync on an interval and queue up a certain number of writes to be flushed at once? I could imagine latency suffering, but batch throughput could be preserved to some extent?


> Is the performance warning in the NATS docs possible to improve on? Couldn't you still run fsync on an interval and queue up a certain number of writes to be flushed at once? I could imagine latency suffering, but batch throughput could be preserved to some extent?

Yes, and you shouldn't even need a fixed interval. Just queue up any writes while an `fsync` is pending; then do all those in the next batch. This is the same approach you'd use for rounds of Paxos, particularly between availability zones or regions where latency is expected to be high. You wouldn't say "oh, I'll ack and then put it in the next round of Paxos", or "I'll wait until the next round in 2 seconds then ack"; you'd start the next batch as soon as the current one is done.
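
A minimal group-commit sketch of that approach (illustrative Python, not how NATS actually implements it): writers queue entries, and a single flusher writes whatever has accumulated, issues one fsync for the whole batch, and only then acknowledges those writes.

    import os, threading

    class GroupCommitLog:
        def __init__(self, path):
            self.f = open(path, "ab")
            self.cond = threading.Condition()
            self.pending = []        # (data, done_event) pairs awaiting the next fsync
            threading.Thread(target=self._flusher, daemon=True).start()

        def append(self, data: bytes):
            done = threading.Event()
            with self.cond:
                self.pending.append((data, done))
                self.cond.notify()
            done.wait()              # returns once the entry is durably on disk

        def _flusher(self):
            while True:
                with self.cond:
                    while not self.pending:
                        self.cond.wait()
                    batch, self.pending = self.pending, []   # take everything queued so far
                for data, _ in batch:
                    self.f.write(data)
                self.f.flush()
                os.fsync(self.f.fileno())                    # one fsync covers the whole batch
                for _, done in batch:
                    done.set()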


Yes, this is a reasonably common strategy. It's how Cassandra's batch and group commit modes work, and Postgres has a similar option. Hopefully NATS will implement something similar eventually.


