suprasam's comments | Hacker News

ZeroFS doesn't exploit ZFS's strengths: there is no native ZFS support, just an afterthought of NBD + a SlateDB LSM. That's good for small burst workloads where everything is kept in memory for LSM batch writes, but once compaction hits, all bets are off on performance, and I'm not sure about crash consistency; it is playing with fire. ZFS special vdev + ZIL on SSD is much safer, with no need for an LSM. MayaNAS serves ZFS metadata at SSD speed, while large blocks get throughput from high-latency S3 at network speed.
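For anyone unfamiliar with that layout, here is a minimal sketch of a pool with an SSD special vdev and a dedicated SSD log device (ZIL); all pool and device names below are placeholders:

  # Bulk data on raidz2 HDDs, metadata/small blocks on a mirrored SSD special
  # vdev, and the ZIL on a separate SSD log device.
  zpool create tank raidz2 sda sdb sdc sdd \
    special mirror nvme0n1 nvme1n1 \
    log nvme2n1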


ZeroFS author here.

LSMs are “for small burst workloads kept in memory”? That’s just incorrect. “Once compaction hits all bets are off” suggests a misunderstanding of what compaction is for.

“Playing with fire,” “not sure about crash consistency,” “all bets are off”

Based on what exactly? ZeroFS has well-defined durability semantics, with guarantees much stronger than those of local block devices. If there's a specific correctness issue, name it.

“ZFS special vdev + ZIL is much safer”

Safer how?


Yes, that is the value prop: cheap S3 instead of expensive EBS.

  EBS limitations:
  - Per-instance throughput caps
  - Pay for full provisioned capacity whether filled or not

 S3:
  - Pay only for what you store
  - No per-instance bandwidth limits, as long as you have a network-optimized instance
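A rough back-of-envelope, assuming roughly current us-east-1 list prices (gp3 EBS at about $0.08/GB-month provisioned, S3 Standard at about $0.023/GB-month stored; check current pricing) and illustrative capacities:

  # 10 TB provisioned on EBS vs. 5 TB actually stored in S3
  echo "EBS gp3, 10 TB provisioned: $(echo '10240 * 0.08' | bc) USD/month"    # ~819
  echo "S3 Standard, 5 TB stored:   $(echo '5120 * 0.023' | bc) USD/month"    # ~118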


For RDBMS pages on object storage - you might be thinking of Neon.tech. They built a custom page server for PostgreSQL that stores pages directly on S3.


I hope you are not keeping that massive storage footprint on the public cloud; if you are, you would need MayaNAS to reduce storage costs. For an S3 frontend, use the MinIO gateway, which serves the S3 API from your ZFS filesystem.
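If it helps, a rough sketch of that setup using MinIO's NAS gateway mode (note: gateway mode has since been deprecated upstream, and the credential variable names have varied across releases; the dataset path and credentials are placeholders):

  export MINIO_ROOT_USER=admin
  export MINIO_ROOT_PASSWORD=change-me
  # Serve the contents of a ZFS dataset over the S3 API (default port 9000)
  minio gateway nas /tank/exports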


It is all part of the ZFS architecture, with two tiers:

  - Special vdev (SSD): all metadata + small blocks (configurable threshold, typically <128KB)
  - Object storage: bulk data only

If the workload is randomized 4K small data blocks, that's SSD latency, not S3 latency.
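For concreteness, the threshold is the standard OpenZFS special_small_blocks property; a minimal example (the dataset name is a placeholder):

  # Blocks of 128K or smaller are allocated on the SSD special vdev; larger
  # blocks go to the pool's bulk storage (here, object storage).
  zfs set special_small_blocks=128K tank/data
  zfs get special_small_blocks tank/data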


Yes, this is a core use case that ZFS fits nicely. See slide 31 "Multi-Cloud Data Orchestration" in the talk.

Not only backup but also DR site recovery.

  The workflow:

  1. Server A (production): zpool on local NVMe/SSD/HD
  2. Server B (same data center): another zpool backed by objbacker.io → remote object storage (Wasabi, S3, GCS)
  3. zfs send from A to B - data lands in object storage

  Key advantage: no continuously running cloud VM. You're just paying for object storage (cheap) not compute (expensive). Server B is in your own data center - it can be a VM too.
For DR, when you need the data in cloud:

  - Spin up a MayaNAS VM only when needed
  - Import the objbacker-backed pool - data is already there
  - Use it, then shut down the VM
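A hedged sketch of steps 1-3 using standard zfs commands (pool and host names are hypothetical: tank is the production pool on Server A, objpool is the objbacker-backed pool on Server B):

  # Initial full replication into the objbacker-backed pool
  zfs snapshot -r tank@dr-1
  zfs send -R tank@dr-1 | ssh serverB zfs recv -F objpool/backup

  # Subsequent runs ship only the blocks changed since the last snapshot
  zfs snapshot -r tank@dr-2
  zfs send -R -i tank@dr-1 tank@dr-2 | ssh serverB zfs recv -F objpool/backup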


Could you do this with two separate zpools on the same server?

    zfs send -R localpool@[snapshot] | zfs recv -F objbackerpool
Is there a particular reason why you'd want the objbacker pool to be a separate server?


New ZFS file system on object storage from https://www.zettalane.com


Experience very high throughput (>3 GB/s), even on a single VM instance with no traditional disk resources other than S3 object storage.


Demo regarding unified ARC & Pagecache https://youtu.be/be0ph4b9vUE


Is that video a late April Fools' joke? I haven't watched it fully yet, but a presentation on ZFS on Windows in an Internet Explorer window? Wtf?


You know, you say that, but...

https://github.com/openzfsonwindows/ZFSin

(I am aware this is not the codebase the presentation is running on or about.)


Didn't know that anyway. Thanks for the link


The fun part is that you can even develop from Linux using the MinGW32 cross-compile environment, which produces native Windows programs.

Crossmeta FUSE also includes sshfs and fuse-nfs for remote file access, and s3backer to connect to any S3-compatible cloud storage.
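For reference, the standard Linux-style invocations look roughly like this (hosts, buckets, and paths are placeholders, and Crossmeta's Windows invocation may differ; check its docs):

  # Mount a remote directory over SSH with sshfs
  sshfs user@fileserver:/export/projects /mnt/projects

  # Present an S3 bucket as a block-device-like file via s3backer
  # (flags from memory; consult the s3backer man page)
  s3backer --blockSize=256k --size=1t my-bucket /mnt/s3backer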

All Crossmeta file systems are fully visible to Windows Subsystem for Linux (WSL), but the integration could be better; you can help by voting on https://wpdev.uservoice.com/forums/266908-command-prompt-con...

