
I think this is really close to the way nix2container works (https://github.com/nlewo/nix2container). nix2container generates metadata at build time and streams the required data at runtime.

At build time, it generates a JSON file describing the image metadata and the location of the layer data. At runtime, it consumes this JSON file to stream the layer data and image configuration to a destination. This is implemented by adding a new transport to Skopeo. Thanks to this implementation, nix2container doesn't need to handle all the various destinations itself, since that is managed by Skopeo.

Recently, we introduced a mechanism to also produce this kind of JSON file for the base image (see https://github.com/nlewo/nix2container?tab=readme-ov-file#ni...).

I'm pretty sure this added (not yet upstreamed) transport could be useful in other, similar contexts, such as Bazel or Guix.

I'm the nix2container author and I'd be glad to discuss it with you if you think this Skopeo transport could be useful for your project!

(btw, your blog post is pretty well written!)


In the same kind of spirit: when I was the tech team manager at a company without any transparent salary policy, I practiced what I called "symmetrical salary management": whenever I knew the salary of a colleague on my team, I told them my salary. I also asked them to keep my salary private, just as I kept their salaries private.

I think this is a pretty important requirement to build trust in a team.


More precisely, this is the price per person, but if you travel alone, you have to share a room with someone else. Otherwise you would have to book a double room for yourself, which costs about 5400€.


> With no way of splitting up the Nix dependencies into separate layers

nix2container [1] is actually able to do that: you can explicitly build layers containing a subset of the dependencies required by your image. An example is provided in this section: https://github.com/nlewo/nix2container?tab=readme-ov-file#is...

For instance, if your images use bash, you can explicitly create a layer containing the bash closure. This layer can then be shared across all your images and is only rebuilt and repushed if the bash closure changes.
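Here is a minimal sketch of that, based on the examples from the README (the image name and entrypoint are arbitrary):

    nix2container.buildImage {
      name = "my-app";
      # An explicit layer containing only the bash closure: it can be
      # shared by several images and is only rebuilt when bash changes
      layers = [
        (nix2container.buildLayer { deps = [ pkgs.bashInteractive ]; })
      ];
      config = {
        entrypoint = [ "${pkgs.bashInteractive}/bin/bash" ];
      };
    }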

> > pull in dependencies often results in massive image sizes with a single /nix/store layer

This is the case for the basic nixpkgs.dockerTools.buildImage function, but it is not true for nix2container, nor for nixpkgs.dockerTools.streamLayeredImage. Instead of writing the layers to the Nix store, these tools build a script that pushes the image using existing store paths (which are Nix runtime dependencies of that script). The nix2container implementation builds a JSON file describing the Nix store paths of all layers, and Skopeo consumes this JSON file to push the image (to a Docker daemon, a registry, podman, ...).
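In practice (a sketch, assuming a flake exposing these images under hypothetical attribute names), pushing looks like:

    # nix2container: run the generated Skopeo-based push script
    nix run .#myImage.copyToDockerDaemon

    # streamLayeredImage: the build result is a script that streams
    # the image tarball to stdout
    nix build .#myStreamedImage && ./result | docker load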

(disclaimer: I'm the nix2container author)

[1] https://github.com/nlewo/nix2container


Just wanted to say thanks for nix2container. I’ve been using it to do some deploys to AWS (ECR) and my iteration time between builds is down to single digit seconds.


We’ve had issues with docker image sizes and have been meaning to take some time to experiment with nix2container. Thanks for your work!


And regarding this part of the article:

> Particularly with GitOps and Flux, making changes was a breeze.

I'm writing comin [1], which is GitOps for NixOS machines: you push your changes with Git and your machines fetch and deploy them automatically.

[1] https://github.com/nlewo/comin
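To give an idea of the workflow, enabling comin on a machine is a small NixOS module declaration (a sketch; the repository URL is a placeholder and option names may evolve):

    services.comin = {
      enable = true;
      remotes = [{
        name = "origin";
        # comin periodically fetches this repository and deploys the
        # NixOS configuration it contains
        url = "https://github.com/your-org/your-infra.git";
      }];
    };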


> Would Nix work well with GitHub Actions?

You can use Nix with GitHub Actions since there is a Nix GitHub Action: https://github.com/marketplace/actions/install-nix. Every time the workflow is triggered, Nix evaluates everything, but thanks to its caching (which needs to be configured), it only rebuilds the targets that have changed.
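As a sketch, a minimal workflow could look like this (here with the optional Cachix action for binary caching; the cache name is a placeholder):

    name: nix-ci
    on: push
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: cachix/install-nix-action@v27
          - uses: cachix/cachix-action@v14
            with:
              name: your-cache
              authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
          - run: nix build
          - run: nix flake check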

> How do you automate running tests and deploying to dev on every push

Nix is a build tool and its main purpose is not to deploy artifacts. There are, however, a lot of tools to deploy artifacts built by Nix: https://github.com/nix-community/awesome-nix?tab=readme-ov-f...

Note there are also several Nix CIs that can do a better job than raw GitHub Actions, because they are designed for Nix (Hydra, Garnix, Hercules, ...).


We have an open issue about supporting non-flake deployments: https://github.com/nlewo/comin/issues/30


No, it was initially developed to manage a "COMmunity INfrastructure", and it sounds like the word "coming". I didn't know this meme, but it's a nice coincidence because this infrastructure is actually a kind of "CHATONS" [1] ("kittens" in French)! Thanks for the reference, which could be useful for a future logo!

[1] https://www.chatons.org/en


The main issue is using configuration files that reside at well-known locations in the filesystem. This is like a global variable in a codebase (something we generally try to avoid). Instead, the configuration file should be explicitly provided as a command-line argument. Systemd sandboxing can also be useful to ensure the program only uses the expected set of files (see the sketch after the example below).

For instance, on my NixOS machine, the Nginx configuration is not in `/etc/nginx` but is explicitly provided, so it can be discovered with ps:

    $ ps aux | grep nginx
    nginx: master process /nix/store/9iiqv6c3q8zlsqfp75vd2r7f1grwbxh7-nginx-1.24.0/bin/nginx -c /nix/store/9ffljwl3nlw4rkzbyhhirycb9hjv89lr-nginx.conf
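And for the sandboxing part, here is a minimal sketch of what it can look like with standard systemd hardening options on NixOS (the service name and writable path are hypothetical):

    systemd.services.my-app.serviceConfig = {
      # Mount the whole filesystem hierarchy read-only for this
      # service, except the API pseudo-filesystems (/dev, /proc, /sys)
      ProtectSystem = "strict";
      # Hide users' home directories from the service
      ProtectHome = true;
      # The only location the service is allowed to write to
      ReadWritePaths = [ "/var/lib/my-app" ];
    };

The explicitly passed /nix/store config file stays readable, since ProtectSystem remounts paths read-only rather than hiding them.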


> This looks like a global variable in a codebase (something we generally try to avoid).

Aren't they more like global constants than variables? Loaded at startup, and never changed during that run of the program. (With the exception of being explicitly re-read on SIGUSR1 for daemon-like programs.)

And global consts, or #defines, or whatever, are things we generally don't try to avoid?


It's not a bad idea, but it's not applicable to every piece of software. I don't think that passing a config file to every git command would be convenient.


You can change the commandline string at runtime. You could inject a fake "-c correct/path" even if it's not there. (That's useful for other things too, like injecting the git revision of the app into the commandline)


How long does it take you to type out that path?


/nix/store/9ii<tab>nginx.conf


> The key factor behind our decision was the realization that while Docker images are industry standard, moving around 100s of megabytes of images seems unnecessarily heavy-handed when we just need to synchronize a small change.

I think the culprit is more the GitHub Actions cache than Docker, since it seems hard to get clean cache management there. I'm not sure about caching Docker image layers, but caching the Nix store with GitHub Actions is pretty complicated (I'm not even sure it's possible): this means all required Nix store paths have to be downloaded on each run, but I consider this a GitHub Actions cache limitation.

So, did you consider using another CI, which offers better caching mechanisms?

With a CI able to preserve the Nix store (Hydra or Hercules [1], for instance), I think nix2container (author here) could also fit almost all of your requirements ("composability", reproducibility, isolation) and maybe provide better performance, because it is able to split your application into several layers [2][3].

Note that I'm pretty sure a lot of Docker CIs also allow building Docker images efficiently.

[1] https://hercules-ci.com/

[2] https://grahamc.com/blog/nix-and-layered-docker-images

[3] https://github.com/nlewo/nix2container/blob/85670cab354f7df6...


There's been a recent Launch HN of Depot.dev [1] - I've integrated it quickly into my GitHub Actions workflow and it's blazingly fast (13x speedups for me). It was also a drop-in replacement, since I was using Docker Bake and the Docker Action, and Depot mimics them almost fully (except the SBOM and provenance bits). It also works with Google Cloud Workload Identity Federation, so image pushes to Artifact Registry didn't need any tweaking.

[1] https://news.ycombinator.com/item?id=34898253

Disclaimer: not affiliated, a happy paying customer.


Thanks for the interesting links - I'll check them out! We would need not just another CI but also another container platform, because launching a Docker container is also slow.

Irrespective of the CI, I believe all cached Docker layers will need to be downloaded onto the build machine before the image can be rebuilt.

Still, I believe it is possible to build and deploy faster even with a "Docker image only" design, and it's something we are still looking at. The question is what the lower bound is here - it would be hard to beat "sync a file to a warm container and run it". Pex gives us a pretty good lower bound that is also container-platform agnostic.


> I believe all cached Docker layers will need to be downloaded onto the build machine before it can be rebuilt

Docker making some sort of layer-sharing mechanism that constantly distributes layers to all build runners would be worth some cash, I reckon.


> would be hard to beat "sync a file to a warm container and run it"

It depends on the size of your Pex file (I don't think you mentioned it; sorry if I missed the info). With a Docker/OCI image, it would be possible to create a layer containing only the Python files of your application (without the dependencies and interpreter). (I say "would be possible" because that's currently not easy to achieve with Nix, for instance.)

