Hacker News | _ananos_'s comments

wasn't familiar with proot -- from a quick look, I think proot is a fancy chroot, which, in turn, is kind of "the first step" towards a generic container.

to achieve the isolation that gVisor offers, you would have to intercept syscalls, create separate mount/user/net namespaces, etc.
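for a taste of just the namespace half (not the syscall interception), util-linux's unshare(1) gives a rough sketch -- the command below is illustrative and needs a kernel with unprivileged user namespaces enabled:

```shell
# rough sketch: new user+mount+net namespaces, mapping ourselves to root inside
# (this is only the namespace half; gVisor additionally intercepts syscalls)
unshare --user --map-root-user --mount --net sh -c 'id -u' \
  || echo "unprivileged user namespaces unavailable"
```

inside the new user namespace `id -u` prints 0, even though no real privileges were gained on the host.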

regardless, I don't think proot is really related to gVisor ;)


It is, though: it has user-space implementations of chroot, mount, and kernel syscalls. You can even run a Debian image built for a later kernel on an older Linux system
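for illustration, a hedged sketch of a typical proot invocation (the rootfs path is hypothetical; -r sets the guest root, -b bind-mounts host paths in, and -k fakes the kernel release reported by uname, which is what lets a newer userland run on an older kernel):

```shell
# hypothetical rootfs path; proot needs no root privileges at all
command -v proot >/dev/null \
  && proot -r ./debian-rootfs -b /proc -b /dev -k 6.1.0 /bin/ls / \
  || echo "proot not installed"
```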


well, jokes aside, what you're describing is kind of what a "secure" (with many air/literal quotes) MCP/agentic architecture looks like :D

In this context, we're experimenting with gVisor on various platforms, and we're preparing a demo for KubeCon about a fine-grained sandboxing approach for AI agent tasks spawned from a sandboxed agent.


yeap -- compute would be nearly the same. I suspect you need some kind of I/O to make your compute useful (get input for the computation, produce output, etc.), so, still, this would have a negative effect overall.
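a quick-and-dirty way to feel that I/O effect (very rough -- numbers will vary wildly) is to time the same synchronous write on the host and inside the sandbox; synchronous writes exercise exactly the syscall path that gets interposed:

```shell
# run once on the host and once inside the gVisor sandbox, then compare the
# throughput dd reports; conv=fsync forces the data through the syscall path
dd if=/dev/zero of=./io-test.bin bs=4k count=256 conv=fsync 2>&1
rm -f ./io-test.bin
```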


the simplest one (and the one we're targeting) is multi-tenant services. You want to sandbox your service so that it doesn't affect the rest of the services running.

<shameless plug>We're building a container runtime to do exactly this, and comparing alternatives is how we got here: https://github.com/urunc-dev/urunc</shameless plug>
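for context, this drop-in style is how sandboxed runtimes usually plug in; e.g. with gVisor (assuming its runsc runtime has already been registered with the Docker daemon -- that setup is not shown here), picking the sandbox is a single flag:

```shell
# hypothetical: requires runsc installed and registered in /etc/docker/daemon.json
command -v docker >/dev/null \
  && docker run --rm --runtime=runsc alpine uname -r \
  || echo "docker (or runsc) not available here"
```

inside gVisor, `uname -r` reports the sandbox's emulated kernel version rather than the host's.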


> given that the target is a Raspberry Pi?

Why one would use gVisor is clear, but why would one do that on an RPi?


a number of reasons -- power budget, form factor, experimenting as a testbed for more "elaborate" setups (like robotics combined with a low-end TPU like the Coral, or a Jetson Nano)

consider that you can take advantage of all the cloud-native goodies, all wrapped up in a 10x5 box drawing 5-10W (or 25-30W if you consider Jetson boards).


well, the tricky detail here (which we do not mention in the post, our bad) is that we started from the Raspbian config (cp /boot/config ... .config && make oldconfig), which enables most modules, and that's why it took longer.
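spelled out, that flow plus the parallel build looks roughly like the below (the config path is a guess -- the exact filename varies per kernel package, and on some systems the running config is also at /proc/config.gz if CONFIG_IKCONFIG_PROC is set):

```shell
# start from the distro's shipped config (exact path/filename varies)
cp /boot/config .config            # or: zcat /proc/config.gz > .config
make oldconfig                     # answer prompts for options new to this tree
make -j"$(nproc)"                  # parallel build across all available cores
```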

But yeap, good point about using the -j flag -- it really speeds up the build!


indeed! thanks for that. Could it be malformed SEO/robots metadata (or whatever these things are called) on our website?


If I had to guess, I'd say the submission title is not very good, not very descriptive. Not only that, but you submitted it with a title that is different from the title on the site. That's usually not a good idea.


thanks, good to know!


thanks for your comment & suggestion!

I'll drop the HN admins an email to make sure I'm not missing anything.


data corruption, since fsync on the host is essentially a no-op. The VM's filesystem thinks the data is persisted on disk, but it isn't -- and the pod running on the VM thinks the same …
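the contract being broken here is that fsync(2) must not return until the data is on stable storage; a minimal way to at least exercise that path (it can't prove durability -- only a power cut does that) is something like:

```shell
# write one block and force it through fsync; on a host where fsync is a
# no-op this returns just as happily, which is exactly the problem
dd if=/dev/zero of=./durable.bin bs=4k count=1 conv=fsync \
  && echo "fsync returned -- durability still not guaranteed by this alone"
rm -f ./durable.bin
```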


well, indeed -- we should have found the proper parameters to make etcd wait for quorum (again, I'm stressing that it's a single-node cluster -- banging my head to understand who else needs to coordinate with the single node ...)


well, the actual issue (IMHO) is that this meta-orchestrator (Karmada) needs quorum even for a single-node cluster.
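for what it's worth, a plain single-member etcd (hypothetical standalone invocation sketched below, i.e. not whatever Karmada embeds) has a quorum of one and nobody else to wait for; Raft quorum for N members is floor(N/2)+1:

```shell
# hypothetical single-member etcd invocation (these are real etcd flags):
#   etcd --name node0 \
#        --initial-cluster node0=http://127.0.0.1:2380 \
#        --initial-cluster-state new
# raft quorum for a cluster of N members is floor(N/2)+1:
n=1
echo "quorum for $n member(s): $(( n / 2 + 1 ))"
```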

The purpose of the demo wasn't to show consistency, but to describe the policy-driven decision/mechanism.

What hit us in the first place (and I think this is what we should fix) is the fact that a brand-new NUC-like machine, with a relatively new software stack for spawning VMs (Incus / ZFS, etc.), behaves so badly that it can produce such hiccups in disk I/O access...


