I'm currently looking for a Linux distribution to run in an immutable VM on macOS. It will only be used to run a web browser, since a browser is the primary way your machine can get compromised.
Since it is immutable, restarting the VM will clear all files back to a clean slate.
macOS has app sandboxing built-in, but it is not as good as a VM.
I’m curious why this isn’t a more popular setup? Running a browser in an isolated VM seems like it should be a best practice. Does anyone else run a similar setup?
Security theatre. If your browser is attacked by a 0-day of any sort, or malware or whatever, it will have access to the shared credentials and information inside your browser. This means any and all cookies, shared logins, account access, tab information: it's all in play. That's where all the actual value is; why would I care whether it's inside a VM when I can get the credentials to your mail provider from the browser itself and just exfiltrate? It doesn't matter if you restart the browser once or a million times; any time it holds sensitive information, it is a target. Unless you plan to literally restart/wipe after every interaction with every origin where sensitive information is exposed.
But if you're that careful, what is the VM really doing for you, and why the hell are you even exposing yourself that much? Just use Lynx or something.
The real solution is this: install Firefox, install NoScript to nuke all JavaScript, install uBlock Origin too, and get a password manager. Selectively allow webpage interactivity as necessary. The world isn't a Tom Clancy novel, so you don't actually need to do anything more than this to be very secure and on top of almost all active threats.
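For what it's worth, a minimal sketch of the profile side of that setup as a `user.js` (these are real Firefox pref names, but the exact selection is my own assumption; NoScript and uBlock Origin are installed separately as extensions):

```javascript
// Hypothetical user.js sketch for a hardened Firefox profile.
// NoScript handles the default-deny JavaScript policy itself, so it is
// not configured here; these prefs only cover profile-level state.
user_pref("privacy.sanitize.sanitizeOnShutdown", true); // wipe history/cache on exit
user_pref("network.cookie.lifetimePolicy", 2);          // session-only cookies
user_pref("signon.rememberSignons", false);             // defer to the external password manager
```

Drop the file into the profile directory and Firefox applies the prefs on startup.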
Ultimately, achieving what we all actually want (strong isolation guarantees that would prevent a full browser exploit from both A) stealing your SSH keys and B) attacking your Gmail spool, and let's be honest, B is the worst-case scenario) requires a rethinking of the fundamental software stack, from the OS up to user-visible applications. No amount of browsers-in-a-VM is a substitute.
> Security theatre. If your browser is attacked by an 0day of any sort, or malware or whatever, it will have access to the shared credentials and information inside your browser.
Yes, running a VM won't protect against this threat.
Running the browser in a VM will, however, pretty much stop that 0-day from infecting the host OS, where it could become a persistent threat with access to a range of sensitive data.
This is not security theatre. You just have an incomplete threat model here.
The easiest way of getting security wrong is assuming it's all or nothing. This is a very common error I see here.
Security is about risk management and levels of protection. Saying locking your door is useless because someone might drive a car through it isn't going to help anything.
We need to take a step back and remember that Facebook paid for a 0-day exploit in Tails OS to catch a criminal [1]. Note, I'm not commenting on the morality of this, only that one can never really be "secure" even when using the most secure of protocols. FWIW, this is an interesting read.
What protocol are you referring to? Tails makes a lot of great efforts to prevent undesired network access, but at the end of the day it doesn't do any isolation via virtualisation, and the bundled video player was able to ping a server directly. I don't believe this approach would be possible against Qubes OS.
That doesn't protect any active sessions; if you assume the attacker is capable of exploiting the browser, then they can just exfiltrate an active session for any domain and bypass the login mechanism entirely. Done. This "reset state" approach can only protect against that if you completely wipe after every sensitive interaction on every unique origin, but at that point you're just doing all the hard work yourself.
Obviously I'm not saying 2FA isn't good, or that it doesn't mitigate some clearly related attacks like raw credential theft (whether or not the browser is exploited). My position is just that browsers-in-VMs is a mostly roundabout threat model whose actual benefits (such as some semblance of filesystem isolation) can be achieved other ways. The things these approaches cannot fix are otherwise systemic issues that require major redesigns to address.
> My position is just that browsers-in-VMs is a mostly roundabout threat model whose actual benefits (such as some semblance of filesystem isolation) can be achieved other ways.
What other ways would you recommend for filesystem isolation better than simply immutable VM running a web browser?
It's resource-heavy to browse in a VM, particularly if you don't use a paravirtual GPU (which would greatly increase the attack surface), and it's highly inconvenient to have your browser cleared all the time. It's not unheard of, though: Windows even has built-in functionality for this for high-security enterprise users.
Qubes is super cool. It's just a pity that it imposes a few too many constraints (especially around USB keyboards, for security reasons) and performance penalties to be a realistic alternative for me.
> WebKit: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
It’s also not an install most users are hunting down, it’s pretty much automagic. It mostly supports your point though, as this is why auto update exists.
But that's an enormous "if" that the vast, vast majority of people covered by that big scary patch number never actually have to deal with. The point isn't that getting compromised is a minor inconvenience; it's that what the VM protects against is such a rare major inconvenience that, over time, it's many times more inconvenient to constantly deal with the smaller inconvenience of browsing through a VM.
YOLO works until it doesn't. It's usually self-correcting. In business, it may be corrected by involuntary regulation.
Browser VMs are not the only option. Regular OS wipe/install is another, e.g. rotate between two dedicated browsing devices with native performance. One indicator of a compromised device is a reduction in perceived performance.
HP SureClick or MS AppGuard Edge is another level of complexity: every network connection and browser tab is a separate stateless micro-VM whose output is dynamically composited into a single display, with optional analytics of traffic and malware within each isolated micro-VM.
As for "I'm not important enough to be a target", some humans are on education or career paths to change that calculation. Some adversaries may see value in early access to up-and-coming targets. As the cost of targeting falls, the bar for "important enough" also falls.
You can't just say it could be bad one day, therefore everyone should do <x> now; that's fear-mongering, not supporting reasoning. For instance, it could be that everyone falls victim to a hypervisor security bug, so nobody should trust VM browsing. It could be that everyone falls victim to a firmware bug, so nobody should trust reusing a device. At some point you have to accept that the mere possibility of a bad scenario isn't enough on its own; it needs to be actually weighed and compared.
Sure, there are e.g. certain high-security businesses and certain high-risk individuals that should consider higher-security options (or, in some cases, be regulated into them). That it only applies under certain conditions is precisely why it isn't for the vast majority, though; if it were, you wouldn't need to specify corner cases.
Security is about judging how to stay as far up the curve as you can without it costing you more than you'd realistically lose to do so. It is not about closing every conceivable hole in your attack surface to achieve minimal risk.
I'd also add there is a counter to the ever-increasing cost/reward ratio of targeting: the ever-decreasing complexity of implementing the security mitigations for the "next level" of security. In a decade, browsing via VM may be commonplace for the average user (though probably more persistent for that use case) and not require a thought to use. That doesn't make it any different today, but it points out there is more than "threats have increased" that can change what's a reasonable place to be on the security curve.
> You can't just say it could be bad one day therefore everyone should do <x> now
Who said "everyone" or "one day"? It's bad today, especially for those who assume they are not affected, even though they have never done forensics to test that assumption.
An example: most software incorporates other software as dependencies. As a developer, if a downstream consumer of your software is regulated, your software business could be regulated as a dependency. This also applies to open-source projects. If your software becomes regulated, then the dev/build environment for that software may be regulated. The details are being worked out now, this is not some distant future. https://fossa.com/blog/cybersecurity-executive-order-softwar...
The time will come when more endpoint devices will not be able to connect to sensitive services, because of missing security properties of the endpoint. The definition of sensitive services could be regulated, e.g. CI/CD system. As a software developer, that could mean your dev workstation (including browser configuration) cannot be used to change/publish code without clearing a security bar. https://docs.microsoft.com/en-us/security/compass/privileged...
> there is more than "threats have increased" that can change what's a reasonable place to be on the security curve.
Yes, there is also "damages have increased", so more stakeholders have an interest in consensus definitions and enforcement of reasonable, in specific contexts.
We're a couple layers deep now but the question that started the chain was:
> I’m curious why this isn’t a more popular setup?
If we're no longer talking about that but saying general security implementations and requirements will be tighter at some point in the future then sure, full agree. If we're talking about VM based browsing and why people aren't using it today then I'm not sure how any of this applies outside a tiny fraction of a percentage of machines browsed from.
That's a weird opinion, like all extreme opinions. It's not meant to be perfect; it's a layer of security that mitigates some issues and hopefully exposes as few new ones as possible. Virtualisation kills almost all local IPC / filesystem / shared-memory avenues of privilege escalation through other services. It even mitigates most kernel-level exploits, because after one of those you still need to break out of the VM itself.
TLDR: P(non-root-vm-breakout-not-requiring-app-breakout) < P(app-breakout) and P(non-root-vm-breakout) < P(local-pe | system-service-exploit)
The simple solution is to provision a VM with your browser of choice and take a snapshot of it. Every time you use it, restore to the "vanilla" snapshot to revert its state.
If using a Linux VM, your hypervisor's checkpoint capabilities should suffice. If you want to go a step further (albeit with the caveat of using Windows), Deep Freeze by Faronics will revert your Windows VM on each boot.
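For the libvirt/KVM case, a rough sketch of that snapshot workflow (the domain name `browser-vm` is hypothetical, and internal snapshots like this assume a qcow2 disk):

```shell
# One-time: take a "vanilla" snapshot of a freshly installed, shut-down VM.
virsh snapshot-create-as browser-vm vanilla "clean browser install"

# Before each session: revert to the clean state, then boot.
virsh snapshot-revert browser-vm vanilla
virsh start browser-vm
```

Since the snapshot was taken while the VM was shut off, reverting leaves it shut off, and `virsh start` boots it fresh each time.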
Speaking of sandboxing, is there a way on macOS to contain corpocrap(TM) that insists on running installation wizards that require admin privileges in 2022?
My employer requires a certain unpopular remote-access client suite that installs unnecessary background services running as root. The reliance on a certain unstable audio-streaming plugin for Skype calls makes everything harder to run in a VM.
I do this: kvm/libvirt on Linux, Linux browsers, SPICE/virt-viewer. I don't roll back the VMs like you're suggesting, although that does seem like a good idea to start doing. In addition to the VM-based isolation, the VMs run on a completely separate machine. One of the other major features this gets me is that my router sends traffic to different places depending on which VM it's coming from. Casual web browsing goes out a rotating cloud IP (need to move this to the EU sometime), bank and other surveillance-based authentication sites go out my uplink directly (fuckers), torrent traffic goes out a commercial VPN, and embedded-device configuration gets no WAN.
Performance is acceptable, even for videos and the like. I'm sure it's considerably slower, but it works for me. I also see adding a bit of a speed bump that mentally distances the web from my main computing environment as a benefit.
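The per-VM routing above can be sketched with Linux policy routing on the router (the addresses, table number, and `wg0` VPN interface are all made-up placeholders, not the commenter's actual config):

```shell
# Give traffic from the torrent VM (10.0.30.2 here, hypothetical) its own
# routing table whose default route goes out the commercial VPN tunnel.
ip route add default dev wg0 table 100
ip rule add from 10.0.30.2 lookup 100

# Embedded-device VMs get no WAN at all: drop anything leaving the LAN.
iptables -A FORWARD -s 10.0.40.0/24 ! -d 10.0.0.0/8 -j DROP
```

The `ip rule` lookup happens before the main routing table, so each source address can get a completely different default route.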
There is a built-in feature, known as Application Guard, on Windows 10/11 that gives you exactly this out of the box, with minimal configuration. Biggest downside is that it only works with Edge.
This seems to be an artificial limitation in Windows Sandbox, as WSL2 and Edge Application Guard both use separate VMs and you can run them all at once.
"Krypton" is the name of the isolated microVMs in Hyper-V, but they don't really document it at all.
I do a fair amount of uploading/downloading of docs and images. And some of the "cookie pre-fills stuff for me" is useful. I know you can work around all that, but I'm lazy. I suspect lazy people like me are the primary reason it's not popular.
Because most public web sites, especially those with ads, execute untrusted code on your local device, requiring firmware, OS, browser-sandbox, and web-page access-control contortions for constantly attacked browser engines.
I’ve been a WordPress dev since 3.0.
It is a tool. And a very good one. Its market-share dominance is not because it is proprietary (it is not) or because of vendor lock-in (there is none). It is because it works very well, it is lightweight, it runs on pretty much any host, and it is friendly to devs and end users.
Part of being a good dev is knowing which tool to use (and when to not use a tool) and how to get the most out of the tool.
WordPress is lightweight? The 5.9 download is 19.7MB zipped, and that's without any plugins. Then there's the lock-in to the WordPress framework. Not sure whether that qualifies as "vendor" lock-in, but it's a pretty big dependency that's impossible to break out of once you go beyond a simple blog.
You can easily get locked in to some theme builder or UI toolkit. Half the WP folks I know swear by some theme X (Genesis, etc.) and will just rewrite everything in theme X if they take over a project. You're not "locked in" to WP; you're "locked in" to themes and plugins.
For what it's worth, I'm running the absolute cheapest DigitalOcean droplet money can buy ($5/mo) and that comes with 25GB of NVME storage as standard.
I have become convinced that words like “lightweight” and “fast” have lost all meaning in software circles. You’ll get two projects describing themselves in similar terms because they’re comparing themselves to wildly different things. (For example: Hexo and Zola both have a heading on their front pages titled “blazing fast”, but for equivalent loads Zola will be much faster than Hexo, probably a minimum of 5–10×, and Zola isn’t even particularly optimised, just written in Rust.)
The WordPress download is 20MB compressed, 63MB uncompressed, containing almost half a million lines of PHP spread across over a thousand files (~16MB; around 55% code, the remainder blank or comments), over half a million lines of JavaScript in over 500 files (~25MB; around 67% code), a couple of hundred thousand lines of CSS in 700 files (~7.5MB; around 75% code), and the odd spot of other languages. (This is in languages where more code fairly directly means slower, even if faster code can claw that back; whereas in compiled languages, more code just means slower compilation, and runtime performance is comparatively uncorrelated.)
That’s not particularly lightweight.
Conceptually WordPress isn’t lightweight, either. I’ve never used WordPress myself, but I’ve seen three people using it, and in each case they were being drowned in choices and data fields, using two or three fields out of the thirty or more fields it was shoving in their faces; and the themes and plugins and such made matters considerably worse and more complex; and a lot of that complexity didn’t seem well-structured, though my observation has only ever been brief.
If you'd like to share some of your wisdom as someone who has worked with it for such a long time, I posted another question here about actually using WordPress for CRUD: https://news.ycombinator.com/item?id=30179205