The low-level API for process isolation on Windows is Job Objects, which provides the kernel primitives for namespacing objects and controlling resource use.
AppContainers and Docker for Windows (the variant that runs dockerized Windows apps, not Linux Docker containers on top of WSL) are built on this API; those high-level features are just the 'porcelain'.
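For a feel of what the raw API looks like, here is a minimal, untested sketch that puts a child process into a job object with a memory cap (standard Win32 calls; error handling mostly omitted, and the 256 MB limit and notepad.exe are just placeholders):

```c
#include <windows.h>

int main(void)
{
    /* Create an unnamed job object: the kernel handle that groups processes. */
    HANDLE job = CreateJobObject(NULL, NULL);
    if (!job) return 1;

    /* Cap each process in the job at ~256 MB of committed memory and kill
       everything in the job when the last handle to it is closed. */
    JOBOBJECT_EXTENDED_LIMIT_INFORMATION limits = {0};
    limits.BasicLimitInformation.LimitFlags =
        JOB_OBJECT_LIMIT_PROCESS_MEMORY | JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
    limits.ProcessMemoryLimit = 256 * 1024 * 1024;
    SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                            &limits, sizeof(limits));

    /* Start a child suspended, assign it to the job, then let it run. */
    STARTUPINFO si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    TCHAR cmd[] = TEXT("notepad.exe");      /* placeholder child process */
    if (CreateProcess(NULL, cmd, NULL, NULL, FALSE, CREATE_SUSPENDED,
                      NULL, NULL, &si, &pi)) {
        AssignProcessToJobObject(job, pi.hProcess);
        ResumeThread(pi.hThread);
        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    CloseHandle(job);
    return 0;
}
```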
Windows containers are actually quite nice once you get past a few issues. Perf is the biggest, as they seem to run in a VM on Windows 11.
Perf is much better on Windows Server. It's actually really pleasant to get your office appliances (a build agent, etc.) into containers on a beefy machine running Windows Server.
With a standard Windows Server license you are only allowed two Hyper-V virtual machines, but unlimited "Windows containers". The design is similar to Linux, with namespaces bolted onto the main kernel, so they don't provide any better security guarantees than Linux namespaces.
Very useful if you are packaging trusted software and don't want to upgrade your Windows Server license.
Because latency matters when gaming in a way which doesn't matter with AI inference?
Plus, cloud gaming is always limited in its range of games, and there are restrictions on how you can use the PC (like no modding and no swapping savegames in or out).
Yep, those are exactly the same considerations. LLM providers will have inconsistent latency and throughput due to batching across many users, while training on cloud GPU servers can suffer inconsistent bandwidth and delays when uploading mass training data. LLM providers are also always limited in how you can use them (often no LoRAs or finetuned models, plus prompt restrictions).
AFAIK it's not really possible to implement Everything on Linux, because Everything relies on reading the entire file list at once from the NTFS metadata, which lets it index at incredible speed. On Linux there are dozens of filesystems, which likely makes it impossible to achieve the same.
That said, I do wonder why GNOME search on Linux is always so slow even on indexed files.
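For anyone curious about the NTFS part: the trick is that the whole file list can be pulled straight out of the MFT via the USN/change-journal ioctls instead of walking directories. A rough, untested sketch of that technique (not Everything's actual code; it needs admin rights, and error handling is omitted):

```c
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    /* Open the raw volume; this requires administrator privileges. */
    HANDLE vol = CreateFileW(L"\\\\.\\C:", GENERIC_READ,
                             FILE_SHARE_READ | FILE_SHARE_WRITE,
                             NULL, OPEN_EXISTING, 0, NULL);
    if (vol == INVALID_HANDLE_VALUE) return 1;

    MFT_ENUM_DATA_V0 enumData = {0};   /* start at file reference 0 */
    enumData.HighUsn = MAXLONGLONG;

    DWORDLONG buffer[8 * 1024];        /* 64 KB output buffer, 8-byte aligned */
    DWORD bytes;
    unsigned long long count = 0;

    /* Each DeviceIoControl call returns a whole batch of records taken
       straight from the MFT -- no per-directory traversal is needed. */
    while (DeviceIoControl(vol, FSCTL_ENUM_USN_DATA,
                           &enumData, sizeof(enumData),
                           buffer, sizeof(buffer), &bytes, NULL)) {
        /* The first 8 bytes are the file reference to resume from;
           the rest is a run of USN_RECORD structures. */
        BYTE *p = (BYTE *)buffer + sizeof(DWORDLONG);
        while (p < (BYTE *)buffer + bytes) {
            USN_RECORD *rec = (USN_RECORD *)p;
            count++;                    /* rec->FileName holds the file name */
            p += rec->RecordLength;
        }
        enumData.StartFileReferenceNumber = buffer[0];
    }
    printf("Enumerated %llu MFT records\n", count);
    CloseHandle(vol);
    return 0;
}
```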
It's incredibly obnoxious when people type "in my country" as if we're all supposed to just... know where they live. It's also incredibly common. Why do people do this?
The actual country is not relevant; the important part is that countries exist where this is the case. Mentioning the specific country invites bias: people may not take the concern seriously, thinking their own country wouldn't do the same.
Asking where somebody's from and having them respond with the state is not unreasonable -- you can already tell they're American from the accent. The US is huge; about half of its states have more land area than half of the countries in the world. Asking where someone is from and receiving "the US" in response is about as informative as someone from Europe replying "Europe". Like yeah, obviously, I could tell by your accent, but where in Europe?
Funny thing is that Americans do that all the time, even in international settings like a coworking space full of expats. Everybody introduces themselves with "hi, I'm from this country", except Americans, who give their state or city. Are they expecting us to be familiar with their geography, or are they just unaware of alternative geographical frames of reference?
I don't think that is strange at all. If you can reasonably assume the person you are talking to is aware of, e.g., England, Minnesota, Scotland, Tasmania, Sicily, or whatnot, you can go straight for that.
I'd think passive recognition of a fair few states would be a pretty low bar for relatively educated, English-speaking people: it's just placing a region with its country. People also regularly assume that level of knowledge for globally or culturally relevant cities.
Maybe I think too highly of people, but I'd also imagine most would be able to get, say, 6/10 right on which countries the following list is from:
With something like an N100- or N150-based single-board computer (perhaps around $200) running any of the open source DNS resolvers, I would expect you could average around 30 ms for cold lookups and <1 ms for cache hits.
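A minimal sketch for checking those numbers yourself, assuming a Linux box whose /etc/resolv.conf points at the local resolver and a name that isn't already in its cache (libresolv's res_query; build with -lresolv):

```c
/* Cold-vs-warm DNS timing sketch. The first query forces the local
 * resolver to recurse; the second should be answered from its cache. */
#include <stdio.h>
#include <time.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <arpa/nameser.h>
#include <resolv.h>

static double query_ms(const char *name)
{
    unsigned char answer[NS_PACKETSZ];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    (void)res_query(name, ns_c_in, ns_t_a, answer, sizeof(answer));
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
}

int main(void)
{
    const char *name = "example.com";   /* pick a name not already cached */
    res_init();
    printf("cold: %.2f ms\n", query_ms(name));
    printf("warm: %.2f ms\n", query_ms(name));
    return 0;
}
```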
The root servers aren't the problem. They are heavily anycasted, and I'm sure there are many instances in .nz. If that were the issue you could simply serve the root zone yourself; at least some of the root servers allow AXFR. [0] This info is also easily cacheable: the records have big TTLs and you only have to do the lookup once per TLD. The authoritative name servers of the domain you want to reach, on the other hand, are often just in the US or Europe, and those are the main issue.
Thank you for the correction, I did get that wrong. To be clear, is there no easy way to get reliable, low-latency DNS responses from my own resolver without overriding TTLs and forcibly caching entries for longer?
Not that I know of, apart from having a big cache and many users to keep it warm. As I said, you could run a local root zone, but that only saves you the occasional (weekly or so) lookup of the TLD name servers, and the root servers are generally very close to you anyway. There is a map of all root servers; there are 12 instances in .nz alone. A few ccTLDs provide their zones via AXFR [1], so you could add those to your resolver to save some round trips, but I don't think having .ch or .se locally will make a big difference, and they are around 1.2 GB each and would need to be downloaded daily.
I was going to reply about how New Zealand is about as far from almost everywhere else as it is from the US, but I found out something way more interesting: other than the servers in Australia and New Zealand itself, the closest ones actually are in the US, just 3,000 km north in American Samoa. Basically right next door. (I need to go back to work before my boss walks by and sees me screwing around on Google Maps, but I'm pretty sure the next closest are in French Polynesia.)
Well, that's the experience I had. Obviously caching was enabled (unbound), but most DNS TTLs are so short as to be fairly useless for a single user.
Even if a root server weren't in the US, it would still be pretty slow for me. Europe is far worse. Most of Asia has bad paths to me, except for Japan and Singapore, which are marginally better than the US. Maybe Aus has one...?
Um, so, how is Mozilla supposed to get the hundreds of millions of dollars a year it costs to pay engineers to maintain an evergreen browser without Google's funding?
How did they survive without their funding before they got it? And don't say that web standards are much more complex nowadays - yes, they are, because it is in Google's interest to make them such. Will it hurt? Yes. Will Firefox survive? I hope so. Is it a bad idea? No.
> How did they survive without their funding before they got it?
They were initially Netscape, a commercial company, so they had money from their customers.
After the browser code base was handed over from Netscape/AOL to the Mozilla Foundation in 2003, they got donations from AOL, IBM, Red Hat, etc. which kept them going for a few more years.
The Mozilla Foundation signed the deal with Google two years later, in 2005.
In short, they survived first on commercial revenues and then from donations, neither of which are substantial now.
Informative. AOL also sued Microsoft, for IIRC $1 billion, over their illegal Netscape shenanigans. That kept Mozilla going for several years when they really didn't have a product; they probably would have been shut down otherwise.
"Emulator" is the wrong word, but the answer is yes. The word you actually meant was "re-implementation" - writing a completely new, clean-room program which reads Source data files (levels, assets, scripts) and allows the user to play a Source game is perfectly legal.
It is necessary to avoid distributing any copyrighted material, so the user must provide the game assets from a legitimate copy for use of the program to be legal. In addition, the 'clean room' must be maintained by ensuring that no contributor to the re-implementation has ever seen the source code for Source, or they become tainted with forbidden knowledge.
Indeed, it's quite common for beloved old games to be re-implemented on new codebases to allow easy play on modern OSes, at high resolutions, etc.
A somewhat notorious example of "never having seen the proprietary code" was the whole Mono and Rotor fiasco. Rotor was a source-available implementation of .NET (the Framework; Core didn't exist yet) with a highly restrictive license. If memory serves, someone who had read the Rotor source contributed to Mono, causing a legal nightmare.