ripdog's comments | Hacker News

Windows has containers?

Yes.

There are AppContainers. Those have existed for a while and are mostly targeted at developers intending to secure their legacy applications.

https://learn.microsoft.com/en-us/windows/win32/secauthz/app...

There's also Docker for Windows, with native Windows container support. This one is new-ish:

https://learn.microsoft.com/en-us/virtualization/windowscont...


The low-level API for process isolation on Windows is Job Objects, which provide the necessary kernel APIs for namespacing objects and controlling resource use.

AppContainers and Docker for Windows (the one for running dockerized Windows apps, not Linux Docker containers on top of WSL) both use this API; these high-level features are just the 'porcelain'.
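To make the plumbing-vs-porcelain point concrete, here's a minimal sketch of poking the Job Object API directly from Python via ctypes. It's my own illustration, not taken from the linked docs; the constants and struct layout are from winnt.h, and the cap of 4 concurrent processes is just an arbitrary example.

    import ctypes
    from ctypes import wintypes

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    kernel32.CreateJobObjectW.restype = wintypes.HANDLE
    kernel32.GetCurrentProcess.restype = wintypes.HANDLE
    kernel32.SetInformationJobObject.argtypes = (
        wintypes.HANDLE, ctypes.c_int, ctypes.c_void_p, wintypes.DWORD)
    kernel32.AssignProcessToJobObject.argtypes = (wintypes.HANDLE, wintypes.HANDLE)

    # Values from winnt.h
    JobObjectBasicLimitInformation = 2
    JOB_OBJECT_LIMIT_ACTIVE_PROCESS = 0x00000008

    class JOBOBJECT_BASIC_LIMIT_INFORMATION(ctypes.Structure):
        _fields_ = [
            ("PerProcessUserTimeLimit", wintypes.LARGE_INTEGER),
            ("PerJobUserTimeLimit",     wintypes.LARGE_INTEGER),
            ("LimitFlags",              wintypes.DWORD),
            ("MinimumWorkingSetSize",   ctypes.c_size_t),
            ("MaximumWorkingSetSize",   ctypes.c_size_t),
            ("ActiveProcessLimit",      wintypes.DWORD),
            ("Affinity",                ctypes.c_size_t),  # ULONG_PTR
            ("PriorityClass",           wintypes.DWORD),
            ("SchedulingClass",         wintypes.DWORD),
        ]

    # Create an anonymous job object and cap it at 4 concurrent processes.
    hjob = kernel32.CreateJobObjectW(None, None)
    limits = JOBOBJECT_BASIC_LIMIT_INFORMATION()
    limits.LimitFlags = JOB_OBJECT_LIMIT_ACTIVE_PROCESS
    limits.ActiveProcessLimit = 4
    kernel32.SetInformationJobObject(hjob, JobObjectBasicLimitInformation,
                                     ctypes.byref(limits), ctypes.sizeof(limits))

    # Put the current process (and any children it spawns) into the job.
    kernel32.AssignProcessToJobObject(hjob, kernel32.GetCurrentProcess())

The higher-level features layer namespacing and policy on top of the same primitives.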


Windows containers are actually quite nice once you get past a few issues. Perf is the biggest, as it seems to run in a VM on Windows 11.

Perf is much better on Windows Server. It's actually really pleasant to get your office appliances (a build agent, etc.) into a container on a beefy machine running Windows Server.


> Perf is the biggest, as it seems to run in a VM on Windows 11.

Doesn’t “virtualization-based security” mean everything does, container or no? Or are they actually VMs even with VBS disabled?


With a standard Windows Server license you are only allowed two Hyper-V virtual machines but unlimited "Windows containers". The design is similar to Linux, with namespaces bolted onto the main kernel, so they don't provide any better security guarantees than Linux namespaces.

Very useful if you are packaging trusted software and don't want to upgrade your Windows Server license.


I'd argue that's more because the average person has no interest in installing a new OS, or even any idea what an OS is.

Most people just keep the default. When the default is Linux (say, the Steam Deck), most people just keep Linux.


Because latency matters when gaming in a way that it doesn't with AI inference?

Plus cloud gaming is always limited in its range of games, and there are restrictions on how you can use the PC (like no modding and no swapping savegames in or out).


Yep, those are exactly the same considerations. LLM providers will have inconsistent latency and throughput due to batching across many users, while training on cloud GPU servers can have inconsistent bandwidth and delays when uploading mass training data. LLM providers are always limited in how you can use them (often no LoRAs, no finetuned models, and prompt restrictions).


AFAIK it's not really possible to implement Everything on Linux, because Everything relies on reading the entire file list at once from the NTFS metadata (the master file table), allowing it to index at incredible speed. On Linux, there are dozens of filesystems, which likely makes it impossible to achieve the same.

That said, I do wonder why Linux GNOME search is always so slow even on indexed files.


> which likely make it impossible

Huge exaggeration.

It just needs to be ext4, and like 90% would be covered by that.

For the remaining 10% (and only if you care), XFS and btrfs would be more than enough.


It's incredibly obnoxious when people type "in my country" as if we're all supposed to just... know where they live. It's also incredibly common. Why do people do this?


The actual country is not relevant; the important part is that countries exist where this is the case. Mentioning the specific country invites potential bias, meaning people may not take the concern seriously, thinking their own country wouldn't do the same.


I usually say that to let people know: I'm not from the US, and I'm not comfortable letting people know which country I am living in.


Imagine asking someone where they’re from only to be told a US state, and only the state.


Asking where somebody's from and having them respond with the state is not unreasonable -- you can already tell they’re American from the accent. The US is huge; about half of its states have more land area than half of the countries in the world. Asking where someone is from and receiving "the US" in response is about as informative as someone from Europe replying "Europe". Like yeah, obviously, I could tell by your accent, but where in Europe?


Funny thing is that Americans do that all the time, even in international settings like a coworking space full of expats. Everybody introduces themselves with a "hi, I'm from this country", except Americans, who give their state or city. Are they expecting us to be familiar with their geography, or are they just unaware of alternative geographical frames of reference?


I don't think that is strange at all. If you can reasonably assume the person you are talking to is aware of e.g. England, Minnesota, Scotland, Tasmania, Sicily or whatnot, you can go straight for that?


Do you assume everybody is able to recognize Americans or Europeans "from their accent"?


Americans? Honestly, yes. If not, what good is this cultural imperialism after all?


I'd think passive recognition of a fair few states would be a pretty low bar for relatively educated, English-speaking people: just placing a region with its country. People also regularly assume that level of knowledge for globally- or culturally-relevant cities.

Maybe I think too highly of people, but I'd also imagine most would be able to get say... 6/10 right, for which countries the following list is from:

- Flanders

- Nova Scotia

- Brandenburg

- Guangzhou

- Tasmania

- Minas Gerais

- Catalonia

- Chechnya

- West Bengal

- Bali


Apart from Georgia, I don't see how this could be a problem


> Imagine asking someone where they’re from only to be told a US state, and only the state.

Atlanta or Tbilisi?


>Or just run a resolver yourself.

I did this for a while, but ~300ms hangs on every DNS resolution sure do get old fast.


Ouch. What resolver? What hardware?

With something like an N100- or N150-based single board computer (perhaps around $200) running any number of open source DNS resolvers, I would expect you can average around 30 ms for cold lookups and <1 ms for cache hits.
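If you want to put numbers on that yourself, here's a rough sketch for timing cold vs. cached lookups. It assumes dnspython is installed and a resolver (unbound, dnsmasq, whatever) is listening on 127.0.0.1; the domain list is arbitrary.

    import time
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["127.0.0.1"]  # your local resolver

    def timed_lookup(name):
        start = time.perf_counter()
        resolver.resolve(name, "A")
        return (time.perf_counter() - start) * 1000  # milliseconds

    for name in ["example.com", "news.ycombinator.com"]:
        cold = timed_lookup(name)  # first query: full recursion to the authoritatives
        warm = timed_lookup(name)  # second query: should come from the local cache
        print(f"{name}: cold {cold:.1f} ms, cached {warm:.1f} ms")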


Not a hardware issue, but a physics problem. I live in NZ. I guess the root servers are all in the US, so that's 130ms per trip minimum.


The root servers aren't the problem. They are heavily anycasted, and I'm sure there are many in .nz. If that were the issue, you could simply serve the root zone yourself; at least some of the root servers allow AXFR. [0] This info is also easily cacheable: the records have big TTLs and you only have to do the lookup once per TLD. The authoritative name servers of the domain you want to access, on the other hand, are often just in the US or Europe, and those are the main issue.

Edit: How to serve the root zone locally with unbound. https://old.reddit.com/r/pihole/comments/s43o8j/where_does_u...

[0] dig axfr . @k.root-servers.net


Thank you for the correction, I did get that wrong. To be clear, was there no easy solution to get reliable, low-latency DNS responses from my own resolver, short of breaking TTLs by forcibly caching entries longer?


Not that I know of, apart from having a big cache and many users that keep it warm. As I said, you could run a local root zone, but that only saves you the one-time lookup (every week or so) of the TLD name servers, and the root servers are generally very close to you anyway. There is a map of all root servers [0]; there are 12 in .nz alone. A few ccTLDs provide their zone via AXFR [1], so you could add those to your resolver to save some roundtrips, but I don't think having .ch or .se locally will make a big difference, and they are 1.2 GB each and you would need to download them daily.

[0]: https://root-servers.org/

[1]: https://github.com/jschauma/tld-zoneinfo


They are not all in the US.


I was going to reply about how New Zealand is as far from almost everywhere else as the US, but I found out something way more interesting: Other than servers in Australia and New Zealand itself, the closest ones actually are in the US, just 3,000km north in American Samoa. Basically right next door. (I need to go back to work before my boss walks by and sees me screwing around on Google Maps, but I'm pretty sure the next closest are in French Polynesia.)


Well, that's the experience I had. Obviously caching was enabled (unbound), but most DNS TTLs are so short as to be fairly useless for a single user.

Even if a root server weren't in the US, it would still be pretty slow for me. Europe is far worse. Most of Asia has bad paths to me, except for Japan and Singapore, which are marginally better than the US. Maybe Aus has one...?


According to [0], there is at least one in Auckland. No idea about the veracity of that site, though.

[0] https://dnswatch.com/dns-docs/root-server-locations


Cloudflare actually runs one of the root servers (https://blog.cloudflare.com/f-root/).


>DNS TTLs are so short as to be fairly useless

Incompetent admins. dnsmasq at least has an option to override it (--min-cache-ttl=<time>)


Um, so, how are Mozilla supposed to get the hundreds of millions of dollars a year it costs to pay engineers to maintain an evergreen browser without Google's funding?


How did they survive without their funding before they got it? And don't say that web standards are much more complex nowadays - yes, they are, because it is in Google's interest to make them such. Will it hurt? Yes. Will Firefox survive? I hope so. Is it a bad idea? No.


> How did they survive without their funding before they got it?

They were initially Netscape, a commercial company, so they had money from their customers.

After the browser code base was handed over from Netscape/AOL to the Mozilla Foundation in 2003, they got donations from AOL, IBM, Red Hat, etc. which kept them going for a few more years.

The Mozilla Foundation signed the deal with Google two years later, in 2005.

In short, they survived first on commercial revenues and then from donations, neither of which are substantial now.


Informative. AOL also sued Microsoft for, IIRC, $1 billion over their illegal Netscape shenanigans. That kept Mozilla going for several years when they really didn't have a product; they probably would have been shut down otherwise.


>If I have users that use OpenAI through my API keys am I responsible?

Yes. You are OpenAI's customer, and they expect you to follow their ToS. They do provide a moderation API to reject inappropriate prompts, though.
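Roughly what that looks like in practice, as a sketch: this assumes the v1 openai Python SDK with OPENAI_API_KEY set in the environment, and the model names are just examples, not anything prescribed by their docs.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def is_allowed(user_prompt: str) -> bool:
        # Screen the user-supplied prompt with the moderation endpoint first.
        result = client.moderations.create(
            model="omni-moderation-latest",
            input=user_prompt,
        )
        return not result.results[0].flagged

    prompt = "some user-supplied text"
    if is_allowed(prompt):
        # Only forward prompts that passed moderation to the chat endpoint.
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        print(reply.choices[0].message.content)
    else:
        print("Prompt rejected by moderation.")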


"Emulator" is the wrong word, but the answer is yes. The word you actually meant was "re-implementation" - writing a completely new, clean-room program which reads Source data files (levels, assets, scripts) and allows the user to play a Source game is perfectly legal.

It is necessary to avoid distributing any copyrighted material, so the user must supply the game assets from a legitimate copy for use of the program to be legal. In addition, the 'clean room' must be maintained by ensuring that no contributors to the re-implementation have ever seen the source code for Source, or they become tainted with forbidden knowledge.

Indeed, it's quite common for beloved old games to be re-implemented on new codebases to allow easy play on modern OS's and at high resolution, etc.

See https://github.com/Interkarma/daggerfall-unity, https://openrct2.io/, https://github.com/AlisterT/openjazz


A somewhat notorious example of "never having seen the proprietary code" was the whole Mono and Rotor fiasco. Rotor was a source-available implementation of .NET (Framework; Core didn't exist yet), with a highly restrictive license. If memory serves, someone had read the Rotor source and contributed to Mono, causing a legal nightmare.


Then just turn it off. qBT isn't Windows; it doesn't demand auto-updates.

That said, you really shouldn't be running outdated torrent clients, or any other network-connected programs. Case in point: the topic of this thread.

