> They refused to invest in packaging to the extent that a separate company (astral) had to do it for them
uv didn't just happen in a vacuum; there has been a lot of investment in the Python packaging ecosystem that has enabled it (and other tools) to try to improve the shortcomings of Python packaging.
There's PEP 518 [1] for build requirements, PEP 600 [2] for manylinux wheels, PEP 621 [3] for pyproject.toml project metadata, PEP 656 [4] for musl wheel platform tags, and PEP 723 [5] for inline script metadata.
Without all this, uv wouldn't be a thing and we would be stuck with pip and setuptools, or a bunch more band-aid hacks on top making the whole thing brittle.
[1] https://peps.python.org/pep-0518/ [2] https://peps.python.org/pep-0600/ [3] https://peps.python.org/pep-0621/ [4] https://peps.python.org/pep-0656/ [5] https://peps.python.org/pep-0723/
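As a concrete illustration of PEP 723, a minimal sketch of inline script metadata looks like this (the dependency and Python version are just placeholders); a tool that understands it, such as `uv run`, can create a matching environment on the fly and run the file directly:

    # /// script
    # requires-python = ">=3.9"
    # dependencies = [
    #     "requests",
    # ]
    # ///
    import requests

    # The metadata block above is read by the runner, not by Python itself,
    # so the script stays a single self-contained file.
    print(requests.get("https://peps.python.org/pep-0723/").status_code)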
Obviously, but writing PEPs is not enough. Read through the comments under any Python thread here from the late 2010s to early 2020s. Just ~two years ago you couldn't talk about anything Python-related without the discussion veering far off-topic into complaints about packaging.
That's the thing, you don't have to :) While I think uv is a great tool and highly recommend it, you are more than welcome to use any of the other build backends or package management tools that fit your workflow. By having these packaging PEPs (amongst others), the ecosystem has been able to try out different approaches, and most likely over time it will consolidate on the specific ones that work better than the others.
Anecdata, but uv served as a very good packaging mechanism for a Python library I had to throw on an in extremis box, one that is not connected to the Internet in any way, and one where messing with the system Python was verboten and Docker was a four-letter word.
There shouldn't be any difference between those two values. I'm not saying you are wrong and it didn't break, but it's definitely surprising that a parser would choke on that rather than YAML itself being the problem.
Don't get me wrong, I can empathise with whitespace formatting being annoying, and having both forms be valid just adds confusion; it's just surprising to see this was the problem.
& has no special behaviour in strings; backticks and $ on the other hand do. For example, "&Some String&" and '&Some String&' are both the literal value `&Some String&`. Backticks and $ are special in double-quoted strings, as they are the escape character and the variable-reference character respectively.
Per-minute is really just a way to express the cost in a human-friendly unit. Doing per-hour, per-second, or per-day could all result in the same total cost, just expressed as a different number. If anything, per-minute is better than per-hour, as you won't be charged for minutes you don't use.
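As a quick worked example (with made-up rates, not GitHub's actual pricing), the same underlying price looks very different depending on billing granularity, and coarser granularity means paying for time you didn't use:

    import math

    # Hypothetical rates: the per-hour and per-minute prices express the same
    # cost, only the billing granularity differs.
    rate_per_hour = 0.48                    # made-up example rate
    rate_per_minute = rate_per_hour / 60    # 0.008

    job_minutes = 10
    print(job_minutes * rate_per_minute)                 # ~0.08 - pay only for minutes used
    print(math.ceil(job_minutes / 60) * rate_per_hour)   # 0.48  - whole-hour billing rounds up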
But why not make it "per GB Logs ingested" or "per triggered job" (or both)? These should reflect the points where GitHub also has costs - but not per minute.
> I don't knock it out of my head by having the wire catching on something
> Dealing with the cable and having to pack it back up when I'm done
> It auto connects to both my phone and laptop 99% of the time
> It easily swap between the 2 as I change the focus
Now, they aren't perfect (charging can be a bit fiddly over time), but they certainly are nicer than normal wired headphones. Maybe you just aren't the target audience, but clearly they are popular enough for most people.
This seems a bit pedantic. While you may be correct (I honestly don't know which standard this is referring to), the UTF-8 BOM is something some tools do know about. Even then, in the context of OP's question, the BOM with UTF-8 isn't the specific problem but rather how the shebang line is read as a raw ASCII byte sequence, so a UTF-16 "text" file with a BOM would also fail.
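A small sketch of why (the file name and contents are just for illustration): the loader only treats a file as a shebang script if the very first two bytes are "#!", and a UTF-8 BOM puts three extra bytes in front of them:

    # Write a script with a UTF-8 BOM in front of the shebang.
    with open("with_bom.sh", "wb") as f:
        f.write(b"\xef\xbb\xbf#!/bin/sh\necho hello\n")

    with open("with_bom.sh", "rb") as f:
        print(f.read(2))   # b'\xef\xbb' rather than b'#!', so the shebang is not recognised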
That's the realm side, which should be upper case. The comment reference was for the hostnames themselves, which I've always just done as lower case and have never seen a reason to make upper case. krb5.conf has a [domain_realm] section which can map a DNS name/suffix to the actual realm.
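For example (realm and domain names here are placeholders), a typical mapping in krb5.conf keeps the DNS names lower case and the realm upper case:

    [domain_realm]
        .example.com = EXAMPLE.COM
        example.com = EXAMPLE.COM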
The problem I have with using a gMSA outside of Windows is that you need a Kerberos principal, and a credential for that principal, in the first place to allow retrieving the gMSA details. Why not just use that principal and avoid adding the extra step?
It would be great if Linux had a mechanism where the host itself could act as the principal to retrieve the gMSA, like on Windows, but the GSSAPI model just works differently there and runs in-process. A similar problem exists for Kerberos FAST/armouring, where Windows uses the host's ticket to wrap the client request, but on Linux there is no privileged worker process that protects this ticket, so the client needs full access to it.
The closest thing I've seen is gssproxy [1], which tries to solve the problem where you want to protect host secrets from a client actually seeing them while still letting it use them, but I've not seen anything there to support gMSAs or armouring for client TGT requests.
No idea if the POSIX subsystem used NTFS or some other filesystem, but if it was NTFS it probably just used the same reparse data buffer. It's just that Windows only added a symlink buffer structure in Vista/2008. You can manually use the same data buffer on older Windows versions; the OS just won't know what to do with it, just like any other unknown reparse data structure.
The subsystem in question would be the one to handle the logic for the syscall, so the POSIX subsystem would use the reparse data buffer as needed; it's just that the Win32 subsystem added its own symlink one in Vista/2008.
This is all a guess; the POSIX subsystems were a bit before my time and I've never actually used them. I just know how symlinks work on Windows/NTFS and when they were added.
There are so many features that .NET 5+ brings to the table. Even if the features aren't important to you, the performance improvements you get with the newer versions should be enough to justify moving to it.
I agree the support side is annoying, but honestly the support is really just "security" fixes, with security being a very hard thing to define here, which gives MS a lot of wiggle room to not actually support it.