Mount points were key to the early history of the split. Nowadays it's more about not breaking shebangs.
Nearly every shell script starts with "#!/bin/sh", so you can't drop /bin. Similarly, nearly every python script starts with "#!/usr/bin/env python", so you can't drop /usr/bin.
> initrd seems like an enormous kludge that was thrown together temporarily and became the permanent solution.
Eh, kinda. That's where the "essential" .ko modules are packed - the ones the system would fail to boot without.
The alternative is to compile them into the kernel as built-ins, but from the distro maintainers' perspective that means building in way too many modules, most of which will remain unused.
If you're compiling your own kernel, that's a different story; you can often do without an initrd just fine.
Claude spits that out very regularly at the end of an answer when it's clearly out of its depth and wants to steer the discussion away from that blind spot.
Perhaps being more intentional about adding a use case to your original prompts would make sense if you see that failure mode frequently? (Treating LLM failures as prompting errors tends to give the best results, even if you feel the LLM "should" have worked with the original prompt.)
When deployed on a popular server, one bit of "IP intelligence" this detector could gather itself is a database of the lowest-seen RTT per source IP, maybe with some filtering - to cut out "faster-than-light" datapoints, gracefully update when the actual network topology changes, etc.
That would establish a baseline, and from there, any extra end-to-end RTT (say, from a proxy hop) should become much more visible.
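A minimal sketch of what that per-IP baseline could look like, assuming a simple in-memory store; the class name, floor value, and decay interval are all made up for illustration:

    import time

    # Illustrative sketch of a lowest-seen-RTT baseline per source IP.
    # The threshold and decay values are arbitrary, not from any real detector.
    FLOOR_MS = 1.0            # discard "faster-than-light" samples below this
    DECAY_S = 7 * 24 * 3600   # let stale baselines heal after topology changes

    class RttBaseline:
        def __init__(self):
            self._db = {}  # ip -> (lowest_rtt_ms, last_seen_unix_time)

        def observe(self, ip: str, rtt_ms: float) -> float:
            """Record a sample, return how far it sits above this IP's baseline."""
            if rtt_ms < FLOOR_MS:
                return 0.0                    # implausible measurement, ignore
            now = time.time()
            best, seen = self._db.get(ip, (rtt_ms, now))
            if now - seen > DECAY_S:
                best = rtt_ms                 # forget an outdated baseline
            best = min(best, rtt_ms)
            self._db[ip] = (best, now)
            return rtt_ms - best

A consistent excess of tens of milliseconds over an IP's own baseline is the kind of signal that would make an extra proxy hop stand out.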
I imagine any big CDN implementing something like this could keep a database of all of it, combined with the old kind of IP intelligence, collecting not only this RTT but RTT at other layers too - TLS, HTTP, plain IP (i.e. ping, and traceroutes too) - plus TCP fingerprints, TLS fingerprints, HTTP fingerprints...
And with algorithms that combine and compare all these data points, I think very accurate models of the proxy could be built. For things like credit card fraud, that could be quite useful.
Larger storage structures are easier to (thermally) insulate. Because geometry.
But going with larger structures probably means aggregation (fewer of them are built, and further apart). Assuming the homes to be heated stay where they are, that requires longer pipes. Which are harder to insulate. Because geometry.
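A quick illustration of the geometry argument, assuming heat loss roughly proportional to surface area and stored heat proportional to volume (a sphere used purely as the simplest shape):

    import math

    # Loss ~ surface area, stored heat ~ volume, so loss per unit of
    # stored heat falls roughly as 1/r as the tank gets bigger.
    for r in (1, 5, 20):                       # tank radius in metres
        surface = 4 * math.pi * r ** 2
        volume = (4 / 3) * math.pi * r ** 3
        print(f"r = {r:>2} m: surface/volume = {surface / volume:.2f} per metre")
    # r =  1 m: surface/volume = 3.00 per metre
    # r =  5 m: surface/volume = 0.60 per metre
    # r = 20 m: surface/volume = 0.15 per metre

Pipes run the other way: a thin pipe has a large surface area per unit of water it carries, which is exactly why the long distribution runs are the hard part.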
I can't help but wonder how the efficiency compares to generating electricity, running that over wires, and having that run heat pumps.
The conversion to electricity loses energy, but I assume the loss is negligible in transmission, and then modern heat pumps themselves are much more efficient.
And the average high and low in February are 26°F and 14°F according to Google, while modern heat pumps are more energy-efficient than resistive heating above around 0°F. So even around 14–26°F, the coefficient of performance should still be 2–3.
So, in your scenario (heat->electricity conversion, then transmission, then electricity->heat conversion), overall efficiency is going to be 50% * 50% = 25%, assuming no transmission losses and state-of-the-art conversion on both ends.
25% efficiency (a.k.a. 75% losses) is a pretty generous budget to work with. I guess one could cover a small town or a city district with heat pipes and come out on top in terms of efficiency.
We've got lots of district heating systems around the world to use as examples. They only make sense in really dense areas. The thermal losses and the expense of maintaining them make them economically impractical for most areas other than a few core districts in urban centers... unless you have an excess of energy that you can't sell on the grid.
Geothermal heat is also not that functional in cities: you'd need so many wells so close together that you'd most likely cool the ground down enough in winter that your efficiency tanks.
I don't understand; what am I missing? The heat pump increases efficiency by having a COP of 2-4, right? Assuming air-to-air and being in, say, Denmark.
Heat (above 100C, say, burning garbage) to electricity: 50% (theoretical best case)
Electricity to heat (around 40C): 200%-400%
Net win?
The surplus energy comes from the air or ground temperature.
Yes, you cannot heat back up to the temperature you started with, but for underfloor heating 40C is plenty. And you can get a COP of 2 up to 60C shower water as well.
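Back-of-the-envelope version of that chain, taking the 50% heat-to-electricity figure and a COP of 2-4 at face value and ignoring transmission losses:

    # Illustrative arithmetic only, using the figures quoted in this thread.
    heat_to_electricity = 0.5                  # claimed theoretical best case
    for cop in (2, 3, 4):                      # heat pump coefficient of performance
        useful_heat = 1.0 * heat_to_electricity * cop
        print(f"COP {cop}: {useful_heat:.1f} units of ~40C heat per unit of stored heat")
    # COP 2: 1.0, COP 3: 1.5, COP 4: 2.0

So on those assumptions you break even at COP 2 and come out ahead above that, with the extra energy pumped in from the outside air or ground, as noted above.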
If the heat is stored at high temperature, but the demand (for heating buildings, say) is at lower temperature, it could make sense to generate power, then use that power to drive heat pumps. You could end up with more useful heat energy than you started with, possibly even if you didn't use the waste heat from the initial power generation cycle.
Alternatively, if you are going to deliver the heat at low temperature to a district heating system, you might as well use a topping cycle to extract some of the stored energy as work and use the waste heat, rather than taking the second-law loss of just directly downgrading the high-temperature heat to a lower temperature.
High temperature storage increases the energy stored per unit of storage mass. If the heating is resistive, you might as well store at as high a temperature as is practical.
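Rough numbers for the energy-per-mass point, assuming plain sensible-heat storage in water (Q = m·c·ΔT) and an arbitrary 40C return temperature; the storage temperatures are just examples (the hottest would need a pressurized tank or a different medium):

    # Sensible heat storage: energy = mass * specific_heat * delta_T.
    C_WATER = 4186                     # J/(kg*K), specific heat of water
    RETURN_C = 40                      # arbitrary example return temperature
    for store_c in (60, 95, 160):
        delta_t = store_c - RETURN_C
        kwh_per_tonne = C_WATER * 1000 * delta_t / 3.6e6
        print(f"store at {store_c}C: ~{kwh_per_tonne:.0f} kWh per tonne of water")
    # store at 60C:  ~23 kWh/tonne
    # store at 95C:  ~64 kWh/tonne
    # store at 160C: ~140 kWh/tonne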
Gas-fired heat pumps have been investigated for heating buildings; they'd have a COP > 1.
I'm interested in whether there are any cheap, small-scale external combustion engines available (steam? Stirling? ORC?).
It can be anything between easy and impossible depending on the temperature difference. 200 C steam is easy with a commercially available turbine, but 50 C is really hard. There are things like Stirling engines that can capture waste heat, but they've never really been commercially viable.
There's no way around it: We have to respect entropy.
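Putting numbers on that: the Carnot limit, eta = 1 - T_cold/T_hot with temperatures in kelvin, is what separates the easy and impossible ends here; the 25C rejection temperature below is an arbitrary example:

    # Carnot limit on heat-to-work conversion: eta = 1 - T_cold / T_hot (kelvin).
    T_COLD = 25 + 273.15               # arbitrary example rejection temperature
    for t_hot_c in (200, 100, 50):
        eta = 1 - T_COLD / (t_hot_c + 273.15)
        print(f"{t_hot_c}C source: Carnot limit ~{eta:.0%}")
    # 200C: ~37%, 100C: ~20%, 50C: ~8% - and real machines only get a fraction of that.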
Isn't that the unfortunate status quo? At least the hard requirement for JS, that is.
Google's homepage started requiring this recently. The Linux kernel's git, OpenWrt, esp32.com, and many, many others now require it too, via the dreaded "Making sure you're not a bot" thing.
"Wiring", which constitutes Arduino's primary API surface, was taken wholesale from Hernando Barragán's 2003 master's thesis project. It was a fork of processing for microcontrollers and was not written by the Arduino team: Massimo Banzi, David Cuartielles, David Mellis, Gianluca Martino, and Tom Igo.
Yeah, the software side is basically just an IDE, a build system, a package manager, and another system API (basically an alternative to libc). Which is useful for C++, but far from irreplaceable.
...except the current peak in demand is mostly driven by the build-out of AI capacity.
Both inference and training workloads are often bottlenecked on RAM speed, and trying to shoehorn older/slower memory tech in there would require a non-trivial amount of R&D to go into widening the memory bus on CPUs/GPUs/NPUs, which is unlikely to happen - those are in very high demand already.
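For a feel of why RAM speed is the bottleneck for inference, a very rough upper bound is that generating one token streams (most of) the weights through the memory bus once; the model size and bandwidth figures below are illustrative assumptions, not measurements:

    # Rough bound: tokens/s <= memory_bandwidth / model_size_in_bytes.
    # All numbers are illustrative assumptions.
    model_bytes = 70e9 * 2             # e.g. a 70B-parameter model at 2 bytes per weight
    for label, gb_per_s in (("~50 GB/s (DDR4-class)", 50),
                            ("~100 GB/s (DDR5-class)", 100),
                            ("~3000 GB/s (HBM-class)", 3000)):
        print(f"{label}: <= {gb_per_s * 1e9 / model_bytes:.1f} tokens/s per stream")
    # ~50 GB/s:   <= 0.4 tokens/s
    # ~100 GB/s:  <= 0.7 tokens/s
    # ~3000 GB/s: <= 21.4 tokens/s

Which is why these workloads keep pulling towards wider and faster memory rather than cheaper memory.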
Even if AI stuff really does need DDR5, there must be lots of other applications that would ideally use DDR5 but can make do with DDR3/4 if there's a big difference in price.
I mean, AI is currently hyped, so the most natural and logical assumption is that AI is the primary driver of these price increases. We need compensation from those AI corporations; they're costing us too much.
> Nearly every shell script starts with "#!/bin/sh", so you can't drop /bin. Similarly, nearly every python script starts with "#!/usr/bin/env python", so you can't drop /usr/bin.
Hence symlink.
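For what it's worth, the merge is easy to check; the output is whatever your system reports (on a usr-merged distro /bin typically resolves to /usr/bin):

    import os

    # On a usr-merged system, /bin is typically a symlink to /usr/bin, so both
    # "#!/bin/sh" and "#!/usr/bin/env python" style shebangs keep resolving.
    for path in ("/bin", "/sbin", "/lib"):
        print(f"{path} -> {os.path.realpath(path)} (symlink: {os.path.islink(path)})")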