kees99's comments

Mount points were key to the early history of the split. Nowadays it's more about not breaking shebangs.

Nearly every shell script starts with "#!/bin/sh", so you can't drop /bin. Similarly, nearly every python script starts with "#!/usr/bin/env python", so you can't drop /usr/bin.

Hence symlink.
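On a merged-/usr system you can see both classic shebang prefixes resolving to the same place; a quick check (illustrative only - exact targets vary by distro):

    import os
    # On "merged /usr" distros, /bin is a symlink into /usr, so the two
    # classic shebang prefixes end up pointing at files under /usr/bin.
    for p in ("/bin/sh", "/usr/bin/env"):
        print(p, "->", os.path.realpath(p))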


> initrd seems like an enormous kludge that was thrown together temporarily and became the permanent solution.

Eh, kinda. That's where the "essential" .ko modules are packed - the ones the system would fail to boot without.

The alternative is to compile them into the kernel as built-ins, but from the distro maintainers' perspective that means including way too many modules, most of which will remain unused.

If you're compiling your own kernel, that's a different story - often you can do without an initrd just fine.


Do you see "What's your use-case" too?

Claude spits that out very regularly at the end of an answer when it's clearly out of its depth and wants to steer the discussion away from that blind spot.


Perhaps being more intentional about adding a use case to your original prompts would make sense if you see that failure mode frequently? (Making a habit of treating LLM failures as prompting errors tends to give the best results, even if you feel the LLM "should" have worked with the original prompt.)

Hm, use CC daily, never seen this.

never ever saw that "What's your use-case" in Claude Code.

Very clever, I like it.

When deployed on a popular server, one bit of "IP intelligence" this detector itself can gather is a database of the lowest-seen RTT per source IP - maybe with some filtering to cut out "faster-than-light" data points, gracefully update when the actual network topology changes, etc.

That would establish a baseline, and from there, additional end-to-end RTT should become much more visible.
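A minimal sketch of that bookkeeping (hypothetical names and thresholds; real filtering and aging would need more care):

    import math, time

    PLAUSIBLE_FLOOR_MS = 1.0   # assumed cutoff for "faster-than-light" samples
    DECAY_PER_DAY = 1.05       # let baselines creep up ~5%/day so topology
                               # changes are eventually picked up

    baselines = {}  # source IP -> (lowest plausible RTT seen, last update time)

    def observe(ip, rtt_ms, now=None):
        now = now or time.time()
        if rtt_ms < PLAUSIBLE_FLOOR_MS:
            return baselines.get(ip, (None, now))[0]   # discard bogus sample
        low, ts = baselines.get(ip, (math.inf, now))
        low = min(low * DECAY_PER_DAY ** ((now - ts) / 86400), rtt_ms)
        baselines[ip] = (low, now)
        return low   # extra hops then show up as (measured RTT - baseline)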


First of all, thanks!

I imagine any big CDN implementing something like this could keep a database of all of this, combined with the old kind of IP intelligence - collecting not only RTT but also timings at other layers like TLS, HTTP, and IP (i.e. ping, and traceroutes too), plus TCP fingerprints, TLS fingerprints, HTTP fingerprints...

And with algorithms that combine and compare all these data points, I think very accurate models of the proxy could be made. And for things like credit card fraud this could be quite useful.
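As a toy example of the kind of cross-layer comparison that could feed such a model (illustrative only; names and numbers are made up): with a TCP-terminating proxy, the TCP handshake round-trip stops at the proxy while TLS round-trips reach the real client, so the gap between the two is itself a signal.

    def extra_hop_ms(tcp_handshake_rtt_ms, tls_handshake_rtt_ms):
        # Positive gap = round-trip time the TCP layer never saw,
        # i.e. a hint that something terminated TCP closer to us.
        return max(0.0, tls_handshake_rtt_ms - tcp_handshake_rtt_ms)

    print(extra_hop_ms(12.0, 95.0))   # -> 83.0 ms of unexplained round-trip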


Exactly. It's not like a backup certificate has validity starting at a future date.


Yes, the backup certificate can have validity starting at a future date. You just need to wait till that future date to create it.


Larger storage structures are easier to (thermally) insulate. Because geometry.

But going with larger structures probably means aggregation (fewer of them are built, and further apart). Assuming homes to be heated are staying where they are, that requires longer pipes. Which are harder to insulate. Because geometry.
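The "because geometry" part in rough numbers (illustrative only): heat loss scales with surface area, stored heat with volume, and the ratio between the two behaves very differently for a big tank than for a long pipe.

    import math
    # Spherical-ish tank: area/volume = 3/r, so 10x the radius means
    # 1000x the stored heat but only 100x the surface to insulate.
    for r in (1.0, 10.0):
        area, vol = 4 * math.pi * r**2, (4 / 3) * math.pi * r**3
        print(f"tank  r={r:>4} m: area/volume = {area / vol:.2f} per m")
    # Pipe of radius r: area/volume = 2/r per unit length, and r stays
    # small no matter how long the run is - hence the insulation problem.
    r = 0.1
    print(f"pipe  r={r} m: area/volume = {2 / r:.1f} per m")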


I can't help but wonder how the efficiency compares to generating electricity, running that over wires, and having that run heat pumps.

The conversion to electricity loses energy, but I assume the loss is negligible in transmission, and then modern heat pumps themselves are much more efficient.

And the average high and low in February are 26°F and 14°F according to Google, while modern heat pumps are more energy-efficient than resistive heating above around 0°F. So even around 14–26°F, the coefficient of performance should still be 2–3.


> heat pumps themselves are much more efficient.

For electricity-to-heat conversion, heat pumps are indeed much more efficient than resistive heating, yes. About 4 times more efficient.

In absolute terms, though - that is still only 50% of "Carnot cycle" efficiency.

https://en.wikipedia.org/wiki/Coefficient_of_performance

Similarly, heat-to-electricity conversion is about 50% efficient in the best case:

https://en.wikipedia.org/wiki/Thermal_efficiency

So, in your scenario (heat->electricity conversion, then transmission, then electricity->heat conversion), overall efficiency is going to be 50% * 50% = 25%, assuming no transmission losses and state-of-the-art conversion on both ends.

25% efficiency (a.k.a. 75% losses) is a pretty generous budget to work with. I guess one can cover a small town or a city district with heat pipes and still come out on top in terms of efficiency.
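Back-of-the-envelope version of the "about 4x resistive" and "50% of Carnot" figures, assuming 0 C outside air and 40 C delivery (real installs vary):

    # Carnot limit for a heat pump: COP_max = T_hot / (T_hot - T_cold), in kelvin
    T_hot, T_cold = 313.0, 273.0            # 40 C delivery, 0 C source
    cop_carnot = T_hot / (T_hot - T_cold)   # ~7.8
    cop_real = 0.5 * cop_carnot             # "50% of Carnot" -> ~3.9,
    print(cop_carnot, cop_real)             # i.e. the "about 4x resistive" above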


We've got lots of heating districts around the world to use as examples. They only make sense in really dense areas. The thermal losses and expense of maintaining them make them economically impractical for most areas other than a few core districts in urban centers... Unless you have an excess of energy that you can't sell on the grid.


Geothermal heat is also not that practical in cities: you'd need so many wells so close together that you'd most likely cool the ground down enough over the winter that your efficiency tanks.


I don't understand - what am I missing? The heat pump increases efficiency by having a COP of 2-4, right? Assuming air-to-air and being in, say, Denmark.

Heat (above 100C, say, from burning garbage) to electricity: 50% (theoretical best case)

Electricity to heat (around 40C): 200%-400%

Net win?

The surplus energy comes from the air or the ground.

Yes, you cannot heat back up to the temperature you started with, but for underfloor heating 40C is plenty. And you can get a COP of 2 up to 60C shower water as well.
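Putting the thread's numbers on one line (rough sketch; ignores pipe/transmission losses and whether 50% is actually reachable at these temperatures):

    stored_heat_kwh = 1.0
    electricity = 0.5 * stored_heat_kwh   # ~50% heat-to-electricity, best case
    for cop in (2, 3, 4):
        delivered = electricity * cop     # heat pump pulls the rest from air/ground
        print(f"COP {cop}: {delivered:.1f} kWh of ~40C heat per kWh stored")
    # vs ~1.0 kWh (minus pipe losses) if the stored heat were piped out directly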


Yes, this is exactly why I asked. You need to include the COP in the calculations.


If the heat is stored at high temperature, but the demand (for heating buildings, say) is at lower temperature, it could make sense to generate power, then use that power to drive heat pumps. You could end up with more useful heat energy than you started with, possibly even if you didn't use the waste heat from the initial power generation cycle.

Alternately, if you are going to deliver the heat at low temperature to a district heating system, you might as well use a topping cycle to extract some of the stored energy as work and use the waste heat, rather than taking the second-law loss of just directly downgrading the high-temperature heat to a lower temperature.

High temperature storage increases the energy stored per unit of storage mass. If the heating is resistive, you might as well store at as high a temperature as is practical.

Gas-fired heat pumps have been investigated for heating buildings; they'd have a COP > 1.

I am interested in whether there are any cheap small-scale external combustion engines available (steam? Stirling? ORC?)


It can be anything between easy and impossible depending on the temperature difference. 200 C steam is easy with a commercially available turbine, but 50 C is really hard. There are things like Stirling engines that can capture waste heat, but they've never really been commercially viable.

There's no way around it: We have to respect entropy.
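That entropy point in numbers - the Carnot ceiling on turning heat into work, assuming roughly 20 C heat rejection:

    # eta_max = 1 - T_cold / T_hot (temperatures in kelvin)
    T_cold = 293.0   # ~20 C rejection
    for t_hot_c in (200, 50):
        T_hot = t_hot_c + 273.0
        print(f"{t_hot_c} C source: at most {1 - T_cold / T_hot:.0%} of the heat becomes work")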


I think the big cost difference is the geothermal generators to convert the heat back into electricity. More of a cost issue than an efficiency one.


Existing district heating systems can be large.

I live in Denmark; the power plant that heats my home is about 30 km away. There are old power plants in between that can be powered up in an emergency.

Yes, building district heating systems that large is difficult and expensive - it wasn't built yesterday, more like 50 years of policy.


> bloated, buggy JavaScript framework

Isn't that the unfortunate status quo? At least the hard requirement for JS, that is.

Google's homepage started requiring this recently. The Linux kernel's git, openwrt, esp32.com, and many, many others now require it too, via the dreaded "Making sure you're not a bot" thing:

https://news.ycombinator.com/item?id=44962529

If anything, github is (thankfully) behind the curve here - at least some basics do work without JS.


> So now I am wondering what will be available once the AI investment implodes.

Memory/RAM. See also:

https://news.ycombinator.com/item?id=45934619


Probably some usable GPUs too.


> Probably some usable GPUs too.

They will find some other bullshit to use them for. Just like the crypto-to-AI transition.


Also, didn't early Arduino heavily borrow from another open-source project, "Processing"?

Processing was/is graphics-centered, and that's where Arduino's term "sketch" comes from, if you ever wondered.

https://en.wikipedia.org/wiki/File:Processing_screen_shot.pn...

https://en.wikipedia.org/wiki/File:Arduino_IDE_-_Blink.png


"Wiring", which constitutes Arduino's primary API surface, was taken wholesale from Hernando Barragán's 2003 master's thesis project. It was a fork of processing for microcontrollers and was not written by the Arduino team: Massimo Banzi, David Cuartielles, David Mellis, Gianluca Martino, and Tom Igo.


I'll have to dig around - I think I still have one of the original Wiring boards from around 2006 (maybe?).


Yeah, the software side is basically only an IDE, a build system, a package manager, and another system API (basically an alternative to libc). Which is useful for C++, but far from being non-replaceable.


> produce more mature technology ... DDR3/4

...except the current peak in demand is mostly driven by the build-out of AI capacity.

Both inference and training workloads are often bottlenecked on RAM speed, and trying to shoehorn older/slower memory tech in there would require a non-trivial amount of R&D to go into widening the memory bus on CPUs/GPUs/NPUs, which is unlikely to happen - those are in very high demand already.
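Rough illustration with per-channel numbers (standard 64-bit channels; exact parts vary):

    # Peak bandwidth per channel = transfer rate (MT/s) x 8 bytes
    ddr3_1600 = 1600 * 8 / 1000    # ~12.8 GB/s
    ddr5_6400 = 6400 * 8 / 1000    # ~51.2 GB/s
    print(ddr5_6400 / ddr3_1600)   # -> 4.0: you'd need ~4x the channels or bus
                                   #    width in DDR3 to match one DDR5 channel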


Even if AI stuff does really need DDR5, there must be lots of other applications that would ideally use DDR5 but can make do with DDR3/4 if there's a big difference in price.


I mean, AI is currently hyped, so the most natural and logical assumption is that AI drives these prices up primarily. We need compensation from those AI corporations. They cost us too much.


It is still an assumption.

