bjackman's comments | Hacker News

I was a bit skeptical at first coz my experience so far has been that Nix's weak typing quite rarely causes me problems.

(Likely because NixOS options tend to be how I communicate between pieces of code, and those have a good enough type system).

But actually I think, as something that you just use occasionally like in your JSON example, this could be really cool. I wouldn't wanna use it as "gradual typing" in the sense of aiming to gradually type a whole body of code; I think I'd be more interested in "selective typing".

Just occasionally when you get that itch of "ugh this will fail really ungracefully if the inputs are bad" then you just apply this where it's needed.
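To illustrate the itch (a loose analogy in Python rather than Nix, with a made-up port config as the example): the point is a single targeted check at the one boundary where garbage input would otherwise blow up far downstream, not types everywhere.

    import json

    # Loose analogy in Python rather than Nix: one targeted check at the boundary
    # where bad input would otherwise fail ungracefully; everything else stays untyped.
    def load_port_config(raw: str) -> dict:
        cfg = json.loads(raw)
        port = cfg.get("port")
        if not isinstance(port, int) or not 0 < port < 65536:
            raise ValueError(f"'port' must be an integer in 1-65535, got {port!r}")
        return cfg

    print(load_port_config('{"port": 8080}'))  # {'port': 8080}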


Getting Google Docs to be a Word alternative was an order of magnitude easier than getting GCP to be an AWS competitor.

Now that AWS has two serious competitors (and some non-serious ones), privately funding another one just seems impossible to me. Who is gonna chip in tens of billions of dollars to fund "that, but European, and 15 years from now"?

I think the only ways we can get serious Euroclouds is some combination of:

1. EU intervention (nasty regulations and expensive subsidies).

2. People using non-equivalent products (Europeans have to use lower-level infra and do a lot more ops in-house). This part would have its upsides anyway TBH.


> I think the only ways we can get serious Euroclouds is some combination of

Just mandate EU countries' public administration to rely exclusively on EU cloud solutions. That doesn't need to be done at once.

This would create enough of a captive market to start the homegrown industry.

> Europeans have to use lower-level infra and do a lot more ops in-house

To be honest, every large enough company would benefit from doing a little bit of that.


> Just mandate EU countries' public administration to rely exclusively on EU cloud solutions.

This happens already in some areas and it is not cheaper or better. The EU funds national clouds where public institutions use them. What does it mean? VMware with Tanzu or OpenStack. And then some services thrown in to offer some S3 like buckets and that's it. The rest has to be built by the beneficiaries. Servers? Brand names like Lenovo/HP/Dell. Storage? Brand names like NetApp, HP, Dell, Lenovo, 3Par, IBM and the list goes on. Networking? Cisco (mostly), HP/Juniper. Firewalls? Cisco/Fortinet/PaloAlto/CheckPoint/etc.

Basically an enterprise setup masquerading as a cloud offering.

And even if there were EU-wide offerings for such a cloud, there's too much money at stake to let institutions from one country buy services from another.


> not cheaper or better

Yeah. The EuroCloud will always be dramatically more expensive and much much worse. Anyone who's claiming otherwise is living a fantasy. The only argument that makes sense is "but it's worth it".

(One detail: it will be much worse at the margins where current clouds actually compete. But actually I suspect only a small number of our customers actually exist at that margin. I think a lot of people are just massively overpaying for their cloud platform and so they might be fine with a EuroCloud anyway. This is why you hear stories today like "we switched to Hetzner, halved our bill, and it works exactly as well as the AWS products we used to use".)

> And then some services thrown in to offer some S3 like buckets and that's it. The rest has to be built by the beneficiaries.

Ditto, a fully featured EuroCloud is not gonna happen. Again, it has to be worth this cost.

> Brand names like NetApp, HP, Dell, Lenovo, 3Par, IBM and the list goes on.

This is the only part where I disagree. I think it's OK if the EuroCloud is built out of US hardware (like how the AmeriCloud is very far from free of Chinese hardware). Obviously that presents a significant supply-chain security risk, but still, the _really_ important thing is sovereign operations: who has the keys to the DC.


> This happens already in some areas and it is not cheaper or better.

The goal is not for it to be cheaper or better. The goal is to have money spent on domestic actors. Will some of them provide a dogshit service? Sure. Just like it happened in the US. Time will sort things out.

> What does it mean? VMware with Tanzu or OpenStack. And then some services thrown in to offer some S3 like buckets and that's it. The rest has to be built by the beneficiaries. Servers? Brand names like Lenovo/HP/Dell. Storage? Brand names like NetApp, HP, Dell, Lenovo, 3Par, IBM and the list goes on. Networking? Cisco (mostly), HP/Juniper. Firewalls? Cisco/Fortinet/PaloAlto/CheckPoint/etc.

You need to cut this purity bullshit where Europe must own all the stack from the foundry, or do nothing. What you're building is an unwinnable battle.


And it's all basically US tech made in China. The irony.

> Who is gonna chip in tens of billions of dollars to fund "that, but European, and 15 years from now"?

Dieter Schwarz might. At least he has the money and is trying 'something' with stackit. But he probably won't see the result in 15 years.


I have a PiKVM attached to my PC at home, so at some point I'm thinking of setting up a crazy demand-scaling scheme where my underpowered homelab nodes can power up the PC when they need to run a heavy workload.

You can do this more easily with Wake-on-LAN. See https://danielpgross.github.io/friendly_neighbor/howto-sleep... for prior art.

WoL is easier if it works. My experience has been that with consumer hardware it usually doesn't. Debugging it is more hassle than it's worth IMO. I think if you don't have a proper mobo with a BMC then just throwing in a KVM is easier on average.

WoL is reliable when waking from sleep/suspend. I have yet to see consumer HW that can do it from poweroff. But if suspend is fine, all you have to do is configure your firmware to turn on after power loss (so you always boot into your OS) and in your OS enable WoL and configure suspend on inactivity as you wish. It should be reliable and failures would default to "on", not off.
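And the waking side (e.g. whatever node decides the PC needs to be up) is just a UDP broadcast. A minimal sketch in Python; the MAC address here is a placeholder, not anything from the setup described above:

    import socket

    def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
        # A WoL "magic packet" is 6 bytes of 0xFF followed by the target MAC
        # repeated 16 times, sent as a UDP broadcast (port 7 or 9 by convention).
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(packet, (broadcast, port))

    send_wol("aa:bb:cc:dd:ee:ff")  # placeholder MAC of the machine to wake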

This sounds like a fun idea to explore!

I have lately taken to this approach when I raise bugs:

1. Fully human-written explanation of the issue with all the info I can add

2. As an attachment to the bug (not a PR), explicitly noted as such, an AI slop fix and a note that it makes my symptom go away.

I've been on the receiving end of one bug report in this format and I thought it was pretty helpful. Even though the AI fix was garbage, the fact that the patch made the bug go away was useful signal.


I think the spirit of the policy also allows you to write your own words in your own language and have an AI translate it.

(But also, for a majority of people, old-fashioned Google Translate works great.)

(Edit: it's actually an explicit carve-out.)


There's more to it than just coding vs building, though.

For a long time in my career now I've been in a situation where I'd be able to build more if I was willing to abstract myself and become a slide-merchant/coalition-builder. I don't want to do this though.

Yet, I'm still quite an enthusiastic vibe-coder.

I think it's less about coding vs building and more about tolerance for abstraction and politics. And I don't think there are that many people who are so intolerant of abstraction that they won't let agents write a bunch of code for them.


My experience is that basic generic agents are useless, but an agent with extensive prompting about your use case is extremely valuable.

In my case using these prompts:

https://github.com/masoncl/review-prompts

took things from "pure noise" to a world where, if you say there's a bug in your patch, people's first question will be "has the AI looked at it?"

FWIW in my case the AI has never yet found _the_ bug I was hunting for but it has found several _other_ significant bugs. I also ran it against old commits that were already reviewed by excellent engineers and running in prod. It found a major bug that wasn't spotted in human review.

Most of the "noise" I get now just leads me to say "yeah, I need to add more context to the commit message". E.g. the model will say "you forgot to do X" when X is out of scope for the patch and I'm doing it in a later one. So ideally the commit messages should mention this anyway.


Yeah, I also regularly bring a razor blade (for my old-fashioned safety razor). I've been caught once, but it's worth the risk of wasting a few minutes.

If this were really about security, it would be set up so that deliberately breaking the rules for the sake of minor convenience actually had some consequences.

If I wanted to blow up a plane with liquid explosives I would just... Try a few times. If you get caught, throw the bottle away, get on the plane, and try again next week.


I don't think it's possible? You could imagine some sort of certificate scheme where the govt issues a thing that says to a 3rd party "we certify this person is 18 but in a way that doesn't reveal who they are". You could also implement that in a way where, even if the 3rd party reports the details of an authorisation to the govt, the govt can't say who was involved in that auth.

But in the latter case, the system is wildly open to abuse coz nobody can detect if every teenager in the country is using Auth Georg's cert. The only way to make that detectable is if the tokens let you pseudonymise Georg, at which point it's no longer private.
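For concreteness, a toy sketch of that failure mode (Python, using the third-party `cryptography` package; this is the naive non-pseudonymous version of the scheme above, not any real proposal):

    # Toy sketch only, not a real credential protocol. The token attests an
    # attribute and says nothing about the holder, so every copy of Georg's
    # token verifies just as well as the original.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    govt_key = Ed25519PrivateKey.generate()   # issuer's signing key
    claim = b"over-18"                        # deliberately contains no identity
    georgs_token = (claim, govt_key.sign(claim))

    def site_accepts(token, issuer_public_key) -> bool:
        claim, signature = token
        try:
            issuer_public_key.verify(signature, claim)
            return True
        except Exception:
            return False

    # Verifies for Georg... and for every teenager he forwards it to.
    print(site_accepts(georgs_token, govt_key.public_key()))  # True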

The answer is to leave this shit to parents. It's not the government's job. It's not the government's business.


> The answer is to leave this shit to parents.

See Australia. Many parents helped their children evade the ban.

https://www.crikey.com.au/2025/12/04/social-media-ban-parent...


That should be the parent’s choice, no?

That's what got us into the current public health emergency. It is a luxury we cannot afford if we are to stand a chance of getting out. https://www.bmj.com/content/392/bmj.s125

If the parents don’t see it as an issue then the state should not be forcing its way in, especially considering the harm to privacy and free speech. This is an area where reasonable people can disagree as to what the correct parenting approach is, so the state should not enforce a particular approach. If anything they should focus on making it easier for parents to set their own limits at the device level.

...except when the harm spreads far beyond the family.

"We have reached an inflection point. We are facing nothing short of a societal catastrophe caused by the fact that so many of our children are addicted to social media." says the Lord proposing the UK ban.


Same moral panic that we had over TV, video games, and Pokemon cards.

The fact that this time we have a ban says you're wrong.

No, it says that the government is overreaching in a desperate attempt to regain control over public opinion. (They will fail.)

> It is a luxury we cannot afford

Privacy is a luxury we cannot afford?

When it was a luxury we couldn't afford because of "terrorism" I was doubtful. Now that it's a luxury we cannot afford because of the "public health" effects of teenagers using TikTok, I am starting to struggle to identify a good-faith argument.


No, parents' choice is the luxury we cannot afford.

Wait, but you get 63 MB/s down from Steam?

My internet is pretty good; I can easily saturate my (rather dated) WiFi at about 30 MB/s. But Steam downloads are extremely slow for me (can't remember the numbers, but much less).

I always assumed Valve themselves were just stingy with bandwidth. Something else funny going on?


Peering between your ISP and Valve is likely saturated.

Considering Valve has an incentive to make downloads fast (= more revenue), it's likely your ISP is being stingy in this case.


And ISPs in most of the western world have no incentives to fix it, instead trying to scam Valve and others to get paid twice.

The business model of consumer ISPs is "take the money and don't deliver" so this tracks.

I would have assumed Valve uses some third-party CDN, but yeah, this would make sense I guess.

It might have detected the wrong country/city for you. Check Settings -> Downloads -> Region

Otherwise it's just your WiFi being patchy. I think Steam does a "friendly" bulk download: it slows down before the connection is saturated, to avoid disconnecting your wife/mum/siblings watching YouTube or on a videoconference.


> Wait, but you get 63 MB/s down from Steam?

I usually (but not always) saturate my downlink with Steam downloads... even back when I was a Comcast customer and paying for ~180 MB/s (~1500 Mbit/s) asymmetric service.

I believe that I have noticed that smaller games (~a few hundred MB or maybe a GB or two) will download quite a bit slower than large games, but I'm not very confident in that observation.


Comcast has Steam server(s) colocated within their network in my immediate area. I’ve observed that less popular downloads tend to connect to external servers in the next state over.

> I believe that I have noticed that smaller games (~a few hundred MB or maybe a GB or two) will download quite a bit slower than large games, but I'm not very confident in that observation.

You can see that on the HellDivers screenshot: it takes 20 seconds to reach 500 Mbps, because TCP takes a while to adjust the bandwidth and is very conservative. TCP and home computers are not designed to make use of gigabit connections.


> ...it takes 20 seconds to reach 500 Mbps, because TCP takes a while to adjust the bandwidth and is very conservative. TCP and home computers are not designed to make use of gigabit connections.

I very much doubt that that is an artifact of TCP. I can go from nothing to 10 Gbit/s symmetric in 100-200 ms when running iperf3 over TCP against another one of my LAN hosts.

And, back when I had a 1.5 Gbit/s Internet downlink, it took far, far less than 20 seconds to reach >500 Mbit/s for big Steam downloads and other such well-provisioned things.
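Back-of-envelope (Python; the 20 ms RTT and 1460-byte MSS here are assumptions, not measurements): classic slow start doubles the congestion window every RTT, so getting to 500 Mbit/s from a cold connection takes on the order of ten RTTs, not tens of seconds.

    import math

    # Rough estimate only; the RTT and MSS below are assumptions, not measurements.
    rtt_s = 0.02         # assume a 20 ms round trip to the CDN node
    mss_bytes = 1460     # typical TCP payload per segment
    target_bps = 500e6   # the 500 Mbit/s figure from the screenshot

    segments_per_rtt = target_bps / 8 / mss_bytes * rtt_s   # ~856 segments in flight
    rtts_needed = math.ceil(math.log2(segments_per_rtt))    # doublings from cwnd = 1
    print(f"~{rtts_needed} RTTs, ~{rtts_needed * rtt_s * 1000:.0f} ms of ramp-up")
    # => roughly 10 RTTs / 200 ms, so classic slow start alone can't account for a
    #    20-second ramp; per-connection or server-side throttling is a likelier culprit.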


For me it varies a lot. Sometimes I get 800 Mbit and sometimes like 80 Mbit. Mostly closer to 800, tho.
