Hacker News | virtualritz's comments

The two main issues with Lego sets I have are:

The huge amount of specialized parts that are pretty useless for basic building. They're basically adornments.

Prices are insane, i.e. the quantity, and often the quality (as in: how good the set is play-/construction-time-wise), you get for the money is just shite.

I usually buy competitors. Here in Germany, e.g., Blue Bricks have opened some stores, so that's where I take my nephew.

Their sets are much more like the Lego I grew up with: they use existing, more basic parts in creative ways, so the specialized/adornment meaning is derived from context, not from the part itself/its shape.

Which also requires more imagination from the kids playing with this.


I feel like the special parts have eased off; it was pretty bad with the Bionicle stuff (which, ironically, is apparently what saved Lego from financial difficulties), but I'd say all of the recent sets I've got (I get one a year for Christmas), going back at least 5 years, have been made up of relatively generic parts, with the odd little special bit for flair.

I'm so curious why this was downvoted. Please tell me!

Unless I'm missing something, I think this describes box filtering.

It should probably mention that this is only sufficient for some use cases, but not for high-quality ones.

E.g. if you were to use this for rendering font glyphs into something like a static image (or slowly rolling titles/credits), you probably want a higher-quality filter.


What type of filter do you mean? Unless I'm misunderstanding/missing something, the approach described doesn't go into the details of how coverage is computed. If the input image is only simple lines whose coverage can be computed correctly (I don't know how to do this for curves?), then what's missing?

I'd be interested in how feasible complete 2D UIs using dynamically GPU-rendered vector graphics are. I've played with vector rendering in the past, using a pixel shader that more or less implemented the method described in the OP. It could render the Ghostscript tiger at good speeds (like single-digit milliseconds at 4K, IIRC), but there is always an overhead to generating vector paths, sampling them into line segments, dispatching them, etc. Building a 2D UI based on optimized primitives instead, like axis-aligned rects and rounded rects, will almost always be faster, obviously.
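
For the optimized-primitives route, one common trick (just an illustration of the idea, not necessarily what was used here; all names and values are made up) is to derive per-pixel coverage of e.g. a rounded rect from its signed distance and use that directly as alpha, with no path flattening needed:

    # Illustrative only: rounded-rect coverage from a signed distance function,
    # the kind of thing a fragment shader would do per pixel (plain Python here).
    import math

    def rounded_rect_sdf(px, py, cx, cy, half_w, half_h, radius):
        # Signed distance from (px, py) to a rounded rect centred at (cx, cy);
        # negative inside, positive outside.
        qx = abs(px - cx) - (half_w - radius)
        qy = abs(py - cy) - (half_h - radius)
        outside = math.hypot(max(qx, 0.0), max(qy, 0.0))
        inside = min(max(qx, qy), 0.0)
        return outside + inside - radius

    def coverage_from_distance(d, aa_width=1.0):
        # Map the distance to [0, 1] alpha over a roughly one-pixel band.
        return min(1.0, max(0.0, 0.5 - d / aa_width))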

Text rendering typically adds pixel snapping, possibly using a bytecode interpreter, and often adds sub-pixel rendering.


> What type of filter do you mean? […] the approach described doesn’t go into the details of how coverage is computed

This article does clip against a square pixel’s edges, and sums the area of what’s inside without weighting, which is equivalent to a box filter. (A box filter is also what you get if you super-sample the pixel with an infinite number of samples and then use the average value of all the samples.) The problem is that there are cases where this approach can result in visible aliasing, even though it’s an analytic method.

When you want high quality anti-aliasing, you need to model pixels as soft leaky overlapping blobs, not little squares. Instead of clipping at the pixel edges, you need to clip further away, and weight the middle of the region more than the outer edges. There's no analytic method and no perfect filter, there are just tradeoffs that you have to balance. Often people use filters like Triangle, Lanczos, Mitchell, Gaussian, etc. These all provide better anti-aliasing properties than clipping against a square.
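
To make that concrete, here is a minimal sketch (illustrative only, not from the article; inside(x, y) is a hypothetical point-in-shape test returning 0.0 or 1.0) comparing a plain box average over the pixel square with a tent-weighted average over a wider, overlapping footprint:

    # Illustrative only: supersampled coverage with either a box or a tent
    # (triangle) weighting. `inside` is a hypothetical point-in-shape test.
    def filtered_coverage(inside, px, py, n=16, radius=1.0, tent=True):
        total_w, total = 0.0, 0.0
        for i in range(n):
            for j in range(n):
                # Sample offsets relative to the pixel centre, in (-radius, radius).
                dx = (i + 0.5) / n * 2.0 * radius - radius
                dy = (j + 0.5) / n * 2.0 * radius - radius
                if tent:
                    # Tent weight: strongest at the centre, fading to zero at the rim.
                    w = (1.0 - abs(dx) / radius) * (1.0 - abs(dy) / radius)
                else:
                    # Box filter: every sample inside the pixel square counts equally.
                    w = 1.0 if (abs(dx) <= 0.5 and abs(dy) <= 0.5) else 0.0
                total_w += w
                total += w * inside(px + 0.5 + dx, py + 0.5 + dy)
        return total / total_w

Because the tent (or Gaussian, Mitchell, Lanczos, ...) footprint extends past the pixel square, neighbouring pixels' filters overlap, which is what suppresses the aliasing a per-pixel box average lets through, at the cost of a slightly softer image.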


> If the input image is only simple lines whose coverage can be correctly computed (don't know how to do this for curves?) then what's missing?

Computing pixel coverage accurately isn't enough for the best results. Using it as the alpha channel for blending foreground over background colour is the same thing as sampling a box filter applied to the underlying continuous vector image.

But often a box filter isn't ideal.

Pixels on the physical screen have a shape and non-uniform intensity across their surface.

RGB sub-pixels (or other colour basis) are often at different positions, and the perceptual luminance differs between sub-pixels in addition to the non-uniform intensity.

If you don't want to tune rendering for a particular display, there are sometimes still improvements from using a non-box filter.

An alternative is to compute the 2D integral of a filter kernel over the coverage area for each pixel. If the kernel has separate R, G, B components, to account for sub-pixel geometry, then you may require another function to optimise perceptual luminance while minimising colour fringing on detailed geometries.

Gamma correction helps, and fortunately that's easily combined with coverage. For example, slowly rolling titles/credits will shimmer less at the edges if gamma is applied correctly.
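
A minimal sketch of what that means in code (assuming a simple 2.2 gamma; real sRGB uses a piecewise curve): convert to linear light, blend with coverage as alpha, convert back.

    # Illustrative only: per-channel blend of foreground over background,
    # using coverage as alpha, done in linear light (gamma-correct) versus
    # naively on the gamma-encoded values.
    GAMMA = 2.2

    def to_linear(c):
        return c ** GAMMA

    def to_encoded(c):
        return c ** (1.0 / GAMMA)

    def blend_gamma_correct(fg, bg, coverage):
        lin = to_linear(fg) * coverage + to_linear(bg) * (1.0 - coverage)
        return to_encoded(lin)

    def blend_naive(fg, bg, coverage):
        # This is what tends to make thin strokes look ropey and edges shimmer.
        return fg * coverage + bg * (1.0 - coverage)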

However, these days with Retina/HiDPI-style displays, these issues are reduced.

For example, MacOS removed sub-pixel anti-aliasing from text rendering in recent years, because they expect you to use a Retina display, and they've decided regular whole-pixel coverage anti-aliasing is good enough on those.


7 USD/day? That's ~$200/month -- isn't that just very expensive? I am probably missing something.

E.g. a Terragonlabs subscription is $25/month for 3 concurrent tasks and $50/month for 10.


You can optimize things. I have a GitHub action that starts/stops a fast Google Cloud VM for our builds. It only gets used about 3 minutes per build. We maybe have a few dozen builds per month. So that's a few hours of run time. The rest of the time the VM is stopped and not billed (except for storage, which is cents per month at most). It's a simple Debian VM so it boots in about 20 seconds.

VMs are expensive if you leave them running 24/7, but the logic to start/stop them is pretty easy. There's no need to leave them running.
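
As a rough sketch of that start/stop logic (not the actual setup; the instance name, zone and build command here are made up), it boils down to wrapping the build step in two gcloud calls:

    # Rough sketch only; instance name, zone and build command are made up.
    # The VM is only billed while it's running (plus cents/month for the disk).
    import subprocess

    VM, ZONE = "build-runner", "europe-west3-a"

    def gcloud(*args):
        subprocess.run(["gcloud", "compute", *args], check=True)

    gcloud("instances", "start", VM, "--zone", ZONE)
    try:
        gcloud("ssh", VM, "--zone", ZONE, "--command", "cd repo && make build")
    finally:
        # Stop the VM even if the build fails, so it doesn't keep billing.
        gcloud("instances", "stop", VM, "--zone", ZONE)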

Anyway, you need to balance this against the payoff. Agentic coding is useful enough that it beats spending your own time. And that includes waiting time for the relatively slow/underpowered containerized environments that some tools would use by default. I use codex web and codex cli (with a qemu vm so I can use the --yolo flag). Codex web is a bit limited with memory and CPU. Some of my slower builds are taking forever there. To the point where most of the time it consumes is just waiting for these builds to happen.

With a bit of plumbing, you can do things like the author describes pretty easily. IMHO this needs to be better integrated into tools. With GitHub you have the option to run your own runners. I don't think Codex/Claude web have similar options currently. But with the CLI versions, you can get more creative if you know your tools. And if you don't, use LLMs to drive them for you. It's mostly just about expressing what you want and how you want it.


If you are already paying $200-500/m… and you are doing the work of 10 people… I can totally see the value.

I’ll check the Terragonlabs option.

Lots of options for startups right now, selling pickaxes! I’m waiting for a better terminal experience, personally. I can’t deal with 30+ poorly named windows. I need to be able to search for that one thread I was working on yesterday…


> I’m waiting for a better terminal experience, personally.

Same! Even colored tabs would go a long way for me.


What people do not understand is that this really depends on what language you target. So if I write Rust, then you sound like an AI hype booster, but if I write TS or Python, maybe not so much.

From my experience Opus is only so-so at writing Rust. But it's great at something like TS, because the amount of code it has been trained on is probably orders of magnitude bigger for the latter language.

I still use Codex high/xhigh for planning, and once the plan is sound I give it to Opus (also for planning). Opus's plan I then feed back to Codex for sign-off. It takes on average an additional 1-2 rounds of this before Opus makes a plan that Codex says _really_ ticks all the boxes of the plan Codex made itself and which we gave to Opus to start with ...

That tells you something.

Also, when Opus is "done" and claims so, I let Codex check. Usually it has skipped the last 20% (stubs/todos/logic bugs), so Codex makes a fixup plan that then again goes through the Codex<->Opus back-and-forth loop for 2-3 rounds before Codex gives the thumbs up. Only after that has Opus managed to do what the initial plan (the one Codex made in the first place) said.

When I have Opus write TS code (or Python) I do not have to jump through those hoops. Sometimes one round of back and forth is needed but never three, as with Rust.


TL;DR: Not only is the new choice of font unfortunate at best -- the formatting reveals another level of amateurishness, very unbefitting the earnestness one might assume the sender of the letter wants to convey. ;)

Basic typography: a paragraph starts with an indent if there is no blank line (of whatever height) after the previous one, and if it is not the first paragraph. In short: indent or add vertical space between paragraphs. Never both.

The old TNR version gets this right: if you put blank lines between paragraphs, you don't indent.

Then the date -- dangling god-knows-where, aligned with nothing.

In the old version the only formatting faux pas is the alignment of 'Sincerely', and, if you're picky, the outdent of the seal in the top left is a tad much (outside the optical axis).


I just found out that https://annas-archive.li/ is blocked by my German internet provider (SIM.de/Drillisch). I usually use a VPN but I had it switched off temporarily to watch Fallout (Prime Video won't let you watch through a VPN). Only when I switched Mullvad back on could I open the site.

I didn't know German providers do this.


Yeah this is actually quite nefarious, as it is a private organization that decides what sites get blocked, with no legal oversight.

- https://de.wikipedia.org/wiki/Clearingstelle_Urheberrecht_im...

- https://netzpolitik.org/2024/cuii-liste-diese-websites-sperr...

It's a DNS-based block, so overriding your default DNS server is enough to circumvent it. I think DNS over HTTPS also works.


Pretty sure this was a thing in the past, but currently it has to be a court order.


The Wikipedia article seems to concur with you, although this seems to be a voluntary policy by the CUII; the members could still decide not to wait for any court orders and block whatever they want.


https://netzpolitik.org/2025/die-cuii-gibt-auf-fuer-netzsper...

It's not voluntary anymore, it's required.


> [Translated from the German:] Therefore, the Bundesnetzagentur has reportedly asked the CUII to have the review of allegedly copyright-infringing sites carried out by the courts in the future.

This is not a requirement, they're just asking. The BNetzA apparently just wants to not deal with it.

See also the recent CCC talk: https://media.ccc.de/v/39c3-cuii-wie-konzerne-heimlich-webse...


I think it's a DNS-level block. I've been using NextDNS (free plan), and one side effect (besides automatic ad blocking) is that it doesn't have those blocks. Highly recommend it -- there are alternative services as well; I just saw NextDNS recommended here.

Alternative: https://archive.ph/2025.12.21-050644/https://annas-archive.l...


Someone compiled a list of blocked domains (by probing different DNS servers):

https://cuiiliste.de/

This is also how, for example, RT is blocked in Germany.


In that vein, I am trying to find out why searching for

    alextud popcorntime
which should trivially yield http://github.com/alextud/PopcornTimeTV results in anything but that one particular URL in every search engine: Google, Kagi, DuckDuckGo, Bing

They even find a fork of that particular repo, which in turn links back to it, but refuse to show the result I want. Haven't found any DMCA notices. What is going on?


They have marked the repo as noindex (or GitHub is forcing a noindex header).

It's returning a noindex flag, so every SERP is correctly doing what the repo has asked for.

That is... except for Brave! I checked on my SearX instance and it still showed up in Brave's results.
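
For anyone who wants to check themselves, a quick sketch (standard library only, with crude string matching instead of real HTML parsing) that looks for an X-Robots-Tag header or a robots meta tag:

    # Quick-and-dirty check for a noindex signal on the repo page.
    import urllib.request

    url = "https://github.com/alextud/PopcornTimeTV"
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req) as resp:
        robots_header = resp.headers.get("X-Robots-Tag", "")
        body = resp.read().decode("utf-8", errors="replace")

    print("X-Robots-Tag:", robots_header or "(none)")
    print("meta robots noindex:", 'name="robots"' in body and "noindex" in body)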


Try Yandex search, thank me later.

It has 0 censorship - regarding pirated content at least.


Very interesting. The security page does show up on Kagi at #6.

I wonder if GitHub flags it to not be indexed or something.


Also true in the Netherlands, I hate these copyright freaks constantly trying to restrict access.


Was also shocked to see that (Berlin, Telekom here).


They also block some foreign "news" like Russia Today last time I checked.


As someone who has written C and C++ for 35 years and Rust for a decade now, the last five years professionally, I don't see the point.

Even when I need what C does "well" from the author's POV, I can just wrap my code in a big unsafe {} in Rust.

I still get a bunch of things from the language that are not safety- but ergonomics-related, and that I wouldn't want to miss anymore.


> One practical problem I ran into early on is that Google Maps is surprisingly bad at categorising cuisines. A huge share of restaurants are labelled vaguely (“restaurant”, “cafe”, “meal takeaway”)

It's not only that; cuisines are also difficult to label, as certain countries simply do not exist for Google when it comes to that.

I recall that last year I wanted to change the category of "Alin Gaza Kitchen", my former favourite falafel place in Berlin (closed now, unfortunately), from the nondescript "Middle-Eastern" to "Palestinian".

I assumed this was available for any country/cuisine, like "German", "Italian" or "Israeli". But "Palestinian" didn't exist as a category.


The vague categorisation is likely on purpose, done by the business owner thinking that it would attract more clients.

You can change it yourself and Google will accept it, but if the owner is adamant they will change it back.


Of course not. There is no such country, after all.


Gotta love an “I personally know better than pretty much everybody else on earth” post

https://en.wikipedia.org/wiki/International_recognition_of_P...


By that logic, 'Basque', 'Cantonese', 'Cajun', and 'Tex-Mex' shouldn't exist either


Yeah, and because of this, for example, Claude Code is down too, because the auth goes through CF. F*cking marvelous, the decentralized web ...


Typing any letter into the search field makes it lose focus. So every letter typed there requires another click first to re-focus.

