Thanks to the HN community - the video is how I ended up here, and it's one of the few social-media-esque sites I bother visiting. It taught me a pile of things about coding and CS that weren't in my mechanical engineering degree.
I thought this video was a lot better than the Veritasium video. The Veritasium video was awkward. I think they tried to follow the formula from the (excellent) blue LED video that performed so well, but it just didn't work.
Disagree, I thought the Veritasium video was fantastic. You understand how the machine works in depth, the history of its development and the challenges it encountered, and hear from people actively working on it. It's a science lesson and a history lesson. As usual, they keep the video engaging and focused on the story, while still keeping a lot of depth with the science. It's a great format.
The whole “exploding tiny drops of metal” in the middle of this is just Looney Tunes. This machine is literally insane, and two of the companies I am long-long on would be completely fucked without it.
IIRC from the Veritasium video[0], there is actually some hydrogen gas flowing at quite a high speed through the laser chamber to carry away the tin debris so that it does not accumulate on the mirrors.
Seeing this news story made me briefly fear that they’d found a way to replace this glorious mechanism. Thankfully not. In fact, they’re going to shoot more droplets, more often!
Yes, it was crazy when I first heard about it: "wait, what? They shoot it in mid-air?" And that was before I found out they do that like 30k times a second.
But now 100k times a second apparently. Humans are amazing.
You have a machine that’s basically a clean room inside and one of the parts is essentially electrosputtering tin but then throwing all the tin away and using the EM pulse from the sputter to do work.
Oh and can you build it so it can run hundreds or thousands of hours before being cleaned? Thanks byyyyyyyyeeeeee!
The thing I didn't understand after watching that video was why you need such an exotic solution to produce EUV light. We can make lights no problem in the visible spectrum, and we can make X-ray machines easily enough that every doctor's office can afford one. What is it specifically about those wavelengths that is so tricky?
The efficiency of X-ray tubes is proportional to voltage, and is about 1% at 100 kV. That is the ballpark for garden-variety X-ray machines. But the wavelength of interest for lithography corresponds to a voltage of only about 100 V, so the efficiency would be 10 parts per million.
The source in the ASML machine produces something like 300-500 W of light. With an X-ray tube this would require an electron beam with 50 MW of power. Focused into a microscopic dot on the target, that would not work for any duration of time. Even if it did, the cooling and getting rid of unwanted wavelengths would be very difficult.
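To sanity-check those numbers, here is a small Go sketch of the back-of-envelope arithmetic above (the 1%-at-100-kV proportionality rule is taken from the comment, not from any datasheet):

```go
package main

import "fmt"

func main() {
	// Rule of thumb from the comment above: X-ray tube efficiency is
	// roughly proportional to accelerating voltage, ~1% at 100 kV.
	const effAt100kV = 0.01
	const refVoltage = 100_000.0 // volts

	// 13.5 nm EUV photons correspond to roughly 100 V.
	voltage := 100.0
	eff := effAt100kV * voltage / refVoltage
	fmt.Printf("efficiency at %.0f V: %.0f ppm\n", voltage, eff*1e6) // 10 ppm

	// Electron-beam power needed to get 500 W of light out at that efficiency.
	lightOut := 500.0 // watts
	beamPower := lightOut / eff
	fmt.Printf("required beam power: %.0f MW\n", beamPower/1e6) // 50 MW
}
```

So the 50 MW figure follows directly from the stated efficiency scaling.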
A light bulb does not work because it is not hot enough. I suppose some kind of RF driven plasma could be hot enough, but considering that the source needs to be microscopic in size for focusing reasons, it is not clear how one could focus the RF energy on it without also ruining the hardware.
So, they use a microscopic plasma discharge which is heated by the focused laser. It "only" requires a few hundred kilowatts of electricity to power and cool the source itself.
The issue isn't in generating short-wavelength light, it's in focusing it accurately enough to print a pattern with trillions of nanoscale features with few defects. We can't really use lenses, since every material we could use is opaque to high-energy photons, so we need to use mirrors, which still absorb a lot of the light energy hitting them. Now, this only explains why we need all the crazy stuff that ASML puts in its EUV machines to use near-x-ray light, but not why they don't use x-ray or higher-energy photons. I believe the answer to this is just that the mirrors they can use for EUV are unacceptably bad for anything higher, but I'm not sure.
Photoresist too. X-rays are really good at passing through matter, which is a bit of a problem when the whole goal is for them to be absorbed by a 100-nanometer-thick film. They tend to ionize stuff, which is actually a mechanism for resist development, but X-ray energies are high enough that the reactions become less predictable. They can knock electrons into neighboring resist regions or even knock them out of the material altogether.
It really is the specific wavelength. Higher or lower is easier. But EUV has tricky properties which make it feasible for lithography (although just barely, if you have a look at the optics) but hard to produce at high intensities.
Specifically, what makes x-rays easy to generate are these: https://en.wikipedia.org/wiki/Characteristic_X-ray In essence, smashing electrons into atoms allows you to ionize the inner shell of an atom and when an electron drops down from an outer shell, the excess energy is shed as high-energy photons. This constrains the energy range of X-ray tubes ("smash electron into metal") to wavelengths well below 13.5nm.
(These emission lines are also what is being used in x-ray spectroscopy to identify elements)
There are no normal x-ray mirrors. The only way to focus them is to use special grazing mirrors where the x-rays hit them almost parallel to the surface.
As I understand it, primarily because, due to their high energy, x-rays interact very differently with materials[1]. Primarily they get absorbed, so it is very difficult to make mirrors or lenses, which are crucial in lithography for redirecting and focusing the light on a specific minuscule point on the wafer.
The primary method is to rely on grazing-angle reflection, but that by definition only allows you a tiny deflection at a time, nothing like a parabolic mirror or whatnot.
All of these problems, or equivalents, still exist in EUV. The litho industry had to more or less rethink the source and scanner because it went from all lenses to all mirrors in EUV. This is also why low-NA and high-NA EUV scanners were different phases.
As I hear it, the decision had a large economic component related to masks and even OPC.
100%. EUV barely works. X-ray litho takes all the issues with EUV and cranks them up to 11. It will take comparable effort to EUV, if not more, to get X-ray litho up and running, and I'm not aware of anyone approaching this with anywhere near the level of investment that ASML (and others) have pumped into developing EUV tech. We may get there eventually as a species, but we're a ways off.
If you think it barely works now, you should've seen it when we first started. Availability of a machine was "fuck you"% and the whole system was held together by duct tape, bubblegum and hope. Compared to that the current system is entirely controllable.
Oh, for sure, via herculean effort and investment we have created ourselves a functioning and economical process!
We do actually have functioning processes for X-ray litho today, but we'll need that same level of investment and effort (or more) to make them economical.
Stochastic effects become a bigger and bigger problem. At some point (EUV) a single photon has enough energy to ionize atoms, causing a cascade that causes effects to bloom outside of the illumination spot.
> The key advancements in Monday's disclosure involved doubling the number of tin drops to about 100,000 every second, and shaping them into plasma using two smaller laser bursts, as opposed to today's machines that use a single shaping burst.
This is covered in that video. Did they let him leak their Q1 plans?
It has been covered before in other videos[0] that this is their roadmap to higher power, so I'm also not sure what they have announced now that wasn't previously announced.
From the first video I thought they had already shipped this, but it sounds like they were describing what their new model was.
This seems like a product with a very very long sales pipeline, so I wonder if they work on pre-orders with existing customers but announce delivery milestones only as they come?
A personal finance app called “Predictable” that takes chaotic sloshes of money and turns them into steady streams of cash. You tell it “I receive this much money weekly/monthly/on the first and fifteenth/when Mercury is in retrograde, and I have these expenses at other various intervals” and it evens everything out into a constant weekly flow of cash by, essentially, buffering. Any overflow or underflow goes to a “margin” bucket which basically tells you how much you could spend right now and still have enough for all your recurring expenses.
Currently making it just for myself but curious if anyone else would find it useful.
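To make the buffering idea concrete, here's a minimal Go sketch of the core smoothing step (all names, numbers, and intervals are invented for illustration): every recurring flow, whatever its cadence, gets normalized to an equivalent constant weekly rate, and the sum is the steady stream the app would show.

```go
package main

import "fmt"

// Flow is a recurring cash flow: positive amounts are income,
// negative amounts are expenses.
type Flow struct {
	Name         string
	Amount       float64
	IntervalDays float64 // days between occurrences
}

// weeklyRate converts a recurring flow into an equivalent constant weekly amount.
func weeklyRate(f Flow) float64 {
	return f.Amount / f.IntervalDays * 7
}

func main() {
	flows := []Flow{
		{"salary", 4000, 30.44},    // monthly (average days per month)
		{"rent", -1500, 30.44},     // monthly
		{"groceries", -120, 7},     // weekly
	}
	steady := 0.0
	for _, f := range flows {
		steady += weeklyRate(f)
	}
	// Anything above this in the account balance would land in the
	// "margin" bucket described above.
	fmt.Printf("smoothed weekly flow: %.2f\n", steady)
}
```

The actual app would also need the buffering/reserve logic (hold back enough balance to cover each expense until its next due date), which this sketch leaves out.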
I'd love something like that, with the added ability to basically split the margin bucket into multiple buckets (one for me, one for the wife).
The main issue I've had with budgeting apps continues to be pulling in up-to-date transaction data, which is necessary to know how much I can spend right now. There always seem to be problems with the data syncing. Apple Card is the worst, as you can only pull transactions via Wallet on device.
I wish we could just use a single bank account at the Fed. The banking network is absolutely shit and there's basically a 1% tax on everything that goes to the rich for no good reason.
Budgeting was soooo much easier with cash – it's maddening all the data is there for real-time personal finance but it can't be accessed.
I don't see how this article could possibly support the argument that C# is slower than GDScript
It compares several C# implementations of raycasts, never directly compares with GDScript, blames the C# performance on GDScript compatibility, and has a struck-out section advocating dropping GDScript to improve C# performance!
Meanwhile, Godot's official documentation[1] actually does explicitly compare C# and GDScript (unlike the article, which just blames GDScript for C#'s numbers): it claims that C# wins in raw compute while having higher overhead calling into the engine.
My post could have been a bit longer. It seems to have been misunderstood.
I use GDScript because it’s currently the best supported language in Godot. Most of the ecosystem is GDScript. C# feels a bit bolted-on. (See: binding overhead) If the situation were reversed, I’d be using C#. That’s one technical reason to prefer GDScript. But you’re free to choose C# for any number of reasons, I’m just trying to answer the question.
At least in my case, I got curious about the strength of /u/dustbunny's denouncement of Godot+C#.
I would have put it as a matter of preference/right tool, with GDScript's tighter engine integration contrasted with C#'s stronger tooling and available ecosystem.
But with how it was phrased, it didn't sound like expressing a preference for GDScript+C++ over C# or C#++, it sounded like C# had some fatal flaw. And that of course makes me curious. Was it a slightly awkward phrasing, or does C# Godot have some serious footgun I'm unaware of?
Makes sense! I think dustbunny said it best: C# is “not worth the squeeze” specifically in Godot, and specifically if you’re going for performance. But maybe that’ll change soon, who knows. The engine is still improving at a good clip.
I tried it for a while, it was fun explaining it to people, but didn’t actually help much. I ended up blocking all time wasters except HN, which is almost monochrome anyway.
I set grayscale in my phone's quiet hours settings. It's to help me sleep, rather than to reduce my phone usage. It means if I wake in the night and look at my phone, I'm not blasted by colors. Or if I stay up a bit later than usual. I find it beneficial although probably not revolutionary.
I did try setting my phone to grayscale during the day but didn't see much if any benefit there.
This was my experience as well. While it is less straining on the eyes, that was the only benefit I saw. I didn't see a reduction in screen time or less anxiety about notifications.
They mean the dependencies. If you’re testing system A whose sole purpose is to call functions in systems B and C, one approach is to replace B and C with mocks. The test simply checks that A calls the right functions.
The pain comes when system B changes. Oftentimes you can’t even make a benign change (like renaming a function) without updating a million tests.
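Here's a minimal Go sketch of that brittle style (the interface and names are invented): the test reaches into the mock to assert exactly which function A called, which is precisely what breaks when B is renamed.

```go
package main

import "fmt"

// Notifier is system B's interface; system A's only job is to call into it.
type Notifier interface {
	Send(msg string)
}

// MockNotifier records calls so the test can assert on them — coupling
// the test to which functions A calls, not to any observable behavior.
type MockNotifier struct{ calls []string }

func (m *MockNotifier) Send(msg string) { m.calls = append(m.calls, msg) }

// Greet is system A: it just delegates to B.
func Greet(n Notifier, name string) {
	n.Send("hello " + name)
}

func main() {
	mock := &MockNotifier{}
	Greet(mock, "world")
	// The "test": assert A called the right function with the right args.
	// Rename Send on the real Notifier and this breaks, along with every
	// other test that mocks it the same way.
	if len(mock.calls) != 1 || mock.calls[0] != "hello world" {
		panic("Greet did not call Send as expected")
	}
	fmt.Println("mock-style test passed")
}
```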
Tests are only concerned with the user interface, not the implementation. If System B changes, that means that you only have to change your implementation around using System B to reflect it. The user interface remains the same, and thus the tests can remain the same, and therefore so can the mocks.
I think we’re in agreement. Mocks are usually all about reaching inside the implementation and checking things. I prefer highly accurate “fakes” - for example running queries against a real ephemeral Postgres instance in a Docker container instead of mocking out every SQL query and checking that query.Execute was called with the correct arguments.
> Mocks are usually all about reaching inside the implementation and checking things.
Unfortunately there is no consistency in the nomenclature used around testing. Testing is, after all, the least understood aspect of computer science. However, the dictionary suggests that a "mock" is something that is not authentic, but does not deceive (i.e. not the real thing, but behaves like the real thing). That is what I consider a "mock", but I'm gathering that is what you call a "fake".
Sticking with your example, a mock data provider to me is something that, for example, uses in-memory data structures instead of SQL. Tested with the same test suite as the SQL implementation. It is not the datastore intended to be used, but behaves the same way (as proven by the shared tests).
> checking that query.Execute was called with the correct arguments.
That sounds ridiculous and I am not sure why anyone would ever do such a thing. I'm not sure that even needs a name.
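For what it's worth, here's a minimal Go sketch of the "in-memory data structures instead of SQL" idea from above (interface and names invented): the fake satisfies the same interface as the real store and is exercised by the same contract check.

```go
package main

import (
	"errors"
	"fmt"
)

// UserStore is the interface both the real SQL-backed store
// and the fake would satisfy.
type UserStore interface {
	Save(id, name string) error
	Get(id string) (string, error)
}

// FakeUserStore is not the datastore intended for production, but it
// behaves the same way, backed by an in-memory map instead of SQL.
type FakeUserStore struct{ users map[string]string }

func NewFakeUserStore() *FakeUserStore {
	return &FakeUserStore{users: map[string]string{}}
}

func (s *FakeUserStore) Save(id, name string) error {
	s.users[id] = name
	return nil
}

func (s *FakeUserStore) Get(id string) (string, error) {
	name, ok := s.users[id]
	if !ok {
		return "", errors.New("not found")
	}
	return name, nil
}

// checkStoreContract is the shared test suite: run it against the fake
// AND the real SQL implementation to prove they behave the same way.
func checkStoreContract(s UserStore) error {
	if err := s.Save("1", "ada"); err != nil {
		return err
	}
	name, err := s.Get("1")
	if err != nil || name != "ada" {
		return fmt.Errorf("round trip failed: %q, %v", name, err)
	}
	return nil
}

func main() {
	if err := checkStoreContract(NewFakeUserStore()); err != nil {
		panic(err)
	}
	fmt.Println("fake passes the shared contract")
}
```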
If something is difficult or scary, do it more often. Smaller changes are less risky. Code that is merged but not deployed is essentially “inventory” in the factory metaphor. You want to keep inventory low. If the distance between the main branch and production is kept low, then you can always feel pretty confident that the main branch is in a good state, or at least close to one. That’s invaluable when you inevitably need to ship an emergency fix. You can just commit the fix to main instead of trying to find a known good version and patching it. And when a deployment does break something, you’ll have a much smaller diff to search for the problem.
There's a lot of middle ground between "deploy to production 20x a day" and "deploy so infrequently that you forget how to deploy". Like, once a day? I have nothing against emergency fixes, unless you're doing them 9-19x a day. Hotfixes should be uncommon (neither rare nor standard practice).
I’m excited to see how this turns out. I work with Go every day and I think Io corrects a lot of its mistakes. One thing I am curious about is whether there is any plan for channels in Zig. In Go I often wish IO had been implemented via channels. It’s weird that there’s a select keyword in the language, but you can’t use it on sockets.
Wrapping every IO operation into a channel operation is fairly expensive. You can get an idea of how fast it would work now by just doing it, using a goroutine to feed a series of IO operations to some other goroutine.
It wouldn't be quite as bad as the perennial "I thought Go was fast, why is it slow when I spawn a full goroutine and multiple channel operations to add two integers together a hundred million times" question, but it would still be a fairly expensive operation. See also the fact that, before the recent iterator support was added, Go had a fairly sensible iteration idiom: doing a range across a channel... as long as you don't mind running a full channel operation and internal context switch for every single thing being iterated, which in fact quite a lot of us do mind.
(To optimize pure Python, one of the tricks is to ensure that you get the maximum value out of all of the relatively expensive individual operations Python does. For example, it's already handling exceptions on every opcode, so you could win in some cases by using exceptions cleverly to skip running some code selectively. Go channels are similar; they're relatively expensive, on the order of dozens of cycles, so you want to make sure you're getting sufficient value for that. You don't have to go super crazy, they're not like a millisecond per operation or something, but you do want to get value for the cost, by either moving non-trivial amount of work through them or by taking strong advantage of their many-to-many coordination capability. IO often involves moving around small byte slices, even perhaps one byte, and that's not good value for the cost. Moving kilobytes at a time through them is generally pretty decent value but not all IO looks like that and you don't want to write that into the IO spec directly.)
> One thing I am curious about is whether there is any plan for channels in Zig.
The Zig std.Io equivalent of Golang channels is std.Io.Queue[0]. You can do the equivalent of:
type T = any
fooChan := make(chan T)
barChan := make(chan T)
select {
case foo := <-fooChan:
    // handle foo
case bar := <-barChan:
    // handle bar
}
Obviously not quite as ergonomic, but the trade-off of being able to use any IO runtime, and to do this style of concurrency without a runtime garbage collector, is really interesting.
Odin doesn't (and won't ever according to its creator) implement specific concurrency strategies. No async, coroutines, channels, fibers, etc... The creator sees concurrency strategy (as well as memory management) as something that's higher level than what he wants the language to be.
Which is fine by me, but I know lots of people are looking for "killer" features.
There's a GC library around somewhere, but I doubt anyone uses it. Manual memory management is generally quite simple, as long as you aren't using archaic languages.
At least Go didn't take the dark path of having async/await keywords. In C# that is a real nightmare, and it's necessary to use sync-over-async anti-patterns unless you're willing to rewrite everything. I'm glad Zig took this "colorless" approach.
Where do you think the Io parameter comes from? If you change some function to do something async, suddenly you require an Io instance. I don't see the difference between having to modify the call tree to be async vs. modifying the call tree to pass in an Io token.
Synchronous Io also uses the Io instance now. The coloring is no longer "is it async?" it's "does it perform Io"?
This allows library authors to write their code in a manner that's agnostic to the Io runtime the user chooses, synchronous, threaded, evented with stackful coroutines, evented with stackless coroutines.
Except that now your library code has lost context on how it runs. If you meant it to be sync and the caller gives you a multi-threaded Io, your code can fail in unexpected ways.
This is exactly the problem, thread safety. The function being supplied with std.Io needs to understand what implementation is being used to take precautions with thread safety, in case a std.Io.Threaded is used. What if this function was designed with synchrony in mind, how do you prevent it taking a penalty guarding against a threaded version of IO?
The function being called has to take into account thread safety anyway, even if it doesn't do IO. This is an entirely orthogonal problem, so I can't really take it seriously as a criticism of Zig's approach. Libraries in general need to be designed to be thread-safe, or document otherwise, regardless of whether they do IO, because a calling program could easily spin up a few threads and call them multiple times.
> What if this function was designed with synchrony in mind, how do you prevent it taking a penalty guarding against a threaded version of IO?
You document it and state that it will take a performance penalty in multithreaded mode? The same as any other library written before this point.
One of the harms Go has done is to make people think its concurrency model is at all special. “Goroutines” are green threads and a “channel” is just a thread-safe queue, which Zig has in its stdlib https://ziglang.org/documentation/master/std/#std.Io.Queue
A channel is not just a thread-safe queue. It's a thread-safe queue that can be used in a select call. Select is the distinguishing feature, not the queuing. I don't know enough Zig to know whether you can write a bit of code that says "either pull from this queue or that queue when they are ready"; if so, then yes they are an adequate replacement, if not, no they are not.
Of course even if that exact queue is not itself selectable, you can still implement a Go channel with select capabilities in Zig. I'm sure one exists somewhere already. Go doesn't get access to any magic CPU opcodes that nobody else does. And languages (or libraries in languages where that is possible) can implement more capable "select" variants than Go ships with that can select on more types of things (although not necessarily for "free", depending on exactly what is involved). But it is more than a queue, which is also why Go channel operations are a bit to the expensive side, they're implementing more functionality than a simple queue.
> I don't know enough Zig to know whether you can write a bit of code that says "either pull from this queue or that queue when they are ready"; if so, then yes they are an adequate replacement, if not, no they are not.
Thanks for giving me a reason to peek into how Zig does things now.
Zig has a generic select function[1] that works with futures. As is common, Blub's language feature is Zig's comptime function. Then the io implementation has a select function[2] that "Blocks until one of the futures from the list has a result ready, such that awaiting it will not block. Returns that index." and the generic select switches on that and returns the result. Details unclear tho.
Getting a simple future from multiple queues and then waiting for the first one is not a match for Go channel semantics. If you do a select on three channels, you will receive a result from one of them, but you don't get any future claim on the other two channels. Other goroutines could pick them up. And if another goroutine does get something from those channels, that is a guaranteed one-time communication and the original goroutine now can not get access to that value; the future does not "resolve".
Channel semantics don't match futures semantics. As the name implies, channels are streams, futures are a single future value that may or may not have resolved yet.
Again, I'm sure nothing stops Zig from implementing Go channels in half-a-dozen different ways, but it's definitely not as easy as "oh just wrap a future around the .get of a threaded queue".
By a similar argument it should be observed that channels don't naively implement futures either. It's fairly easy to make a future out of a channel and a couple of simple methods; I think I see about 1 library a month going by that "implements futures" in Go. But it's something that has to be done because channels aren't futures and futures aren't channels.
(Note that I'm not making any arguments about whether one or the other is better. I think such arguments are actually quite difficult because while both are quite different in practice, they also both fairly fully cover the solution space and it isn't clear to me there's globally an advantage to one or the other. But they are certainly different.)
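As a concrete version of "it's fairly easy to make a future out of a channel": here's a minimal Go sketch (names invented, int-valued for brevity) of a one-shot future built from a closed channel. Note the asymmetry with a channel receive: the resolved value is never consumed, so any number of goroutines can read it.

```go
package main

import (
	"fmt"
	"sync"
)

// Future is a one-shot value built from a channel: Set resolves it once,
// Get blocks until it resolves and then always returns the same value.
type Future struct {
	once sync.Once
	done chan struct{}
	val  int
}

func NewFuture() *Future {
	return &Future{done: make(chan struct{})}
}

// Set resolves the future; later calls are no-ops thanks to sync.Once.
func (f *Future) Set(v int) {
	f.once.Do(func() {
		f.val = v
		close(f.done)
	})
}

// Get waits for resolution. Receiving from a closed channel never blocks,
// so the value can be read any number of times — unlike a channel receive,
// which would hand the value to exactly one receiver.
func (f *Future) Get() int {
	<-f.done
	return f.val
}

func main() {
	f := NewFuture()
	go f.Set(42)
	fmt.Println(f.Get(), f.Get()) // prints "42 42"
}
```

Going the other way, from a future back to full channel semantics (one-time hand-off, streams, select fairness), is the part that takes real work.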
> channels aren't futures and futures aren't channels.
In my mind a queue.getOne ~= a <- on a Go channel. Idk how you wrap the getOne call in a Future to hand it to Zig's select but that seems like it would be a straightforward pattern once this is all done.
I really do appreciate you being strict about the semantics. Tbh the biggest thing I feel fuzzy on in all this is how go/zig actually go about finding the first completed future in a select, but other than that am I missing something?
I think the big one is that a futures based system no matter how you swing it lacks the characteristic that on an unbuffered Go channel (which is the common case), successfully sending is also a guarantee that someone else has picked it up, and as such a send or receive event is also a guaranteed sync point. This requires some work in the compiler and runtime to guarantee with barriers and such as well. I don't think a futures implementation of any kind can do this because without those barriers being inserted by either the compiler or runtime this is just not a guarantee you can ever have.
To which, naturally, the response in the futures-based world is "don't do that". Many "futures-based worlds" aren't even truly concurrently running on multiple CPUs where that could be an issue anyhow, although you can still end up with the single-threaded equivalent of a race condition if you work at it, though it is certainly more challenging to get there than with multi-threaded code.
This goes back to, channels are actually fairly heavyweight as concurrency operations go, call it two or three times the cost of a mutex. They provide a lot, and when you need it it's nice to have something like that, but there's also a lot of mutex use in Go code because when you don't need it it can add up in price.
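A tiny Go sketch of the unbuffered-channel rendezvous described above (all names invented): the send statement cannot return until the receiver has taken the value, so a completed send doubles as a synchronization point.

```go
package main

import "fmt"

func main() {
	ch := make(chan string) // unbuffered: send and receive must rendezvous
	done := make(chan struct{})
	go func() {
		// This send does not return until main's receive below is ready,
		// so when it returns, the sender KNOWS the value was picked up.
		ch <- "hello"
		fmt.Println("send completed: receiver has the value")
		close(done)
	}()
	fmt.Println(<-ch)
	<-done
}
```

A future-based setup has no equivalent guarantee: resolving a future says nothing about whether anyone has observed the result yet.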
Thanks for taking the time to respond. I will now think of Channels as queue + [mutex/communication guarantee] and not just queue. So in Go's unbuffered case (only?) a Channel is more than a 1-item queue. Also, in Go's select, I now get that channels themselves are hooked up to notify the select when they are ready?
This is not a "true Scotsman" argument. It's the distinctive characteristic of Go channels. With a threaded queue where you can call ".get()" from another thread, but that operation is blocking and you can't try any other queues, you can't write:
select {
case result := <-resultChan:
    // whatever
case <-ctx.Done():
    // our context either timed out or was cancelled
}
or any more elaborate structure.
Or, to put it a different way, when someone says "I implement Go channels in X Language" I don't look for whether they have a threaded queue but whether they have a select equivalent. Odds are that there's already a dozen "threaded queues" in X Language anyhow, but select is less common.
Again note the difference between the word "distinctive" and "unique". No individual feature of Go is unique, of course, because again, Go does not have special unique access to Go CPU opcodes that no one else can use. It's the more defining characteristic compared to the more mundane and normal threaded queue.
Of course you can implement this a number of ways. It is not equivalent to a naive condition wait, but probably with enough work you could implement them more or less with a condition, possibly with some additional compiler assistance to make it easier to use, since you'd need to be combining several together in some manner.
I’m no Google fan, but deprecating XSLT is a rare opportunity to shrink the surface area of the web’s “API” without upsetting too many people. It would be one less thing for independent browsers like Ladybird to worry about. Thus actually weakening Google’s chokehold on the browser market.
We are largely the nerds that other nerds picked on for being too nerdy. I’d bet that a hugely disproportionate share of all the people in the world who care about this subject at all are here in these conversations.
Actual normies don’t think of the Internet at all. They open Facebook The App on their iPads and smartphones and that’s the internet for them.
Passionate nerds giving a shit can build a far more rosy world than whatever that represents, so I don’t see why anyone should give a damn if this happens to be somewhat niche.