A free electron laser (FEL) uses free electrons (electrons not attached to a nucleus) as a lasing medium to produce light. The light would shine through a mask and expose photoresist more or less just like the light from ASML’s tin plasma contraption, minus the tin plasma. FELs, in principle, can produce light over a very wide range of wavelengths, including EUV and even shorter.
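For a sense of scale, the output wavelength of a planar-undulator FEL follows a simple on-axis resonance relation. Here's a back-of-the-envelope sketch in Python; the undulator period, K value, and beam energy below are illustrative assumptions, not any actual proposed EUV FEL design:

    # Rough on-axis resonance condition for a planar-undulator FEL:
    #   wavelength ~= (lambda_u / (2 * gamma**2)) * (1 + K**2 / 2)
    # All parameter values below are illustrative assumptions.
    lambda_u = 0.02       # undulator period: 2 cm (assumed)
    K = 1.0               # undulator deflection parameter (assumed)
    E_beam_MeV = 540.0    # electron beam energy in MeV (assumed)

    gamma = E_beam_MeV / 0.511                                  # Lorentz factor (m_e c^2 = 0.511 MeV)
    wavelength = (lambda_u / (2 * gamma**2)) * (1 + K**2 / 2)
    print(f"gamma ~ {gamma:.0f}, output wavelength ~ {wavelength * 1e9:.1f} nm")
    # -> roughly 13.4 nm, i.e. in the EUV band

The point is just that, in principle, a roughly half-GeV electron beam and a centimeter-scale undulator period land you in the 13.5 nm neighborhood; real designs juggle many more constraints than this toy calculation.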
That DARPA thing is a maskless electron beam lithography system: the photoresist is exposed by hitting it directly with electrons.
Electrons have lots of advantages: they have mass, so much less kinetic energy is needed to achieve short wavelengths. They have charge, so they can be accelerated electrically and steered electrically or magnetically. And there are quite a few maskless designs, which save the enormous expense of producing a mask. (Maskless lithography would also let a factory put different chips on different wafers, which no one currently does. And you need a maskless technique to make masks in the first place.) Twenty to thirty years ago there were direct-write electron-beam research fabs, making actual chips at resolutions comparable to or better than the current generation of ASML gear, built at costs accessible to research universities.
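To put rough numbers on the "much less kinetic energy" point, here's a quick back-of-the-envelope comparison (a non-relativistic de Broglie estimate; 13.5 nm is just the usual EUV wavelength, the rest are textbook constants):

    # Energy needed to reach a ~13.5 nm wavelength: photon vs. electron
    # (non-relativistic de Broglie estimate, for illustration only).
    h = 6.626e-34      # Planck constant, J*s
    m_e = 9.109e-31    # electron mass, kg
    c = 2.998e8        # speed of light, m/s
    eV = 1.602e-19     # joules per electron-volt
    lam = 13.5e-9      # target wavelength in meters (EUV)

    E_photon = h * c / lam                    # photon energy at 13.5 nm
    E_electron = h**2 / (2 * m_e * lam**2)    # electron KE for the same de Broglie wavelength
    print(f"photon:   {E_photon / eV:.0f} eV")            # ~92 eV
    print(f"electron: {E_electron / eV * 1e3:.0f} meV")   # ~8 meV

In practice e-beam tools run at tens of keV, where the electron wavelength is down in the picometers, so wavelength isn't the resolution limit at all; the electron optics and the resist are.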
But electrons have one huge disadvantage: because they are charged, they repel each other. So a bright electron beam naturally spreads out, and multiple parallel beams deflect each other. And electrons get stuck in electrically nonconductive photoresists, building up a (hopefully temporary) surface charge that interferes with subsequent electron beams.
All of that makes e-beam lithography slow, which is why those research fabs from the nineties weren't mass-producing supercomputers.
What bandwidth limitations are you referencing? My understanding is that deep EUV lithography is limited by chromatic aberration, so the narrow bandwidth of a single-beam FEL would be an advantage. If you need more bandwidth, you can chirp it. Is the bandwidth too high?
They mean bandwidth as in the rate at which one can expose the resist using an electron beam, because they've confused two different technologies. See my other reply.
P.S. Can you usefully chirp an FEL? I don’t know whether the electron sources that would be used for EUV FELs can be re-tuned quickly enough, nor whether the magnet arrangements are conducive to perturbing the wavelength. But relativistic electron beams are weird and maybe it works fine. Of course, I also have no idea why you would want to chirp your lithography light source.
I don't think it's strictly chirping, but there are methods to achieve that sort of time/bandwidth trade-off with FELs. I've seen references to it pop up in high-speed imaging, though the details of anything that fast and small are quite outside my expertise. I wasn't sure why you would want high bandwidth either, hence my confusion.
From what I understand, this does not actually affect Google. They were already amortizing their R&D expenses.
Over long time scales (and big-company revenue streams), this is sort of a wash. I think this hurts startups a bit more, because the long amortization timescales eat up much-needed cash in the short term.
Microsoft had/has Project Natick, an undersea data center testbed that allegedly had a bunch of benefits. That doesn't seem to have gone anywhere, or at least isn't really scaling up. I'd imagine the ongoing operational costs of space are worse than those of the ocean?
To me, the cost estimates seem a bit off and conflate capital with running costs.
The main benefit for space at the moment seems to be sidestepping terrestrial regulations.
> Microsoft had/has Project Natick, an undersea data center testbed that allegedly had a bunch of benefits. That doesn't seem to have gone anywhere, or at least isn't really scaling up.
I think at the core of this there's a risk analysis. At one point I briefly worked on a team in charge of a company's servers, and there were plenty of stories of things going wrong badly enough that someone had to drive or fly to the data center. The company's data centers were named after the closest airport for this reason, iirc: a little optimization in case things went very wrong, so you always knew which airport you'd have to fly into.
Even if an undersea data center could potentially yield cost benefits, it's also significantly riskier when something does go wrong. How long would it take to physically access a machine? Do you have to bring down other machines to access it? And at scale, something is always going wrong.
To comment on the original post: needless to say, all of this is even more complicated, costly, and time-consuming in space.
It doesn't have to be in the middle of the ocean to be less accessible than a data center on land.
Even if it's in a giant pool next to an existing data center, accessing a machine that's underwater probably takes longer and is more likely to affect the operation of other machines than if it were on dry land.
There are funds that trade on the rebalancing of the indexes and on individual stocks entering and exiting them. While this may offer some yield, you can still get dragged down by large-scale moves in the markets... as seen recently.
While I'm not a fan of the "dark pools", if your "grandma" is a buy-and-hold investor anyway, the price of the asset should be ballpark correct most of the time, since presumably the people doing the trades in the dark pool are rational? I suspect that this setup is more useful if you need short-term stability in the price to set up a complex deal.
While you have documentation about migrating to your platform, you don't seem to have any documented promises around export and leaving your service.
Also, it seems a bit odd to me that the "balance sheet" feature is two paid pricing tiers deep into your service. Isn't that a baseline expectation?
Physical libraries also tend to be the de facto life help desk for a lot of people out there.