The "spiral" type declaration syntax from C is hard to parse, both for humans and machines. That's probably why even C++ is moving away from it:
C:          int foo[5]
Modern C++: array<int, 5> foo
It's easy to criticize simple examples like the one above, since the C++ (or Rust) version is longer than the C declaration, but consider something like this:
char *(*(**foo[][8])())[];
and the idiomatic Rust equivalent:
let foo: Vec<[Option<fn() -> Vec<String>>; 8]> = Vec::new();
The latter can be parsed quite trivially by descending into the type declaration. It's also visible at a glance that the top-level type is a Vec, and you can easily spot the function pointer and its signature.
Another ergonomic aspect of the Rust syntax is that you can easily copy the raw type, without the variable name:
Vec<[Option<fn() -> Vec<String>>; 8]>
While the standalone C type looks like this:
char *(*(**[][8])())[]
which is quite a mess to untangle ;)
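For what it's worth, the classic way to tame such a declaration in C is a chain of typedefs, reading from the inside out. A rough sketch (the typedef names are mine, purely illustrative):

```c
#include <assert.h>

/* Untangling  char *(*(**foo[][8])())[];  step by step: */
typedef char *PtrChar;          /* pointer to char                       */
typedef PtrChar ArrPtrChar[];   /* array (unknown bound) of char*        */
typedef ArrPtrChar *PArr;       /* pointer to such an array              */
typedef PArr Fn();              /* function returning that pointer       */
typedef Fn **PPFn;              /* pointer to pointer to that function   */
typedef PPFn Row[8];            /* array of 8 of those                   */

/* Redeclaring the same object with both spellings is a
   compile-time check: it only compiles if the types match. */
extern char *(*(**foo[][8])())[];
extern Row foo[];
```

So foo is an array of arrays-of-8 of pointers to pointers to functions returning a pointer to an array of pointers to char, which the typedef chain makes readable one layer at a time.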
Also, I think C# is generally closer to Rust than to C when it comes to the type syntax. A rough equivalent to the previous example would be:
var foo = new List<Func<List<string>>?[]>();
I can't deny that "?" is more ergonomic than Rust's "Option<T>", but C# also has a far less expressive type system than Rust or C++, so pick your poison.
Companies being forced to overhaul their interview processes is certainly an unexpected side effect of the rise of LLMs.
On the other hand, encouraging employees to adopt "AI" in their workflows, while at the same time banning "AI" in interviews, seems a bit hypocritical - at least from my perspective.
One might argue that this is about dishonesty, and yes, I agree. However, AI-centric companies apparently include AI usage in employee KPIs, so I'm not sure how much they value the raw/non-augmented skill-set of their individual workers.
Of course, in all other cases, not disclosing AI usage is quite a dick move.
Linux on mobile is fun, but really I want AOSP and its superior security model and SDK.
Now I hate Google as much as the next person, but I also hate all the other Android manufacturers who just don't do better.
Ideally, major manufacturers would all contribute to AOSP to make sure that it runs well with their devices. And then we could install the "AOSP distro" we want, be it GrapheneOS or LineageOS or whatever the fuck we want.
> does anyone know if Huawei is following along with this in their fork?
They suck like all the other manufacturers: they forked as a quick solution, and then decided to go with their own proprietary codebase. If nobody else contributes, why would they make it open source?
What I see from the Linux experience is that the only way this works is to have a copyleft licence and a multitude of contributors. That way it belongs to everybody, and it moves too fast for any single entity to write a proprietary competitor on their own. But AOSP is not that: it's under a permissive licence, and only Google meaningfully contributes to it.
> Ideally, major manufacturers would all contribute to AOSP to make sure that it runs well with their devices. And then we could install the "AOSP distro" we want, be it GrapheneOS or LineageOS or whatever the fuck we want.
I was under the impression that we got that with GSI, including that Google required a device to support GSIs in order to be certified or something like that. Am I misremembering?
Personally, I usually try to pick motherboards that give you access to everything you need via the serial port (UEFI, boot selection, etc).
That's why solutions like this seem a little bit backward to me.
On top of that, all server/desktop OSes I'm familiar with provide better remote-control options after boot (ones that respect UAC) - but maybe I'm simply not the target demographic here.
LaTeX is quite underrated these days. Even though alternatives like Typst are popping up, LaTeX is also pretty convenient and powerful if you get past the crude syntax and obscure compilation errors.
I still remember my disbelief when I found out that I could change my article into a presentation just by changing the document class to "beamer".
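To illustrate, the switch really is that small - a minimal sketch, with the caveat that the body text has to be wrapped in frame environments for beamer to pick it up:

```latex
% \documentclass{article}   % the original article
\documentclass{beamer}      % swap the class to get slides

\begin{document}
\begin{frame}{Introduction}
  Each frame environment becomes one slide.
\end{frame}
\end{document}
```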
These days I usually default to pandoc's markdown, mostly because the raw text is very readable.
Please nobody actually do this. Good presentation slides have almost zero overlap with the corresponding article since they serve completely different purposes. In my field, seeing beamer slides is a huge red flag for an imminent terrible presentation. Slides are an extremely visual medium, and WYSIWYM is a huge hindrance for designing appealing slides.
I disagree. LaTeX is very good at laying out text, and can also (reluctantly) put figures into the text. Anything else is a huge hack (like TikZ), and one constantly runs into crazy limitations such as the fixed-point math and the lack of a decent visual editor. Slides should never have paragraphs of text on them, so the text layout is not very useful there, while the other limitations remain very annoying.
TikZ and Asymptote are more or less the only general-purpose modular illustration markup languages we have around. Anything better is welcome, but graphical editors are not an alternative in some cases.
The whole point is to focus on the data and how it is connected, not on the rendering. You want the ability to model your figures as a program which can be tweaked and extended as needed, used as a framework throughout the deck.
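As a trivial sketch of what "figure as a program" means in TikZ (the node names and data here are made up):

```latex
\documentclass{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
  % Data-driven figure: tweak the list, and the drawing follows.
  \foreach \x/\label in {0/in, 2/hidden, 4/out} {
    \node[draw, circle] (n\x) at (\x, 0) {\label};
  }
  \draw[->] (n0) -- (n2);
  \draw[->] (n2) -- (n4);
\end{tikzpicture}
\end{document}
```

Adding a node or restyling every arrow is a one-line change, which is exactly what a graphical editor makes tedious across a whole deck.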
Drawing every single figure by hand with a graphical tool is not at all something I'd even consider.
I wouldn't say underrated. Literally every single research article in maths and cs, every PhD dissertation and master thesis in these fields too, are written in LaTeX.
Most students, and many researchers, use Overleaf nowadays, though.
> I wouldn't say underrated. Literally every single research article in maths and cs, every PhD dissertation and master thesis in these fields too, are written in LaTeX.
Usage level is not correlated to "rate". Sometimes people use stuff because they have to, not only because they like it. See the Microsoft Word case.
I'd agree that LaTeX has fallen a bit in popularity these days against Typst - but not much in actual usage. It is still the de facto standard for scientific and technical document typesetting.
One reason is that many journals supply LaTeX templates. And I find them easier to apply compared to their Word templates. I wonder how much support Typst has from these publishers, considering its relatively young age.
KeenWrite basically transforms Markdown -> X(HT)ML -> TeX -> PDF, although it uses ConTeXt instead of LaTeX for typesetting because ConTeXt makes separating content from presentation a lot easier.
I'd even argue that Anubis is universally superior in this domain.
A sufficiently advanced web scraper can build a statistical model of fingerprint payloads that are categorized by CF as legit and change their proxy on demand.
The only person who will end up blocked is the regular user.
There is also a huge market of proprietary anti-bot solvers, not to mention services that charge you per captcha solution. Usually it's just someone who managed to crack the captcha and is generating solutions automatically, given that the response time is typically a few hundred milliseconds.
This is a problem with every commercial Anti-bot/captcha solution and not just CF, but also AWS WAF, Akamai, etc.
As someone who has a lot of experience with (not AI related) web scraping, fingerprinting and WAFs, I really like what Anubis is doing.
Amazon, Akamai, Kasada and other big players in the WAF/Antibot industry will charge you millions for the illusion of protection and half-baked javascript fingerprint collectors.
They usually calculate how "legit" your request is based on ambiguous factors, like the vendor name of your GPU (good luck buying flight tickets in a VM) or how anti-aliasing is implemented for your fonts/canvas. Total bullshit. Most web scrapers know how to bypass it. Especially the malicious ones.
But the biggest reason why I'm against this kind of system is how it reinforces the browser monoculture. Your UA is from Servo or Ladybird? You're out of luck.
That's why the idea of choosing a purely browser-agnostic way of "weighing the soul" of a request resonates strongly with me.
Keep up the good work!
Thanks! I'm going out of my way to make sure smaller browsers like Pale Moon aren't locked out when I add reputation into the equation. One of my prototypes that would work in concert with other changes works in links too :)
It's quite shocking to me how many people already told me to disable the annoying "single line" AI-completion if I ever were to try out a JetBrains IDE.
> AI services are expensive to provide, because they tend to be processor-intensive, but competition between vendors is a likely reason for JetBrains introducing a free tier earlier this month
If it's so expensive, why do they force it on everyone? Sure, a lot of folks want support for this, but enabling it by default is just annoying for their long-time users. Not to mention the costs of full AI-completion, I hope they don't get the idea of also enabling that by default.