As long as you can get the Rust code to compile, it's about the same speed. The issue is that rustc is only available on a limited set of platforms (indeed, lack of rustc has killed off entire hardware architectures in popular distros, in a bit of tail wagging the dog), rustc changes in breaking ways (adding new features) every 3 months, and current Rust culture is all bleeding-edge types, so any Rust code you encounter in the wild will require curl https://sh.rustup.rs | sh rather than building with the year-old Rust toolchain from your distro's repos.
What good is speed if you cannot compile? C has both. Maybe in another decade Rust will have settled down, but for now wrangling all the incompatible Rust versions makes C the far better option. And no, setting Cargo versions doesn't fix this. It's not something you'd run into writing Rust code within a company, but it's definitely something you run into trying to compile other people's Rust code.
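For reference, the pinning mechanism being dismissed here is a rust-toolchain.toml file (plus the rust-version MSRV field in Cargo.toml). A minimal sketch, with a purely illustrative version number:

```toml
# rust-toolchain.toml -- rustup reads this file and installs/uses
# exactly this toolchain for the project (version is illustrative)
[toolchain]
channel = "1.75.0"
```

This pins which rustc a project expects, but it only works where rustup works and that toolchain is installable, which is exactly the platform/repo complaint above.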
I've never run into this issue in the wild; it sounds like a hypothetical. Upgrading your Rust toolchain is ridiculously easy, and using a year-old outdated toolchain is more of a philosophical hang-up than a technical one.
... the import of any foreign-made drone parts is also blocked. This includes things like ESCs and flight controllers, not just items that actually transmit radio signals (like camera modules) and so have traditionally fallen under FCC import regulation.
The best coverage of the FCC's over-reach in attempting to regulate all parts, and their subsequent very slight walking-back of it, is Joshua Bardwell's video: https://youtu.be/Dyr87--SDuc (9m47s)
Almost all the new exceptions are for government users. The only thing relevant to human persons is the back-stepping change that as long as the components of a drone are 60% made in the USA the entire thing can be considered domestic and imported. Or US retail importers can take the risk of saying that a tx'ing camera module has alternate uses, like as a security camera, and try importing it regardless of the ban.
>Italy insists a shadowy, European media cabal should be able to dictate what is and is not allowed online. That, of course, is DISGUSTING
What's equally disgusting is that one corporation has managed to put itself in the position to dictate these things instead. Cloudflare has literally been running a denial of service on congress.gov (and many other important domains) for at least 3 years if you aren't running the latest Chrome, the latest Firefox, or similar.
Like a broken clock that's right twice a day, he's not wrong here. But it's the pot calling the kettle black.
Smartphones are not personal computers. They're shopping/government/etc terminals. You don't, and never have, controlled them, even with root (re: tight integration of the baseband computer, which only the telco has a license for, not you). Their best use, computing-wise, is acting as a wifi hotspot for their cell telco CGNAT connection. The time to stop using them as computers is now, not when your local government passes these laws. Apple is already forcing it, and Google has shown its cards even if it walked them back temporarily.
You don't own your PC either. All modern PCs have a Trusted Platform Module that the authorities can and will use to lock down PCs eventually. Multiplayer games are already using hardware attestation on PC for anti-cheat.
I don't run any OS or games that would require such a thing. My two modern AMD CPUs do have an fTPM, but it is certainly not enabled in my UEFI firmware. My 3 other desktop computers, including the one I'm typing to you on, have no TPM, and indeed this computer doesn't even have an Intel Management Engine (ME). And on my other old Intel CPUs that do have an ME, I disable it and run coreboot.
I can do whatever I want with my PC hardware, and my software remains under my control. This is quite different from cell-phone-based computing platforms.
So, it's not locked down now. I won't lock down the PCs I hand-assembled in the future, and I'd never buy any hardware that was locked down. In fact, I've never bought or used a smartphone because of this.
You are right, but you are misplacing the blame. It's not that you don't own your phone; it's that you don't own your bank account, and the bank can dictate how you access it.
I see your point and it's valid in this context. But both ends of non-ownership contribute. One doesn't own the smartphone and one doesn't own the bank account.
The National Credit Union Federation of Korea (NACUFOK) represents over 800 member-owned unions (https://www.cu.co.kr/english/main.do), and then there is the even larger Saemaul Geumgo (MG) network which operates as community credit cooperatives with millions of members. These people ostensibly own their "bank" accounts.
My issue with this is not that they're getting rid of buggy applications they don't want to support. It's that GTK 2 itself is not buggy and has no problems. There are still plenty of people using GTK 2 applications and I personally wrote a handful of new GTK 2 applications over the last year. GTK 3 wasn't a replacement for GTK 2. Just like GTK 4 isn't a replacement for GTK 3. They're separate things.
Dropping perfectly functional GTK 2 itself: not okay.
Other distros like Arch have well supported unofficial repos that still provide the GTK 2 package when it is needed. Debian does not. It does not hurt Debian at all to keep packaging GTK 2 itself and making it available. It is stable software and there have been no changes for decades besides a handful of compiler args to deal with changing compilers.
And GTK 2 does not need to support HiDPI or native Wayland, just like Wayland programs do not support running on Xorg, or even on other Wayland compositors that don't share the Wayland protocol extensions they use. This is not actually a show-stopper problem. It is consistent with other software's incompatibility with Wayland, and it would only apply to those actually using GTK 2 applications, a demographic that likely isn't adopting Wayland anyway.
I get the appeal of “it still works, so what’s the problem,” but from a distro’s point of view an unmaintained C toolkit with a big ABI surface is a problem. Even if GTK2’s code hasn’t changed, it still has to keep building across new compilers, hardening flags, toolchain transitions, security scans, etc.
Arch can shove that into community repos and say “you’re on your own.” Debian’s promise is different: if it’s in the archive, someone’s implicitly on the hook for it for years. At some point it’s more honest to drop it from main and let people who really want GTK2 own it via containers/Flatpaks/OBS, instead of making everyone else carry an orphaned toolkit forever.
That's just because they wanted to eliminate variables. But they are adding snowy/icy places now.
And given how hard it is for humans to drive safely in the snow/ice, I wouldn't be surprised if they outperform humans in the snow just like they do in their current markets. Especially given that their radar sensors can "see" better in the snow than a human.
I've followed the research a little bit. The general sense I get is that, specifically vehicle control at the edge of traction, software in the lab has far outperformed normal humans for over a decade. The problem is that delivering the "boring" point A to B reliably in all conditions is still unsolved. Relative safety is also a moving target because all the advances in the first bucket are directly applicable to human-driven cars as driver aids.
Yeah, my non-autonomous Toyota can already see the lane lines better than me in the rain. However, that's not too beneficial when no other driver can see the lanes and everyone is just driving to not crash into each other.
It gets real hard when the entire road surface is covered with snow for weeks at a time (like after small 1" snowfalls that might not get cleared immediately the way a heavy snow would). Or when snow buildup on the road edges changes the road-edge location, and cars parked on the side project well into the "nominal" GPS-derived lanes. Lanes which human drivers won't be using: they'll be using emergent lanes defined by flocking behavior. I haven't seen any evidence that autonomous vehicles can detect or navigate such emergent lanes, marked by nothing except tire tracks and human gut feeling.
Being "better" than human in these situations will cause crashes. The real goal is to drive like a human.
> A series of 13 cameras, six radar sensors, and four lidar sensors dot the Ojai's exterior, and are fitted with onboard heaters to reduce ice buildup and small wipers and fluid to clear away dirt. These features will be critical as Waymo expands beyond warm-weather cities and into gnarlier climates in the northeast United States.
The whole point of this article is that the new vehicle is better suited to more climates.
All the cities above already have service, the expansion in the title refers to the new markets that should (hopefully) be unlocked with this new vehicle.
Glad to hear it and I wish them luck! I was just clearing up the current status of self-driving vehicles re: the headline. For now they do not work in areas where road surfaces and edges get covered by and location changed by snow. They do not work "Around the US".
That sounds reasonable enough. If we can do this in areas that are less likely to suffer inclement weather, does it not make sense to realize the benefit?
People in SF have more money (on average) and so those selling things there price higher. Compare the average salary for $jobdescription in SF vs Minneapolis. The entire (very small) region has been a bubble of higher income and prices for a long time and is getting worse. It feeds on itself like a ratchet.
I'm in the Minneapolis area, and $14 for a burrito seems completely normal. I might look at that price differently if my spacious, modern 2 bed apartment cost more than $2400.
I did. Did you notice that the price increase they're talking about in the article (89%) spanned at least a decade? That seems to align with what I've said in my comment above.
>That's an 89% increase over a decade, and a 32% jump in just two years. San Francisco has the highest indexed concert ticket prices in the nation, roughly 29% above the norm, according to analysis from Tickethold.
Of course prices are increasing everywhere across the USA. I am just pointing out that San Francisco has always been significantly more expensive than everywhere else, even before the last handful of years. And the reason for this is very clear: SF people have been paid more on average, probably because demand for the small area (nice climate-wise, job-availability-wise, etc.) exceeds availability and drives up other prices. And then the ratcheting, like I said.
It's the cost. The high cost of labor, due to the high cost of real estate, due to the limited availability. This is ultimately the cause of both what you've pointed out (the high average salaries in the area) AND the high cost of local goods and services. If we had more affordable real estate, the average income of residents wouldn't be so high, and the costs of other things also wouldn't be so high.
Sure. There's that. But it doesn't progressively enhance. It doesn't even fail gracefully. It's just... nothing without JS. That's bad accessibility. For for-profit and institutional use cases that's fine. But if you're a human person and want to make a website that all human persons around the world can read, it's a bad fit.
Thanks for pointing out a mitigation. I'm confused though. How does "htmx sends a request header HX-Request: true with every request." happen without javascript? And does this imply you need a backend server that understands whatever this header is for the graceful fallback? Ie, it wouldn't work with just nginx...
> How does "htmx sends a request header HX-Request: true with every request." happen without javascript?
It doesn't. If JavaScript is disabled, this header is not sent.
> you need a backend server that understands whatever this header is for the graceful fallback
Yes, as I mentioned in my blog post linked earlier: 'the backend server can use a fairly simple heuristic to figure out that it should respond with a fragment:...The request has a header HX-Request...The request does not have a header HX-History-Restore-Request...If these two conditions are fulfilled, it can respond with a fragment. Otherwise, it can respond with a full page ie <!DOCTYPE html>... and so on.'
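As a concrete illustration, that heuristic is only a couple of header checks. A minimal sketch in Python (the plain dict of headers stands in for whatever your backend framework provides; the header names are the ones quoted above):

```python
def wants_fragment(headers: dict) -> bool:
    """Decide whether to respond with an HTML fragment or a full page."""
    # htmx sends HX-Request: true with every request it makes...
    is_htmx = headers.get("HX-Request") == "true"
    # ...except history restores, which need a full page, not a fragment.
    is_history_restore = headers.get("HX-History-Restore-Request") == "true"
    return is_htmx and not is_history_restore
```

A no-JS browser sends neither header, so it falls through to the full `<!DOCTYPE html>` response, which is the graceful fallback being discussed.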
But...htmx is not really meant to work with just Nginx or other static web servers, it is meant to work with a BFF (backend-for-frontend) that specifically knows how to serve and handle the app in question.
From the criteria you have mentioned so far:
- Works without JavaScript
- Works with a static web server like Nginx
I can only conclude that you are talking about serving static sites with no dynamic interactivity. That's not really what htmx is about. Htmx is more like a simplified way to do SPA-like things.
> Htmx is more like a simplified way to do SPA-like things.
Okay. Then we agree. "Htmx is power tools for using Javascript to alter HTML".
With the implicit premise of starting from javascript you see that as "Power Tools for HTML". Without this premise I see it as "Power Tools for Javascript".
I think this distinction is not only important because I question the premise, but because many people who've talked to me about Htmx are confused about what Htmx is and believe that it works without javascript. This is not Htmx's fault, of course, but it could be made clearer by avoiding easily misinterpreted headlines like this.
>htmx gives you access to AJAX, CSS Transitions, WebSockets and Server Sent Events directly in HTML, using attributes, so you can build modern user interfaces with the simplicity and power of hypertext
I hope you see how the full context can make it sound like it's HTML based and not javascript based, even if, yes, AJAX and WebSockets are JS things.
This headline does not match the article at all. It claims all AI videos are harmful, then immediately goes on to throw this claim out and say old, confused people might be fooled by the videos. That is a very narrow subset of AI videos.
It's not just a pedantic issue. Making these kinds of extremely bold, all-covering claims is par for the course for those who want to find and exploit others' attention for profit. If that's not what the author is doing, the most charitable interpretation I can come up with is that they just have no experience with AI videos and are going off sensational press stories.
They should buy an Nvidia 3060 12GB (~$200 second hand) and try out Wan 2.2 on their home computer. It's really fun to be able to make these kinds of videos on a whim, and they'll get some real lived experience with the subject.