They could have done better. They chose the path of least resistance, putting in the least amount of effort and spending the least amount of resources to accomplish a task.
There's nothing "good" about Electron. Hell, there are even easier ways of getting high-performance cross-platform software out there. Electron was used because it's the default, de facto choice that nobody bothered to research or test to see whether it was the right choice, or even a good one.
"It just works". A rabid raccoon mashing its face on a keyboard could plausibly produce a shippable Electron app. Vibe-bandit development. (This is not a selling point.) People claiming to be software developers should aim to do better.
> They could have done better. They chose the path of least resistance, putting in the least amount of effort and spending the least amount of resources to accomplish a task
You might as well tell reality to do better: the reality of physics (water flows downhill, electricity moves through the best conductor, systems settle where the least energy is required) and the reality of business (companies naturally move toward solutions that cost less time, less money, and less effort).
I personally think that some battles require playing within the rules of the game rather than wishing for new ones. Make something that requires less effort and fewer resources than Electron but is good enough, and people will be more likely to use it.
Shaming the use of Electron? I'll do that every day and twice on Sunday. Same with nonsense websites that waste gigabytes on bloat, spam users with ads, and feed the adtech beast. And I'll lay credit for this monument to enshittification we call the internet at the feet of Google and Facebook and Microsoft.
Using Electron and doing things shittily is a choice. If you're ever presented with a choice between doing something well and not, do the best you can. Electron is never the best choice. It's not even the easiest, most efficient choice. It's the lazy, zero-effort, default, ad hoc choice picked by someone who really should know better but couldn't be bothered to even try.
It might be a strange thing to say, but Java is still a viable alternative route. You can build a nice and fast cross-platform desktop application on it today. The language was designed for this kind of thing. The entry barrier is quite high, though.
As a recent toe-dipper into linux (now running Arch on a powerful minipc and KDE plasma) I'm shocked at how little progress has been made on the native UI side.
As far as I can tell after a quick Google, you can't share your Qt UI with the browser version of your app. Considering that "lite" browser-based versions of apps are a very common funnel to a more featureful desktop version, it makes sense to just use the UI tools that already work and provide a common experience everywhere.
The same search incidentally turned up that Qt requires a paid license for commercial projects, which is surprising to me and obviously makes it an even less attractive choice than Electron. Being less useful and costing more isn't a great combo.
> you can't share your Qt UI with the browser version of your app
You can with WASM (but you shouldn't).
> Qt requires a paid license for commercial projects
It doesn't; it requires a paid license if you don't want to abide by the (L)GPL license, which should be a fair deal, right? You want to get paid for your closed-source product, so you should not have any reservations about paying for their product that enables you to create your product, right? Or is it "money for me, but not for thee"?
> Being less useful and costing more isn't a great combo.
Very nice, but now explain why you are talking about using Qt to create apps, whereas the grandparent is talking about the experience of using apps created with Qt.
I looked up the WASM Qt target and it renders to a canvas, which hampers accessibility. The docs even call out that this approach barely works for screen readers [0], and that it provides partial support by creating hidden DOM elements. This creates a branch of differing behavior between your desktop and browser app that doesn't have to exist at all with Electron.
It should go without saying that the requirements of the LGPL license are less attractive than the MIT one Electron has; fairness doesn't really come into it. Beyond the licensing hurdles that Qt devotes multiple pages of its website to explaining, they also gate commercial features such as "3D and graphs capabilities" [1] behind the paid license, which are more use cases that are thoroughly covered by more permissively licensed web projects that already work everywhere.
On your last point I'm completely lost; it's late here so it might be me but I'm not sure what distinction you're making. I guess I interpreted dmix' comment generally to be about the process of producing software with either approach given that my comment above was asking for details on alternatives from the perspective of a developer and not a user. I don't have any personal beef with using apps that are written with Qt.
Please do continue to waste energy on doing something that will do nothing but allow you to feel superior about yourself. In fact, you will probably waste more energy than Electron ever has.
I agree with you; I even think it's shameful. When I saw it was Electron, I sighed so long I almost choked.
Can't even cmd+g or shift+cmd+f to search, and the context menu has nothing. Can't even swipe, no gestures, etc.
Electron is better than nothing, and I'm grateful, but it tastes bitter.
As for performance, somebody, if I remember correctly, once asked here "what's the point of 512GB RAM on the Mac Studio?"
And then someone replied "so you can run two electron apps".
This, for sure. If there were ever quantum computers with 64 or 128 functional qubits, expansion beyond that would be a matter of engineering, and the development of real, actual, functional quantum computing would be on the order of nuclear weapons development. The US government would make it secret, take it over, and scale it up to 1024 qubits for immediate and near-total cyber dominance: pre-emptive strikes on bank accounts, total pwning of adversaries' secure systems, planting command-and-control malware everywhere, grabbing intelligence from anywhere the administration saw as valuable. There are a ton of dead drop encryptions. They could move BTC from Satoshi's wallets and wreck crypto's value.
Quantum computing research you hear about is "neat lab experiment" fluff, or a demonstration of corporate technical acumen and research capabilities. You won't hear about real quantum computing until well after it's been used in geopolitical conflicts.
Industrial robots have killed a whole lot of people. Automation without intelligence means that robots which mindlessly repeat tasks got built, resulting in people getting crushed when they're in the way of moving arms and apparatus.
Ironically, adding intelligence will probably result in robots that are far safer and kill fewer people.
Robots don’t keep humans safe. Humans keep humans safe. An industrial machine stays put in its context and humans can be trained to work around it, and at least notionally consent to be in its presence. A roaming machine means every human nearby needs to be constantly vigilant, and none of us may revoke consent.
In the second case, machine intelligence is supposed to keep us safe. That intelligence is controlled by people or companies that may or may not have the public benefit as motive for their actions. The typical response to that is to legislate, or publicly advocate for change. But what if the entity that controls the robots also controls the laws? That means there’s no way for regular people to revoke consent to the presence of dangerous robots.
So, concretely, what if the CEO of a self-driving car company donates money to a government that provides it immunity from the actions of its robots? Who do we trust in that case?
I still prefer a world where we solve the robot problem early, with clubs and fire.
You shouldn't expect privacy in public spaces. That's the nature of public spaces. In the US, freedom of the press means that anywhere public, you have no expectation of privacy and should comport yourself as such; don't do or wear anything in public you wouldn't want to be recorded.
This is why paparazzi exist and how they operate. It's the dirty, dingy cost of having a free press, freedom of travel, freedom to hold public officials accountable, subject to the same laws you are; you can't waffle or restrict or grant exceptions, because those inevitably, invariably get abused by those in power.
Actually useful AR needs cameras, of course, so the technology has legitimate use cases, but you'd have to be a real asshole to wear them to a bar, or a restaurant, etc. Maybe we mandate that the glasses have to have a base station dongle, and if they're more than 10 feet from the dongle, recording doesn't work without incredibly obvious annoying lights indicating that recording is on?
A cultural convention that lets people make honest mistakes, but turn it off when someone says "hey, you're recording" seems like a good solution. Just need to make it easily visible and obvious to others - you can run around in public with a big news camera on your shoulder or a tripod and you usually won't get hassled. It's just the idea of being covertly recorded, even while in public, that gets creepy.
Maybe if we weigh legitimate use cases against privacy and end up deciding that the privacy is more important, then we just don't accept those use cases?
That is: we invent new awesome life-changing technology and we just... don't use it?
Like we could have navigational AR glasses. The wearer sees arrows on the floor showing where to walk. And we could choose to not let anyone wear them in public even though what they do is useful, and there aren't any real privacy issues. But people around the wearer don't know that. That's the privacy concern.
Part of it is civics. You have the right to record in public. Being in a public space means you are consenting to being recorded; it's in public. It's not always moral, classy, correct, or good, but the alternative is the erosion of the principles of free press, freedom of expression, etc.
The form factor of the camera doesn't matter. We do have different constraints, but those are pretty solidly filled out in case law. I don't believe making recording glasses illegal to wear in public would withstand constitutional scrutiny. Mandating a visible notification with a conventional color, signaling things like "on", "passive", and "recording", would be constitutional and wouldn't infringe. That said, surreptitious use would likely be legal, e.g. aftermarket modification to allow recording with no lights; first amendment issues have a high bar, and there's all sorts of precedent for secret cameras being legal. This is how corrupt politicians and cops and officials get caught, all the time, and it's highly unlikely to be smart glasses that gets the people and courts to flip on 1A.
> Being in a public space means you are consenting to being recorded;
I think we all know the black/white of it: you can record all you want, but you cannot use my face in a commercial. That idea has served us well for a century. The problem is never the recording; it's what happens to the recorded material. If you just keep it for personal use, or use it for journalism? That was never a problem.
I'd argue that using the recording to send to a different continent which is then processed by humans and/or computers and this then affects me or others (E.g. changes which ad the person next to me on the train sees on their phone) is now straddling a line somewhere between the black and white of "being recorded" and "being used in a commercial". That recording wasn't just this person observing or recording in a public space. It wasn't just used for "personal use" or journalism. It's something else.
I think it's this gray area that just needs to be cleared up. What if "transmitting a recording of someone to a commercial entity that potentially does commercial things with it" was classed just as "putting my face in the TV ad"?
The GDPR (and similar) also might help (where applicable).
ElementaryOS is supposed to be a very clean transition environment for Mac refugees. AI makes everything so much easier; Windows and Mac both have far more friction and hassle in contrast. Good luck!
I'm rocking CachyOS (Arch-based, though) with Wayland + KDE and https://github.com/RedBearAK/toshy. It's great for keeping the keyboard shortcuts I'm so used to from the Mac, almost seamlessly. KDE lets you configure pretty much everything the way a Mac works if you want, though it did take a month or two to get everything the way I like it. I've found that it is nice to have an operating system that is mine and not subject to the whims of some company trying to make money off me. I don't think I'll go back unless I'm forced to for a job.
Some sort of software like ComfyUI with variable application of model-specific personality traits would be great: increase conscientiousness, decrease neuroticism, increase openness, etc. Make it agentic; have it do intermittent updates based on a record of experiences, and include all 27 emotional categories, with an autonomous update process so it adapts to interactions in real time: https://www.pnas.org/doi/10.1073/pnas.1702247114
Could be very TARS like, lol.
It'd also be interesting to keep a similar rolling record of episodic memory, so your agent has a more human-like memory of interactions with you.
Another thing to consider about LLMs is that the nature of the training and the core capability of transformers is to mimic the function of the processes by which the training data was produced; by training on human output, these LLMs are in many cases implicitly modeling the neural processes in human brains which resulted in the data. Lots of hacks, shortcuts, low resolution "good enough" approximations, but in some cases, it's uncovering precisely the same functions that we use in processing and producing information.
> Another thing to consider about LLMs is that the nature of the training and the core capability of transformers is to mimic the function of the processes by which the training data was produced; by training on human output, these LLMs are in many cases implicitly modeling the neural processes in human brains which resulted in the data. Lots of hacks, shortcuts, low resolution "good enough" approximations, but in some cases, it's uncovering precisely the same functions that we use in processing and producing information.
I would argue this is deeply false, my classic go-to examples being that neural networks have almost no real relation to any aspect of actual brains [1] and that modeling even a single cortical neuron requires an entire, fairly deep neural network [2]. Neural nets really have nothing to do with brains, although brains may have loosely inspired the earliest MLPs. Really, NNs are just very powerful and sophisticated curve (manifold) fitters.
> Could be very TARS like, lol.
I just rewatched Interstellar recently and this is such a lovely thought in response to the paper!
I am making the case that this is distinctly and specifically true, for these types of models. They're eliciting many of the underlying functions and processes that brought about the data; transformers are able to model the higher degrees of abstraction that previous neural architectures could not. This was one of the major features of transformers that make them so powerful.
It's comparable to the idea that if you trained a model to output human-sounding speech, many of the functions that shape the voice would correspond to the physical attributes that affect the sound of actual human voices: volume of the mouth, shape of the lips, position of the teeth, what the tongue does, etc. Some of those things will be captured, others will be mashed into "good enough", and others will be captured as an optimization possible in silicon but not for flesh and blood. It's not a one-to-one correspondence, but capturing process semantics and abstractions is why we have ChatGPT with transformers and not CNNs (although RNNs could have pulled it off back in the 90s, see: RWKV).
Anyway, the training methods, the paradigm of next-token generation (in contrast to things like diffusion), and other aspects of LLMs restrict them to a subset of human capabilities, but it's reasonable to make the claim that many of the same functions that operate in Wernicke's area and Broca's area in the human brain are resident in transformers. Many of the same associations between language and emotion, and those abstract correlations (unspecified, implicit context that exists in the training data, but only as a deep subtext, sometimes even distributed across many texts, like cultural trends and so forth), are modeled by LLMs, not as an explicit feature of the data, but as an implicit feature or function of the processes which produced the data.
Plus, there seems to be some support for the idea that for intelligent systems, modeling the world will result in comparable structures, networks, and features for similar concepts and knowledge - because you're modeling consistent, persistent things using modalities that are shared, or overlap, the way in which things are modeled converges on "universal" forms, simply due to constraints of utility and efficiency.
Hopefully it's not just benchmark maxxing - models are getting small enough to run on phones and standard consumer laptops. Things like AirLLM and other tricks allow for much larger models to be run at slower speeds, too. You might use one of these small models to drive an agent, and escalate to slower, more powerful models run locally when required.
Phenomenal that they're releasing the base models as well as the tuned ones. The US really needs to step up the OSAI game, we're getting utterly trounced.