This is ultimately a business model problem. There are FOSS projects out there; some of them are even listed in the article. The problem is that making it "Just Work(tm)" is hard; it's a lot of work, requiring a large number of engineers. And funny thing, engineers prefer food with their meals.
Saying "FOSS world" we need XXX is pretty useless. As a FOSS mainatinaer, the answer I always give people demanding their favorite pet feature is, "clean, maintainable patches are appreciated", ChromiumOS is free software; someone could take it use it as the basis for something like ChromeOS. But the author of the article has basically admitted that the derivitives of Chromium would quickly fail if Chromium stoped being something that they could free ride off of. Shouldn't that tell you everything about what the problem is with this picture?
Programmers do stuff for free, all the time. People also donate to foundations so some programmers can be paid to make free stuff. It's how we got the Linux desktop as we know it.
Programmers often stop worrying about the details once they're satisfied, though, and there are a lot of details. Yes we could add thumbnails to the file picker, but we also need to support Nvidia hardware and the archive application is missing a few much more interesting compression formats, plus there are a bazillion logged bugs that actually crash software rather than being a minor inconvenience.
In theory someone could set up a collective of programmers, designers, and translators to create a usable Linux desktop for the Windows 10 refugees, but it'll require a lot of effort from a lot of volunteers and a lot of people who happen to share the same principles and preferences.
ChromeOS works because there's a single entity that can hire a small group of designers to tell an army of programmers what they want everything to look and work like.
There are actually projects and companies that fork ChromeOS, but they're pretty small and once Google stops maintaining the OS they're going to collapse pretty quickly.
> ChromeOS works because there's a single entity that can hire a small group of designers to tell an army of programmers what they want everything to look and work like.
ChromeOS works because it's pre-installed on selected hardware devices where it's been configured and tested before any user even gets it into their hands, which would solve the majority of the problems no matter what OS you're using. Then they also take away all power from the user to make any changes to those systems, which helps solve the rest. Ultimately, though, most people who know anything at all about computers want the power to make changes to what's installed on them.
Everyone I know who got a Chromebook did it first and foremost because they were inexpensive, and every one of them at one point or another found themselves frustrated by the limitations. A privacy-respecting Linux system won't be able to use massive amounts of data collection to lower the price of the hardware the way Google can.
People will do a small amount of work for free. But if something requires dozens or hundreds of programmers working full time, the chances of it happening are much lower.
And if you want something which just works, it means you need a huge number of people doing grunt work, and that's the kind of thing that people are less likely to want to do in their free time.
SteamOS is the closest thing to making this work. Valve has somehow worked their way into being a software store that not everybody hates.
Companies should be happy that there’s a FOSS community that lets them build on top of it for free. It is a great deal! I wonder why there isn’t an EpicOS yet.
Single malt was the currency of choice when bribing and/or placating SREs and hwops folks. For example, if a SWE botched a rollout that caused multiple SREs to get paged at 3am, a bottle of single malt donated to the SRE bar was considered a way of apologizing.
There was definitely a certain amount of "I told you so" vibes, but I don't blame the author. It appears that he was attacked by a lot of Ello founders and fans for raising some cautionary notes. And as it turns out, he was right and they were wrong.
We would all like to have a model where users don't get charged money, and yet are not the product. But I haven't seen a model that works to date. In some cases, I don't mind my personal data getting sold; in other cases I pay money because the service is valuable. But I certainly make backups, since even when I pay $$$, I can't assume that the company won't go poof in the night....
Sure you have. Amazon grew without giving stuff away for free. Customers paid (just below market rate) from day 1. This demonstrated the -convenience- of ecommerce. It had revenues from the first sale. Yes, it spent mountains of VC money on marketing and development, but -not- on just buying stuff for you so you think it'll be free forever.
Uber is the same, although it's less clear that users will pay for what a ride really costs. (And their margin makes it attractive for competition.)
In both cases, though, there is revenue from customers from day 1. You can wind prices up. It's really hard to "go from free to paid".
It's just that nobody wants to work on protocols anymore. Ever since the world's richest person was suddenly a computer guy, no one wants to work on anything without a business model that includes taking complete control over what is built. A product, if you will.
In the background, there's always some geeks slaving away with new protocols and federated models. That will not become mainstream, not in our current society. But societies change over time. There is always hope.
But one of the four freedoms is being able to modify/tweak things, including the model. If all you have is the model weights, then you can't easily tweak the model. The model weights are hardly the preferred form for making changes to update the model.
The equivalent would be someone who gives you only the binary of LibreOffice. That's perfectly fine for editing documents and spreadsheets, but suppose you want to fix a bug in LibreOffice? Just having the binary is going to make it quite difficult to fix things.
Similarly, suppose you find that the model has a bias in terms of labeling African Americans as criminals, or women as lousy computer programmers. If all you have is the model weights of the trained model, how easily can you fix the model? And how does that compare with running emacs on the LibreOffice binary?
If all you have are the model weights, you can very easily tweak the model. How else are all these "decensored" Llama 2 models showing up on Hugging Face? There's a lot of value in a trained LLM model itself, and it's 100% a type of openness to release these trained models.
What you can't easily do is retrain from scratch using a heavily modified architecture or different training data preconditioning. So yes, it is valuable to have dataset access and compute to do this and this is the primary type of value for LLM providers. It would be great if this were more open — it would also be great if everybody had a million dollars.
I think it's pretty misguided to put down the first type of value and openness when honestly they're pretty independent, and the second type of value and openness is hard for anybody without millions of dollars to access.
Well, by that argument it's trivially easy to run emacs on a binary and change a pathname --- or wrap a program with another program to "fix a bug". Easy, no?
And yet, the people who insist on having source code so they can edit the program and recompile it have said that for programs, having just the binary isn't good enough.
>suppose you find that the model has a bias in terms of labeling African Americans as criminals; or women as lousy computer programmers. If all you have is the model weights of the trained model, how easily can you fix the model?
That's textbook fine-tuning and is basically trivial. Adding another layer and training that is many orders of magnitude more efficient than retraining the whole model and works ~exactly as well.
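To make that concrete, here is a minimal sketch of the freeze-the-base, train-a-new-head approach, assuming a PyTorch-style setup; the model, layer sizes, and data below are made-up placeholders rather than any particular released checkpoint:

    import torch
    import torch.nn as nn

    # Hypothetical stand-in for a downloaded pretrained model; only its
    # weights are available to us, and we leave them completely untouched.
    base_model = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 768))
    for p in base_model.parameters():
        p.requires_grad = False          # freeze every pretrained weight

    head = nn.Linear(768, 2)             # the one small new layer we actually train
    optimizer = torch.optim.AdamW(head.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    def train_step(x, y):
        # The frozen base only does a forward pass; gradients flow into `head` alone.
        with torch.no_grad():
            features = base_model(x)
        logits = head(features)
        loss = loss_fn(logits, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Example: one step on a random batch of 8 examples.
    print(train_step(torch.randn(8, 768), torch.randint(0, 2, (8,))))

Only the tiny new layer gets updated, which is why this costs orders of magnitude less than retraining the whole model.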
Models are data, not instructions. Analogies to software are actively harmful. We do not fix bugs in models any more than we fix bugs in a JPEG.
The next step will be to ask for GPU time, because even with the data, the model code, and the training framework you may have no resources to train. "The equivalent would be" someone giving you the code, but no access to the mainframe required to compile it. Would that make it not open source(?) There are other variations, like the original compiler being lost, or current compilers not being backward compatible. Does that make old open source code closed now?
In other words, there should be a reasonable line for when a model can be called open source. In the extreme view it's when the model, the training framework, and the data are all available for free. This would mean an open source model can be trained only on public domain data, which makes the class of open source models very, very limited.
More realistic is to make the code and the weights available, so that with some common knowledge a new model can be trained, or an old one fine-tuned, on available data. Important note: the weights cannot be reproduced even if the original training data is available. It will always be a new model with (slightly) different responses.
Downvoted, hmm... I'll add a bit more then. Sometimes it's even good that a model cannot be easily reproduced. The original developers usually have some skills and responsibility, while 'hackers' don't. It's easy to introduce bias into the data, like removing selected criminal records, and then publish a model with a similar name. That would be confusing; some may mistake the fake one for the real one.
PS: If I ever make my models open, I can't open the data anyway. The license on the images directly prohibits publishing them.
They don't have to be low-end. You can buy higher-end Chromebooks, but they cost more money. Do people remember the "netbooks" that were super-cheap Windows laptops with 10 inch screens? Even if you installed Linux on one, with the 512MB or 1GB (or maybe 2GB for the really highly spec'ed out netbooks), there were real limits to what they could do.
If you want something super-cheap, then perhaps it won't be useful 5 or 10 years later. You get what you pay for; this isn't unique for Chromebooks.
Unfortunately, the supply chain often goes 3 and 4 levels deep. And by the time you get to companies that far in the supply chain, (a) no one has ever heard of that company, so trying to threaten them with reputational damage doesn't really work (it will be some random set of Chinese characters for a company in Shenzhen, for example), and (b) it will turn out that the team that wrote the device driver for that particular subcomponent in the SoC was disbanded as soon as the part was released, and 4 years later, half are working for a different company, and half died during the COVID pandemic.
Sure, if you could set the Wayback machine back in time, and require that the device driver be upstreamed, with enough programming information that it's possible to maintain it, maybe it would be possible to upgrade to a newer kernel that doesn't have eleven hundred zero-day vulnerabilities. But meanwhile, back in the real world, very often there's not a whole lot you can do. So this is why it's kind of sad when people insist on buying Nvidia video chips that have proprietary blobs because of performance, or power consumption, or whatever, instead of the more boring alternative that doesn't have the same eye-bleeding performance, but which has an open source device driver. Our buying choices, and the product reviewers who only consider performance, or battery life, etc., drive the supply chain, and the products that we get. And this is why we can't have nice things.
It might be worth taking a look at Bensonwood (https://bensonwood.com). They do some very impressive, high-end pre-fab homes, and they solve the "must fit in highway lanes" problem by shipping walls that have windows, electrical wiring, plumbing, etc., all already pre-installed in their factory in New Hampshire. When we investigated using them 3 years ago, they didn't support pre-installed CAT 5 wiring or optical fiber, but I wouldn't be surprised if they can do that now. :-)
This is all done using computer-controlled manufacturing equipment, much of which is imported from Europe, where they are much more advanced on this front than in the U.S. One of the advantages of having computer-controlled nail guns and vacuum-operated "wall flippers" is that the construction tolerances are far tighter than if you have humans nailing in the shingles, sometimes while on a ladder 15 feet above the ground.
The downside, of course, is that today they have only their one factory in New Hampshire, and while the walls can be shipped by truck on highways, if you want to build a large, luxury pre-fab home in Arizona, the trucks have to travel a long way, and that adds to the cost. This hasn't stopped some of their customers, though. Take a look at their web site for some example houses that they have built --- it's a far cry from what most people think of when they hear about "pre-manufactured houses". These are not trailer park homes!
Cgroups are a lot more than just "namespaces". They are also the mechanism by which you can constrain how much CPU, memory, network bandwidth, storage IOPS or throughput, etc., the processes in a particular cgroup or container can use.
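As a rough illustration, on a machine with cgroup v2 mounted at /sys/fs/cgroup, imposing a memory and CPU cap on a process is just writing to a handful of files; a minimal sketch (the group name and limits below are made up, and this needs root):

    import os

    CGROUP = "/sys/fs/cgroup/demo"       # hypothetical cgroup name; requires root
    os.makedirs(CGROUP, exist_ok=True)

    # Cap memory at 256 MiB; past this the kernel reclaims or OOM-kills the group.
    with open(os.path.join(CGROUP, "memory.max"), "w") as f:
        f.write(str(256 * 1024 * 1024))

    # "50000 100000" = 50ms of CPU time per 100ms period, i.e. at most half a core.
    with open(os.path.join(CGROUP, "cpu.max"), "w") as f:
        f.write("50000 100000")

    # Move the current process into the group; its children inherit the limits.
    with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
        f.write(str(os.getpid()))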
But I think the core point here is that, as with much of the Plan 9 design, by including a more elegant and powerful abstraction in the core design, the need for a much more powerful and much more complicated abstraction layer on top was reduced, if not eliminated.
Disclaimer: I work for Google but nothing I say here is Google's opinion or relies on any Google internal information.
I'm not surprised that Workspace accounts weren't included in the initial rollout. Workspace setups have interesting requirements that aren't necessarily there for personal accounts. For example, under some circumstances, if an employee gets hit by a bus, and there is critical business data stored in the employee's account, an appropriately authorized Workspace admin is supposed to be able to gain access to that account. But what is the right thing to do for passkey access? Especially if the user uses a passkey to authenticate to some non-Google resource, say Slack, which has been set up for corporate use? Should the Workspace admin be able to impersonate the corporate employee in order to gain access to non-Google resources via passkey? What about if the employee (accidentally) uses their corporate account to set up a passkey to a personal account, such as E*Trade? Maybe the Workspace admin should have a setting where passkey creation is disabled except for an allowlist of domains that are allowed for corporate workflows?

It's complicated, and if I were the product manager, I'd want to take my time and understand all of the different customer requirements (where customer === the Workspace administrator who is paying the bills) before rolling out support for Workspace accounts.
I recall a story from a colleague who knew some folks who had worked on the game System Shock (this was in the early nineties). System Shock was one of the first games with an engine that implemented real 3D physics, so when you threw a grenade, it would describe a real parabola. And you could lean around a corner and sneak a peek without exposing your entire body to enemy fire, and when you did that, the first-person shooter rendering would realistically reflect that. They had an experimental version of the game that was hooked up to a virtual reality headset at the time, and gave up on it because, as one of them joked, it was "virtual reality, real nausea".
This was 30 years ago, and things haven't improved since then.
"Descent" goes back 28 years and I remember getting pretty disoriented and a little nausea, worse than I ever experienced flying a small airplane.
"Descent" was the game where you blast robots in a very 3-d mine.
One oddity of "VR" is it initially attracts people with excellent visuospatial analysis skills; the problem is the majority of the population is not good at it. It would be like implementing a user interface based on bench pressing 275 pounds of real world weights; it would be an incredibly popular fad among people already qualified to participate, then the general public would LOL and that's it. So that's the problem selling VR to the general public; most folks aren't very good at solving maze puzzles and drawing 3D CAD drawings in their heads so a UI based on that will be a hard sell.
When I started out in web design and development in 1995, a lot of companies were showing early "VR" and 3D interfaces, touting them as the next great thing. Somehow, people got the idea that reaching around in 3D space for everything was better than just picking from a menu, a list, or an index -- like we have done for 1,000 years.
I feel like all the 3D hype is just that. While it could be fun in games in a holodeck-type environment (but probably not outside of that, 'cause physics), I don't think the majority of everyday human interactions with information are better off in 3D. Why would anyone think so? We don't read in 3D. We don't write in 3D. We don't make pictures in 3D. Why would a 3D interface be better?
Saying "FOSS world" we need XXX is pretty useless. As a FOSS mainatinaer, the answer I always give people demanding their favorite pet feature is, "clean, maintainable patches are appreciated", ChromiumOS is free software; someone could take it use it as the basis for something like ChromeOS. But the author of the article has basically admitted that the derivitives of Chromium would quickly fail if Chromium stoped being something that they could free ride off of. Shouldn't that tell you everything about what the problem is with this picture?