Sure, but valuation should always exceed cash reserves. It's very odd when it does not. I think I recall that SUNW (by then probably JAVA) was at one point valued below its cash, prior to the ORCL acquisition.
If Valve's secrecy is so good that they have (substantially more than) 30-500x cash stashed away in excess of their public valuation estimates, then perhaps I underestimate Valve's secrecy!
More likely, it was an obviously-humorous exaggeration, but I wasn't sure -- I am quite ignorant of the games industry. :)
Philosophically speaking, this is what people want though.
The only reason people make money off of stocks is because at the end, someone gives those companies money. And everyone has a choice to vote with their wallets and through their actions in general.
For example, take Elon and his actions over the past few years. What should have happened is that Tesla sales dropped across the board, and everyone who owns a Tesla should have been in a rush to sell it, for fear of it getting scratched or of being personally blacklisted and ridiculed for driving one. Tesla employees should also have faced the same public pressure and quit.
But people don't care - at the end of the day, politics on Capitol Hill is mostly a meme; what matters is their personal satisfaction. And you look ridiculous if you refuse to get in your coworker's Tesla. So now Elon is a trillionaire.
On the flip side, in a capitalistic sense, it honestly doesn't matter if people are rich; what should matter is the power the money gives them. Rich people buying mansions is a good thing - that's money to workers for construction, staff for housekeeping, and so on. Rich people being able to buy whole media platforms and censor things is definitely not good, though.
The question is: is society headed in a direction where people grow more and more apathetic, until nothing matters as things get progressively worse and everyone is complacent? Or is there a bottom line where people start paying attention and actually fighting for change when it gets bad enough?
> The only reason people make money off of stocks is because at the end, someone gives those companies money. And everyone has a choice to vote with their wallets and through their actions in general.
This ‘voting with your wallet’ argument always ignores the problem of collective action. Even if every individual would prefer the bad companies to not exist (or at least to not behave the way they do), the rational choice is still to purchase products from them if their product is superior and/or cheaper.
Take something like Walmart or Amazon. I think a majority of people dislike the way they do business (the way they treat their employees, the environment, and competitors), but the only choice a consumer has is “shop there and get the cheaper prices, and the company continues to exist like it does” or “don’t shop there, pay more money for things, and the company continues to exist like it does”.
I, as an individual customer, can’t make the company stop existing in its current form. They aren’t going to miss my business; it isn’t even a rounding error on their balance sheet. My only choice is to get the cheaper prices or not.
Even if the company would change if everyone stopped shopping there, I don’t get to make that choice. Hell, if everyone else stops shopping there and they’re going to go out of business, it is STILL in my best interest to shop there while I can and save the money… it isn’t like my business is going to SAVE them any more than it will kill them.
You can’t kill a business practice by voting with your wallet.
The only time consumer choice will work is when the individual consumer has a better alternative; if I get a better PERSONAL experience (either cheaper or a better product) by shopping somewhere else, then voting with your wallet makes sense and will work (because everyone will have the same incentive). Companies won’t be punished for their external costs by their customers, since the customers aren’t choosing to suffer those costs or not; they suffer the external costs no matter what, they only can choose to enjoy the benefits or not. Why suffer the external costs AND not even get to enjoy the benefits?
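A toy payoff sketch of that free-rider logic (all numbers here are purely illustrative assumptions, not data):

import sys

# The external cost (worker treatment, environment, dead competitors) hits
# you whether or not you personally shop there, so only the saving varies
# with your individual choice.
saving = 10          # what you save by shopping at the cheap store
external_cost = 4    # your share of the harm, paid regardless

payoff_shop = saving - external_cost     # 6
payoff_abstain = 0 - external_cost       # -4
print(payoff_shop > payoff_abstain)      # True: shopping dominates either way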
This is why change has to come from some binding collective action (like legislation or regulation or something), not individual consumer choice.
You are correct in the sense that a single person can't change anything, but the issue is a level deeper. The thing is, even if you dislike a company and don't use it, you aren't willing to bully people who do use it.
If as a society, we were more willing to police ourselves, you would see real change. In the end, acceptance of peers is the number one fundamental drive for a lot of people past basic needs.
>The only reason people make money off of stocks is because at the end, someone gives those companies money.
Small quibble. The reason people make money off stocks is largely because people think people will give those companies money in the future. People aren’t just trading on dividends, they’re trading on PE ratios.
Otherwise, companies like Tesla would be worth much less than Toyota (which gets more revenue, higher gross profit, and higher profit margins).
True, but a company has to be shown to actually make things people want. If Tesla didn't sell cars, or sold very few, they wouldn't be hyped up as much as they are now.
I think if you look at the numbers, it doesn’t make sense. For Toyota to have a similar market cap at their current PE, they would need to sell something like 95% of the total cars worldwide. So unless people think Tesla will have a worldwide automotive monopoly, they are paying for something other than what they’re doing with cars.
Tesla has the best charging network - which you don't need a Tesla to access - and all the other charging networks seem poorly incentivized: they don't have their own cars whose user experience they need to improve, so their chargers end up broken and poorly maintained.
Tesla becomes a power and taxi company, diversified away from car sales.
I think what’s really a meme is the idea that withholding payment is the de facto way of creating accountability toward a stakeholder you don’t like, since it’s not, and it also doesn't work.
It's goofy and cringe that the left has made that their whole identity, only to spend half their adult lives slowly noticing it doesn't work while isolating their prior friends who don't participate in the process.
> or is there a bottom line where people start paying attention enough and actually fighting for change when it gets bad enough.
As long as the majority of people are comfortable while being apathetic, they will not care. They will happily maintain the status quo. Only when things get uncomfortable for the majority will there be action. Just keep people fed and entertained enough and you're safe.
> For example, take Elon and his actions over the past few years. What should have happened is that Tesla sales dropped across the board, and everyone who owns a Tesla should have been in a rush to sell it for fear of it getting scratched or them personally blacklisted from stuff and ridiculed for driving a Tesla.
I think you overestimate the number of people who make principled stands in their product-buying decisions based on the actions of company executives. I don't think that's the same thing as "people don't care"; I think some people just don't think they should compromise on their needs/wants for what they see as unrelated reasons.
> Tesla employees should also have faced the same public pressure and quit.
In an ideal world, sure. But we live in a country where health insurance is tied to employment and employers, and it's not exactly a wonderful market for job-seekers these days. I expect there were many people at Tesla who were unhappy with Musk and wanted to leave, but felt that the risks of doing so were too high. Again, this is not the same as "people don't care".
> And you look ridiculous if you refuse to get in your coworkers Tesla.
As you should. Making a fuss over this sort of thing when you're all trying to get to an offsite business meeting, or just a group lunch, is eye-rollingly immature.
I fundamentally disagree with your assertion that people are apathetic and don't care. Like with many things, the situation is more complicated than that, and people have to weigh their needs and (personal, financial, etc.) security against whatever principles they may have. On top of that, there are so many things that we "need" to care about and consider, that if we tried to actually take everyone's pet outrage into account, we'd literally do nothing.
And some people don't think it's necessary to group people's politics or general malfeasance with their "art". (E.g., should people stop watching and enjoying movies produced, written, or directed by convicted sexual harassers or abusers? Maybe? But that's an individual decision, and I don't think there's a right or wrong answer there.)
I think there's also an aspect of "who's the loudest shithead today?" going on too. Musk seems like a truly reprehensible person, and I wouldn't buy a Tesla. But what about GM's leadership? What about the history of some German car makers supporting the Nazis back in the 1930s and 40s? Musk is loud about his shittiness, but others are quieter, and sometimes their misdeeds lie further back in the past.
You're conflating "what people want" with "only choice"
If you look at polling this is not what the majority want
Since much of this truth is merely rhetorical, socialized truth, not immutable physics, the fix is to propagate a new narrative about how the economy works and how politics works, and to threaten the elders the way they threaten the youth. They are naturally older and weaker. End the one-sided ageism.
You have a point, but I think the situation can be framed as a question of "what people want" by modelling the decision better. (People under a dictator sometimes do revolt, but often don't.)
The choice people have is not between a dictator and no dictator, it's between a dictator and a period of instability and chaos, possibly including bloody fighting or even famine, the outcome of which is (a) completely uncertain even in the unlikely event that you know that ~everyone wants the dictator gone and (b) might be that the dictator's forces still come out on top, or that someone even worse is installed into power.
Nobody is winning in this area until these things run fully on single graphics cards - which is sufficient compute for even most of the complex tasks.
Lol, it's kinda surprising that the level of understanding around LLMs is so low.
You already have agents that can do a lot of "thinking", which is just generating guided context, then using that context to do tasks.
You already have Vector Databases that are used as context stores with information retrieval.
Fundamentally, you can get the exact same performance on a lot of tasks whether all the information exists in the model or you use a smaller model with a bunch of context around it for guidance.
So instead of wasting energy and time encoding the knowledge into the model, making it large, you could have an "agent-first" model along with files of vector databases. The model fits on a single graphics card, takes the question, decides which vector DB it wants to load, and then essentially answers the question the same way. At $50 per TB of SSD, not only do you gain massive cost efficiency, you also gain the ability to run a lot more inference cheaply, which can be used for refining things, background processing, and so on.
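A minimal sketch of that retrieve-then-answer loop, assuming sentence-transformers for embeddings; the corpus, the model name, and the local_llm call at the end are all placeholder assumptions, not a specific product:

import numpy as np
from sentence_transformers import SentenceTransformer

# Toy "vector database": embed a small corpus once; in the real thing these
# vectors would live in per-topic files on cheap SSD, loaded on demand.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
docs = [
    "Gemma 27B fits on a single 24GB GPU when quantized to 4 bits.",
    "SSD storage costs roughly $50 per terabyte.",
    "Vector databases retrieve documents by embedding similarity.",
]
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # normalized vectors => dot product = cosine
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

# The small model never needs this knowledge baked into its weights:
# it just answers from the retrieved context.
question = "How much does a terabyte of SSD cost?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQ: {question}\nA:"
# response = local_llm(prompt)  # any small local model, e.g. via llama.cpp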
You should start a company and try your strategy. I hope it works! (Though I am doubtful.)
In any case, models are useful, even when they don't hit these efficiency targets you are projecting. Just like cars are useful, even when they are bigger than a pack of cards.
If someone wants to fund me, I'll gladly work on this. There is no money in this, though, because selling cloud services is much more profitable.
It's also not a matter of it working or not - it already works. Take a small model that fits on a GPU with a large context window, like Gemma 27B or smaller, give it a whole bunch of context on the topic, ask it questions, and it will generate very accurate results based on the context.
So instead of encoding everything into the model itself, you can take the training data, store it in vector DBs, and train a model to retrieve that data based on the query; the rest is just training context extraction.
> There is no money in this though, because selling cloud service is much more profitable.
Oh, be more creative. One simple way to make money off your idea is:
(1) Get a hedge fund to finance your R&D.
(2) Hedge fund shorts AI cloud providers and other relevant companies.
(3) Your R&D pans out and the AI cloud providers' stock tanks.
(4) The hedge fund makes a profit.
Though I don't understand: wouldn't your idea work when served from the cloud, too? If what you are saying is true, you'd provide a better service at lower cost?
From a functional perspective, it would provide roughly identical performance to existing systems at a lower cost, due to less dependence on compute and more dependence on storage. It would also allow more on-prem solutions.
However, the issue with "funding" isn't as simple as that. Remember, modern funding is not about value, it's about hype. There is a reason CEOs like Jensen say that if they could go back in time, they would never start their companies, knowing the bullshit they'd have to walk through.
I've also had my fair share of experience trying to get startups off the ground. For example, back around 2018, I was working on a system that would take your existing AWS cloud setup and move it all to EC2s with self-hosted services, which saved people money in the long run. I had a proof of concept working and everything. The issue I ran into when trying to get funding to build this into a full-blown product/service, which I hadn't realized, is that being on AWS services was, for companies, the equivalent of wearing an expensive business suit to a sales meeting: a fact they would advertise, because it was seen as industry standard and created "warm feelings" with their customers. So at most I would get some small-time customers, while getting paid much less.
Now I just work on stuff (and yes, I am working on the issue at hand with existing models) and publish it to GitHub (not gonna share it because I don't want my HN account associated with it). If someone contacts me with a dollar figure, I'm all game.
I mean, there are lots of models that run on home graphics cards. I'm having trouble finding reliable requirements for this new version, but V3 (from February) has a 32B parameter model that runs on "16GB or more" of VRAM[1], which is very doable for professionals in the first world. Quantization can also help immensely.
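For instance, a minimal sketch of running a 4-bit quantized model locally via llama-cpp-python; the GGUF file name here is a placeholder assumption, not a real release:

from llama_cpp import Llama  # pip install llama-cpp-python

# 4-bit quantization cuts VRAM needs roughly 4x versus fp16, which is what
# makes ~30B models viable on a 16GB consumer card.
llm = Llama(
    model_path="model-32b-q4_k_m.gguf",  # placeholder file name
    n_gpu_layers=-1,                     # offload all layers to the GPU
    n_ctx=8192,                          # context window
)
out = llm("Q: What does quantization trade away?\nA:", max_tokens=64)
print(out["choices"][0]["text"])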
Of course, the smaller models aren't as good at complex reasoning as the bigger ones, but matching them seems like an inherently impossible goal; there will always be more powerful programs that can only run in datacenters (as long as our techniques are constrained by compute, I guess).
FWIW, the small models of today are a lot better than anything I thought I'd live to see as of 5 years ago! Gemma3n (which is built to run on phones[2]!) handily beats ChatGPT 3.5 from January 2023 -- rank ~128 vs. rank ~194 on LLMArena[3].
Why does that matter? They won't be making at-home graphics cards anymore. Why would you, when you can be pre-sold $40k servers for years into the future?
We're around 35-40 orders of magnitude from computers now to computronium.
We'll need 10-15 years before handheld devices can pack a couple terabytes of RAM, 64-128 terabytes of storage, and 80+ TFLOPS. That's enough to run any current state-of-the-art AI at around 50 tokens per second. But in 10 years we'll probably have seen lots of improvements, so I'd conservatively guess 4-5x performance per parameter, possibly much more; at that point you'll have the equivalent of a model with 10T parameters today.
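As a rough sanity check, here is the doubling arithmetic behind those figures; the phone baseline and the two-year doubling period are my own illustrative assumptions:

# Moore's-law-style projection: assumed ~2x every 2 years from an assumed
# ~2.5 TFLOPS flagship-phone baseline today. Purely illustrative numbers.
base_tflops = 2.5
doubling_years = 2.0

for years in (10, 15):
    projected = base_tflops * 2 ** (years / doubling_years)
    print(f"{years} years -> ~{projected:.0f} TFLOPS")
# 10 years -> ~80 TFLOPS, 15 years -> ~453 TFLOPS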
If we just keep scaling and there are no breakthroughs, Moore's law gets us through another century of incredible progress. My default assumption is that there are going to be lots of breakthroughs, and that they're coming faster, and eventually we'll reach a saturation of research and implementation; more, better ideas will be coming out than we can possibly implement over time, so our information processing will have to scale, and it'll create automation and AI development pressures, and things will be unfathomably weird and exotic for individuals with meat brains.
Even so, with only 10 years of steady progress we're going to have fantastical devices at hand. Imagine the enthusiast desktop: it could locally host the equivalent of a 100T-parameter AI, or run personal training of AI that currently costs frontier labs hundreds of millions in infrastructure, payroll, and expertise.
Even without AGI that's a pretty incredible idea. If we do get to AGI (2029 according to Kurzweil) and it's open, then we're going to see truly magical, fantastical things.
What if you had the equivalent of a frontier lab in your pocket? What's that do to the economy?
NVIDIA will be churning out chips like crazy, and we'll start seeing the solar system measured in terms of average cognitive FLOPS per gram, and be well on the way toward system scale computronium matrioshka brains and the like.
I appreciate your rabid optimism, but considering that Moore's Law has ceased to hold for multiple years now, I'm not sure a handwave about being able to scale to infinity is a reasonable way to look at things. Plenty of things have slowed down in our current age - airplanes, for example.
Someone always crawls out of the woodwork to repeat this supposed "fact" which hasn't been true for the entire half-century it's been repeated. Jim Keller (designer of most of the great CPUs of the last couple decades) gave a convincing presentation several years ago about just how not-true it is: https://www.youtube.com/watch?v=oIG9ztQw2Gc Everything he says in it still applies today.
Intel struggled for a decade, and folks think that means Moore's law died. But TSMC and Samsung just kept iterating. And hopefully Intel's 18a process will see them back in the game.
During the 1990s (and for some years before and after) we got 'Dennard scaling'. The frequency of processors tended to increase exponentially, too, and featured prominently in advertising and branding.
I suspect many people conflated Dennard scaling with Moore's law and the demise of Dennard scaling is what contributes to the popular imagination that Moore's law is dead: frequencies of processors have essentially stagnated.
Yup. Since then we've seen scaling primarily in transistor count, though clock speed has increased slowly as well. Increased transistor count has led to increasingly complex and capable instruction decode, branch prediction, out-of-order execution, larger caches, and wider execution pipelines, in an attempt to increase single-threaded performance. We've also seen the rise of embarrassingly parallel architectures like GPUs, which make more effective use of additional transistors despite lower clock speeds. But Moore's been with us the whole time.
Chiplets and advanced packaging are the latest techniques improving scaling and yield, keeping Moore alive, as is continued innovation in transistor design, light sources, computational inverse lithography, and wafer-scale designs like Cerebras.
Yes. Increase in transistor count is what the original Moore's law was about. But during the golden age of Dennard scaling it was easy to get confused.
Agreed. And specifically Moore's law is about transistors per constant dollar. Because even in his time, spending enough could get you scaling beyond what was readily commercially available. Even if transistor count had stagnated, there is still a massive improvement from the $4,000 386sx Dad somehow convinced Mom to greenlight in the late 80s compared to a $45 Raspberry Pi today. And that factors into the equation as well.
Of course, feature size (and thus chip size) and cost are intimately related (wafers are a relatively fixed cost). And related as well to production quantity and yield (equipment and labor costs divide across all chips produced). That the whole thing continues scaling is non-obvious, a real insight, and tantamount to a modern miracle. Thanks to the hard work and effort of many talented people.
The way I remember it, it was about the transistor count in the commercially available chip with the lowest per transistor cost. Not transistor count per constant dollar.
Wikipedia quotes it as:
> The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.
But I'm fairly sure, if you graph how many transistors you can buy per inflation adjusted dollar, you get a very similar graph.
Yes. I think you're probably right about phrasing. And transistor count per inflation adjusted dollar is the unit most commonly used to graph it. Similar ways to say the same thing.
The LoAR (Kurzweil's Law of Accelerating Returns) shows remarkably steady improvement. It's not about space or power efficiency, just ops per $1,000, so transistor counts served as a very good proxy for a long time.
There's been sufficiently predictable progress that 80-100 TFLOPS in your pocket by 2035 is probably a solid bet, especially if a fully generative AI OS and platform catches on as a product. The LoAR frontier for compute in 2035 will be more advanced than the limits of prosumer/flagship handheld products like phones, so there's a bit of lag and variability.
> What if you had the equivalent of a frontier lab in your pocket? What's that do to the economy?
Well, these days people have the equivalent of a frontier lab from perhaps 40 years ago in their pocket. We can see what that has done to the economy, and try to extrapolate.
>None of that makes any sense and there’s no obvious path forward.
The top-end models with their high compute requirements probably don't, but there is value in lower-end models for sure.
After all, it's the AWS approach. Most AWS services are things you could easily get cheaper if you just rented an EC2 and set it up yourself. But because AWS offers a very simple setup, companies don't mind paying for it.
Except it's not. Data science in Python pretty much requires you to use numpy, so his mean/variance example is a dumb comparison: numpy has mean and variance functions built in for arrays.
Even when using raw Python as in his example, some syntax can be condensed quite a bit:
from collections import defaultdict

groups = defaultdict(list)
# One-liner grouping (list comprehension used purely for its side effect):
[groups[(row['species'], row['island'])].append(row['body_mass_g']) for row in filtered]
It takes the same amount of mental effort to learn Python/numpy as it does R. The difference is that the former lets you integrate your code into any other application.
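For reference, a minimal sketch of the grouped mean/variance in numpy, picking up the groups dict from above; using ddof=1 for sample variance is my assumption, use the default ddof=0 for population variance:

import numpy as np

for key, masses in groups.items():
    arr = np.asarray(masses, dtype=float)
    # Built-in reductions: mean and sample variance per (species, island)
    print(key, arr.mean(), arr.var(ddof=1))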
> Numpy has mean and variance functions built in for arrays.
Even outside of Numpy, the stdlib has the statistics package, which provides mean, variance, population/sample standard deviation, and other statistics functions for normal iterables. The attempt to make out-of-the-box Python code look bad was either deliberately constructed to exaggerate the problems complained of, or the product of a very convenient ignorance of the applicable parts of Python and its stdlib.
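For example, a quick sketch of the stdlib route, no third-party packages at all (the numbers are illustrative, not real data):

import statistics

masses = [3750, 3800, 3250, 3450]
print(statistics.mean(masses))      # arithmetic mean
print(statistics.variance(masses))  # sample variance
print(statistics.pstdev(masses))    # population standard deviation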
I dunno. Numpy has its own data types, its own collections, its own semantics, which are all different enough from Python that I think it's fair to consider it a DSL in its own right. It'd be one thing if it were just operator overloading to provide broadcasting for Python, but numpy's whole existence is to patch the various shortcomings Python has in DS.
I dunno why people are surprised by this. This is what you get with text->text. Reasoning doesn't work text->text.