gradus_ad's comments

The "world", implicit in your comment, is the liberal world order. This world order worked for America (ie Americans) until it became apparent that it did not, at least in a political and cultural sense (in a material sense it is of course still working perhaps even too well). The greatest champion of this world order, the EU bureaucratic class, views Americans' play for their sovereignty with bewilderment and casts it as renouncing its leadership in the world, yet leadership is not just blindly following a path to ruin but instead forging ahead down new and promising paths. In this sense the US is indeed still "leading" and it is the EU that stands firm in its intransigence and refuses to follow the leader. Yet it need not be this way and the EU very well could follow the US away from the excesses of hegemonic liberalism. There are signs of change in the air. This is politically interesting and the eventual outcome is not at all clear.

On the other hand, the emergence of China, India, and to a lesser extent Russia (as a puppet of the Chinese) upon the world stage as independent actors, out of the shadows of Western domination, is another way in which the US is "losing control" but this is much less politically interesting in the sense that it was an inevitable and expected outcome. There is nothing the US has done, is doing or could do that would diminish non-Western ambition and agitation for power.


Hegemonic liberalism is way, way, better than hegemonic illiberalism.

Hegemonic idealism is dangerous either way. We can pick and choose what works best.

Yes "we" can, but the difference is that in a liberal order "we" at least ostensibly represents the people, and in an illiberal order, "we" represents a naked power grab by whichever elite group currently has the reigns.

It's funny to think that no matter how our technology develops, cats will be right there along for the ride, completely ignorant of it all. It's humorously comforting to think of an interstellar civilization powered by fusion and AGI serving cats just as they're served now. Scratching posts on starships seem inevitable.

The domestication of cats happened because of the invention of farming.

If you store grain in a granary, it attracts a lot of insects, rodents, etc. Cats that could tolerate getting close to human settlements found a good food source. And humans liked this, because the cats protected the grain without eating it. So you can see why ancient agrarian societies like the Egyptians held cats in high esteem.

And despite only having a few thousand years to adapt to each other, it turns out cats and humans can understand each other and form emotional bonds pretty easily.

I imagine we'll see cats on spaceships of the future just like they were the norm on ships in the age of sail.


This seems like it could be a book.

Humans extinct for a billion years, AGI and robots tasked to feed and "take care of the cats".

I imagine entire cities, houses built, all empty save cat and humanform robot.


I would recommend the two episodes "Three Robots" and "Three Robots: Exit Strategies" from the anthology series Love, Death and Robots if you like this kind of humor.

You might like the game Stray. Here's the trailer: https://www.youtube.com/watch?v=kJawWyRUOBM

It's about a cat that lives in a city of robots long after humans are extinct.


In the puzzle game series The Talos Principle, intelligent robots (who were made to outlive humanity after a species-ending global pandemic) seem to have the exact same kind of affinity for caring for cats that humans do. It's actually really sweet and cute.

This was a minor plot point in that one Black Mirror episode with the robots on a tourism trip to Earth, lol

You mean Love, Death and Robots?

I'm sorry, yes, you're right. I misremembered which series I was thinking about.

"There will come soft rains" Ray Bradbury

The proliferation of nondeterministically generated code is here to stay. Part of our response must be more dynamic, more comprehensive and more realistic workload simulation and testing frameworks.

I disagree. I think we're testing it, and we haven't seen the worst of it yet.

And I think it's less about non-deterministic code (the code is actually still deterministic) but more about this new-fangled tool out there that finally allows non-coders to generate something that looks like it works. And in many cases it does.

Like a movie set. Viewed from the right angle it looks just right. Peek behind the curtain and it's all wood, thinly painted, and it's usually easier to rebuild from scratch than to add a layer on top.


Exactly that.

I suspect that we're going to witness a (further) fork within developers. Let's call them the PM-style developers on one side and the system-style developers on the other.

The PM-style developers will be using popular loosely/dynamically-typed languages because they're easy to generate and they'll give you prototypes quickly.

The system-style developers will be using stricter languages and type systems and/or lots of TDD because this will make it easier to catch the generated code's blind spots.

One can imagine that these will be two clearly distinct professions with distinct toolsets.


I actually think that the direct usage of AI will reduce in the system-style group (if it was ever large there).

There is a non-trivial cost in taking apart the AI code to ensure it's correct, even with tests. And I think it's easy to become slower than writing it from scratch.


FWIW, I'm clearly system-style. Given that my current company has an AI product, I'm dogfooding it, and I've found good uses for it, mostly for running quick experiments, as a rubber duck, or for solving simple issues in config files, Makefiles, etc.

It doesn't get to generate much of the code I'm shipping, though.


I just wanted to say how much I like that simile - I'm going to nick it for sure

Code has always been nondeterministic. Which engineer wrote it? What was their past experience? This just feels like we are accepting subpar quality because we have no good way to ensure the code we generate is reasonable and won't mayyyybe rm -rf our server as a fun easter egg.

Code written by humans has always been nondeterministic, but generated code has always been deterministic before now. Dealing with nondeterministically generated code is new.

Determinism vs. nondeterminism is not and has never been an issue. Also, all LLMs are 100% deterministic; what is non-deterministic are the sampling parameters used by the inference engine, which by the way can easily be made 100% deterministic by simply turning off things like batching. This is a matter for cloud-based API providers, since as the end user you don't have access to the inference engine; if you run any of your models locally in llama.cpp, turning off some server startup flags will get you deterministic results. Cloud-based API providers have no choice but to keep batching on, as they are serving millions of users and wasting precious VRAM slots on a single user is wasteful and stupid. See my code and video as evidence if you want to run any local LLM 100% deterministically: https://youtu.be/EyE5BrUut2o?t=1
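If you don't want to watch the video, here is a rough sketch of the idea using the llama-cpp-python bindings (this is not the exact code from the video; the model path and generation settings are placeholders):

    from llama_cpp import Llama

    # Load a local GGUF model for a single user: no batching, no concurrent requests.
    # The model path is a placeholder - point it at whatever GGUF file you have.
    llm = Llama(model_path="models/your-model.gguf", n_ctx=2048, seed=42, verbose=False)

    prompt = "Explain what a mutex is in one sentence."

    # Greedy decoding (temperature=0) is reproducible regardless of seed.
    # Per the claim above, a fixed seed should also make non-zero temperatures
    # reproducible, as long as nothing else is batched alongside your request.
    outputs = [
        llm(prompt, max_tokens=64, temperature=0.0)["choices"][0]["text"]
        for _ in range(3)
    ]

    print(all(o == outputs[0] for o in outputs))  # expect True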

That's not an interesting difference, from my point of view. The black box we all use is non-deterministic, period. It doesn't matter where on the inside the system stops being deterministic: if I hit the black box twice, I get two different replies. And that doesn't even matter, which you also said.

The more important property is that, unlike compilers, type checkers, linters, verifiers and tests, the output is unreliable. It comes with no guarantees.

One could be pedantic and argue that bugs affect all of the above. Or that cosmic rays make everything unreliable. Or that people are non deterministic. All true, but the rate of failure, measured in orders of magnitude, is vastly different.


My man, did you even check my video, did you even try the app? This is not bug related; nowhere did I say it was a bug. Batch processing is a FEATURE that is intentionally turned on in the inference engine by large-scale providers. That does not mean it has to be on. If they turn off batch processing, all LLM API calls will be 100% deterministic, but it will cost them more money to provide the service, as now you are stuck with one API call per GPU. "if I hit the black box twice, I get two different replies" - what you are saying here is 100% verifiably wrong. Just because someone chose to turn on a feature in the inference engine to save money does not mean LLMs are non-deterministic. LLMs are stateless: their weights are frozen, you never "run" an LLM, you can only sample it, just like a hologram. And the inference sampling settings you use are what determine the outcome.

Correct me if I'm wrong, but even with batch processing turned off, they are still only deterministic as long as you set the temperature to zero? Which also has the side-effect of decreasing creativity. But maybe there's a way to pass in a seed for the pseudo-random generator and restore determinism in this case as well. Determinism, in the sense of reproducible. But even if so, "determinism" means more than just mechanical reproducibility for most people - including parent, if you read their comment carefully. What they mean is: in some important way predictable for us humans. I.e. no completely WTF surprises, as LLMs are prone to produce once in a while, regardless of batch processing and temperature settings.

You can change ANY sampling parameter once batch processing is off and you will keep the deterministic behavior: temperature, repetition penalty, etc. I've got to say I'm a bit disappointed to see this on Hacker News, as I'd expect it from Reddit. The whole matter is handed to you on a silver platter: the video describes in detail how any sampling parameter can be used, and I provide the whole code open source so anyone can try it themselves without taking my claims as hearsay. Well, you can lead a horse to water, as they say...

> generated code has always been deterministic

Technically you are right… but in practice, no. Ask an LLM to do any reasonably complex task and you will get different results. This is because the model changes periodically and we have no control over the host system's source of entropy. It's effectively non-deterministic.


Agreed. It's a new programming paradigm that will put more pressure on API and framework design, to protect vibe developers from themselves.

I've seen a lot of startups that use AI to QA human work. How about the idea of using humans to QA AI work? A lot of interesting things might follow.

This feels a lot like the "humans must be ready at any time to take over from FSD" that Tesla is trying to push. With presumably similar results.

If it works 85% of the time, how soon do you catch that it is moving in the wrong direction? Are you having a standup every few minutes for it to review (edit) its work with you? Are you reviewing hundreds of thousands of lines of code every day?

It feels a bit like pouring cement or molten steel really fast: at best, it works, and you get things done way faster. Get it just a bit wrong, and your work is all messed up, as well as a lot of collateral damage. But I guess if you haven't shipped yet, it's ok to start over? How many different respins can you keep in your head before it all blends?


A large percentage (at least 50%) of the market for software developers will shift to lower paid jobs focused on managing, inspecting and testing the work that AI does. If a median software developer job paid $125k before, it'll shift to $65k-$85k type AI babysitting work after.

It's funny that I heard exactly this when I graduated university in the late 2000s:

> A large percentage (at least 50%) of the market for software developers will shift to lower paid jobs focused on managing, inspecting and testing the work that outsourced developers do. If a median software developer job paid $125k before, it'll shift to $65k-$85k type outsourced developer babysitting work after.


No thanks.

Sounds inhuman.

As an industry, we've been doing the same thing to people in almost every other sector of the workforce, since we began. Automation is just starting to come for us now, and a lot of us are really pissed off about it. All of a sudden, we're humanitarians.

> Automation is just starting to come for us now

This argument is common and facile: Software development has always been about "automating ourselves out of a job", whether in the broad sense of creating compilers and IDEs, or in the individual sense that you write some code and say: "Hey, I don't want to rewrite this again later, not even if I was being paid for my time, I'll make it into a reusable library."

> the same thing

The reverse: What pisses me off is how what's coming is not the same thing.

Customers are being sold a snake-oil product, and its adoption may well ruin things we've spent careers de-crappifying by making them consistent and repeatable and understandable. In the aftermath, some portion of my (continued) career will be diverted to cleaning up the lingering damage from it.


Nah, sounds like management, but I am repeating myself. In all seriousness, I have found myself having to carefully rein in some similar decisions. I don't want to get into details, but there are times I wonder if they understand how things really work or if people need some 'floor' level exposure before they just decree stuff.

Yes, but not like what you think. Programmers are going to look more like product managers with extra technical context.

AI is also great at looking for its own quality problems.

Yesterday on an entirely LLM generated codebase

Prompt: > SEARCH FOR ANTIPATTERNS

Found 17 antipatterns across the codebase:

And then what followed was a detailed list: about a third of them I thought were pretty important, a third were debatable, and the rest were either not important or effectively "this project isn't fully functional".

As an engineer, I didn't have to find code errors or fix code errors, I had to pick which errors were important and then give instructions to have them fixed.


> Programmers are going to look more like product managers with extra technical context.

The limit of a product manager, as "extra technical context" approaches infinity, is a programmer. Because the best, most specific way to specify extra technical context is just plain old code.


This is exactly why no code / low code solutions don’t really work. At the end of the day, there is irreducible technical complexity.

Yeah, don't rely on the LLM finding all the issues. Complex code like Swift concurrency tooling is just riddled with issues. I usually need to push line coverage to 100% and then let it loop on hanging tests until everything _seems_ to work.

(It’s been said that Swift concurrency is too hard for humans as well though)


I don't trust programmers to find all the issues either and in several shops I've been in "we should have tests" was a controversial argument.

A good software engineering system built around the top LLMs today is definitely competitive in quality to a mediocre software shop and 100x faster and 1000x cheaper.


Nondeterministic isn't the right word because LLM outputs are deterministic and the tokens created from those outputs can also be deterministic.

I agree that non-deterministic isn't the right word, because that's not the property we care about, but unless I'm strongly missing something LLM outputs are very much non-deterministic, both during the inference itself and when projecting the embeddings back into tokens.

I agree it isn't the main property we care about, we care about reliability.

But at least in its theoretical construction the LLM should be deterministic. It outputs a fixed probability distribution across tokens with no rng involvement.

We then sample from that fixed distribution non-deterministically for better performance or we use greedy decoding and get slightly worse performance in exchange for full determinism.
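As a toy illustration of that distinction (made-up logits and plain NumPy, nothing model-specific):

    import numpy as np

    # Made-up logits over a tiny vocabulary, standing in for one decoding step.
    vocab = ["cat", "dog", "fish", "bird"]
    logits = np.array([2.0, 1.5, 0.3, -1.0])

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    # The model's output: a fixed probability distribution (deterministic given the input).
    probs = softmax(logits)

    # Greedy decoding: always take the most likely token -> fully deterministic.
    greedy_token = vocab[int(np.argmax(probs))]

    # Sampling with temperature: draw from the distribution -> non-deterministic
    # unless you also fix the RNG seed.
    temperature = 0.8
    rng = np.random.default_rng()  # unseeded on purpose: different draws per run
    sampled_token = vocab[rng.choice(len(vocab), p=softmax(logits / temperature))]

    print("greedy: ", greedy_token)
    print("sampled:", sampled_token)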

Happy to be corrected if I am wrong about something.


Ah, I realize that I had misunderstood your earlier comment, my apologies and thanks for clarifying!

We're leaving my area of confidence, so take everything I write with a pinch of salt.

As far as I understand, indeed, each layer transforms a set of inputs into a probability distribution. However, if you wanted to compute entirely with probability distributions, you'd need the ability to compose these distributions across layers. Mathematically, it doesn't feel particularly complicated, but computationally, it feels like this adds several orders of magnitude of both space and time.


The industrial revolution is the most transformative event in the history of life since the Cambrian explosion. It's that significant.

It is also on track to be nearly as… impactful as the Permian extinction. That stuff cuts both ways unfortunately.

A hell of a lot of stuff survived the Permian extinction.

Now I'm not saying it's gonna be fun, but I'd bet a lot of money on humans and the species we find most useful surviving the next/current one.


> It is also on track to be nearly as… impactful as the Permian extinction.

why do you say that?


It was also an extremely lucky coincidence.

Or maybe they're more excited to see the male caregivers? Or maybe the male caregivers are louder themselves so they copy them? Or maybe...

If you read the paper, it suggests Turkish female caregivers are more vocal with their cats and understand their vocal cues more intuitively/don't need telling twice.

Maybe

"Greeting Vocalizations in [These 31] Domestic Cats Are More Frequent with Male Caregivers"


Maybe

""

You couldn't write a book containing the context needed to qualify a factual statement about any of these cats. It seems even the article authors couldn't be bothered, writing only 5 pages after failing to meet their nonsensical objective.


When I'm building out a new feature, I can churn through millions of tokens in Claude Code. And that's just me... Now think about Claude Code but integrated with Excel or Datadog, or whatever app could be improved through LLM integration. Think about the millions of office workers, beyond just software engineers, who will be running hundreds of thousands or millions of tokens per day through these tools.

Let's estimate 200 million office workers globally as TAM running an average of 250k tokens. That's 50 trillion tokens DAILY. Not sure what model provider profit per token is, but let's say it's .001 cents.

That's $500M per day in profit.
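Spelled out (these inputs are all guesses, as above):

    # Back-of-envelope using the guesses above; none of these inputs are real data.
    workers = 200_000_000            # office workers worldwide using LLM tools
    tokens_per_worker = 250_000      # tokens per worker per day
    profit_per_token = 0.001 / 100   # ".001 cents" expressed in dollars

    daily_tokens = workers * tokens_per_worker       # 5e13 = 50 trillion tokens
    daily_profit = daily_tokens * profit_per_token   # 5e8  = $500 million

    print(f"{daily_tokens:.1e} tokens/day -> ${daily_profit:,.0f}/day")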


I find it absurd to pay for tokens I can't control, predict or even check in any reasonable way. It literally amounts to a "pay whatever random amount the company asks you to pay" kind of contract.

I pay $100/mo for CC and have functionally unlimited tokens.

I find it irreplaceable.


I'm with you on the Claude Code example -- it matches my experience.

But I do think the important thing to look forward to is AI work which is totally detached from human intervention.


>When I'm building out a new feature, I can churn through millions of tokens in Claude code.

+

>Not sure what model provider profit per token is, but let's say it's .001 cents.

So you'd be willing to pay thousands for a new feature, right?


Currently there is no profit per token - quite a bit of loss per token, actually - and that's the problem. You're not going to make it up in volume.

Do you have a source for that? I'm especially interested in a source for Anthropic.

https://www.wsj.com/tech/ai/openai-anthropic-profitability-e...

Anthropic expects to break even in 2028. They’re all unprofitable now.


paywalled.

Are they unprofitable because they don't profit on inference, or because they reinvest all of the profit into scaling up?

Remember how long Amazon was unprofitable, by choice.


> Are they unprofitable because they don't profit on inference, or because they reinvest all of the profit into scaling up?

They are scaling up using VC money, not revenue. As far as profit on inference goes, it's hard to separate it out from training: they cannot, at any given time, simply stop training because that would kill any advantage they have 6 months down the line.

For all practical purposes, you can't look at their inference costs independent of the training cost; they need to keep spending on both if they want to continue doing inference.

> Remember how long Amazon was unprofitable, by choice.

That was a very different scenario - AMZ was not spending their revenue on land-grabbing, they were spending their revenue on long-lived infra, while AI companies now are spending VC investment, not revenue, on land-grabbing.

The difference between spending your revenue on short-lived infra (training a new model, acquiring GPUs) and long-lived infra is that with long-lived infra, at any time, even after 10+ years, you can stop expanding your infra and keep the resulting revenue as profit.

With short-lived infra (models, GPUs), you can't simply stop infra spending and collect profit from the revenue, because the infra reached end-of-life and needs to be replaced anyway.


Well the big labs certainly haven't intentionally tried to train away this emergent property... Not sure how "hey let's make the model disagree with the user more" would go over with leadership. Customer is always right, right?


The problem is asking for user preference leads to sycophantic responses


How will the Google/Anthropic/OpenAI's of the world make money on AI if open models are competitive with their models? What hurt open source in the past was its inability to keep up with the quality and feature depth of closed source competitors, but models seem to be reaching a performance plateau; the top open weight models are generally indistinguishable from the top private models.

Infrastructure owners with access to the cheapest energy will be the long run winners in AI.


>How will the Google/Anthropic/OpenAI's of the world make money on AI if open models are competitive with their models?

According to Google (or someone at Google), no organization has a moat on AI/LLMs [1]. But that does not mean it is not hugely profitable to provide it as SaaS even if you don't own the model, or as Model as a Service (MaaS). The extreme example is Amazon providing MongoDB API and services. Sure, they have their own proprietary DynamoDB, but for most people a scaled-up MongoDB is more than sufficient. Regardless of the brand or type of database being used, you pay tons of money to Amazon anyway to operate at scale.

Not everyone has the resources to host a SOTA AI model. On top of the tangible data-intensive resources, there are other intangible considerations. Just think how many companies or people host their own email servers now, even though the resources needed are far less than for hosting an AI/LLM model.

Google came up with the game-changing transformer in its own backyard, and OpenAI temporarily stole the show with the well-executed RLHF-based system of ChatGPT. Now the paid users are swinging back to Google with its arguably superior offering. Even Google now puts an AI summary at the top of its search results, free for all, above its paid advertising clients.

[1] Google: "We have no moat, and neither does OpenAI":

https://news.ycombinator.com/item?id=35813322


Hosting a SOTA AI model is something that can be separated well from the rest of your cloud deployments. So you can pretty much choose between lots of vendors and that means margins will probably not be that great.


That quote from Google is 2.5 years old.


I also cringed a bit at seeing a statement that old being cited, but all the events since then have only proved Google right, I'd say.

Improvements seem incremental and smaller. For all I care, I could still happily use sonnet 3.5.


Have they said differently since?


Undergrads at UC Berkeley are wearing vLLM t-shirts.


This is exactly why the CEO of Anthropic has been talking up "risks" from AI models and asking for legislation to regulate the industry.


He's talking about a completely different type of risk and regulation. It's about the job displacement risks, security and misuse concerns, and ethical and societal impact.

https://www.youtube.com/watch?v=aAPpQC-3EyE

https://www.youtube.com/watch?v=RhOB3g0yZ5k


> What hurt open source in the past was its inability to keep up with the quality and feature depth of closed source competitors

Quality was rarely the reason open source lagged in certain domains. Most of the time, open source solutions were technically superior. What actually hurt open source were structural forces, distribution advantages, and enterprise biases.

One could make an argument that open source solutions often lacked good UX historically, although that has changed drastically the past 20 years.


For most professional software, the open source options are toys. Is there anything like an open source DAW, for example? It's not because music producers are biased against open source, it's because the economics of open source are shitty unless you can figure out how to get a company to fund development.


> Is there anything like an open source DAW, for example?

Yes, Ardour. It’s no more a toy than KiCad or Blender.


People and companies trust OpenAI and Anthropic, rightly or wrongly, with hosting the models and keeping their company data secure. Don't underestimate the value of a scapegoat to point a finger at when things go wrong.


But they also trust cloud platforms like GCP to host models and store company data.

Why would a company use an expensive proprietary model on Vertex AI, for example, when they could use an open-source one on Vertex AI that is just as reliable for a fraction of the cost?

I think you are getting at the idea of branding, but branding is different from security or reliability.


Looking at and evaluating Kimi-2/DeepSeek vs the Gemini family (both through Vertex AI), it's not clear open source is always cheaper for the same quality.

And then we have to look at responsiveness: if the two models are qualitatively in the same ballpark, which one runs faster?


> Don't underestimate the value of a scapegoat to point a finger at when things go wrong.

Which is an interesting point in favour of the human employee, as you can only consolidate scapegoats so far up the chain before saying "it was the AI's fault" just looks like negligence.


Either...

Better (UX / ease of use)

Lock in (walled garden type thing)

Trust (If an AI is gonna have the level of insight into your personal data and control over your life, a lot of people will prefer to use a household name)


Or lobbying for regulations. You know, the "only American models are safe" kind of regulation.


> Trust (If an AI is gonna have the level of insight into your personal data and control over your life, a lot of people will prefer to use a household name)

Not Google, and not Amazon. Microsoft is a maybe.


People trust google with their data in search, gmail, docs, and android. That is quite a lot of personal info, and trust, already.

All they have to do is completely switch the google homepage to gemini one day.


The success of Facebook basically proves that public brand perception does not matter at all


Facebook itself still has a big problem with its lack of a youth audience, though. Zuck captured the boomers and older Gen X, which are the biggest demos of living people, however.


> Zuck captured the boomers and older Gen X, which are the biggest demos of living people however.

In the developed world. I'm not sure about globally.


I don't see what OpenAI's niche is supposed to be, other than role playing? Google seems like they'll be the AI utility company, and Anthropic seems like the go-to for the AI developer platform of the future.


Anthropic has RLed the shit out of their models to the extent that they give sub-par answers to general purpose questions. Google has great models but is institutionally incapable of building a cohesive product experience. They are literally shipping their org chart with Gemini (mediocre product), AI Overview (trash), AI Mode (outstanding but limited modality), Gemini for Google Workspace (steaming pile), Gemini on Android (meh), etc.

ChatGPT feels better to use, has the best implementation of memory, and is the best at learning your preferences for the style and detail of answers.


Gemini is not mediocre, have you used it lately?

https://www.vellum.ai/llm-leaderboard


RLed?


Reinforcement learning, I believe


> How will the Google/Anthropic/OpenAI's of the world make money on AI if open models are competitive with their models?

So a couple of things. There are going to be a handful of companies in the world with the infrastructure footprint and engineering org capable of running LLMs efficiently and at scale. You are never going to be able to run open models in your own infra in a way that is cost competitive with using their API.

Competition _between_ the largest AI companies _will_ drive API prices to essentially 0 profit margin, but none of those companies will care because they aren't primarily going to make money by selling the LLM API -- your usage of their API just subsidizes their infrastructure costs, and they'll use that infra to build products like chat gpt and claude, etc. Those products are their moat and will be where 90% of their profit comes from.

I am not sure why everyone is so obsessed with "moats" anyway. Why does gmail have so many users? Anybody can build an email app. For the same reason that people stick with gmail, people are going to stick with chatgpt. It's being integrated into every aspect of their lives. The switching costs for people are going to be immense.


Pure models clearly aren't the monetization strategy; using them on existing monetized surfaces is the core value.

Google would love a cheap, high-quality model on its surfaces. That just helps Google.


Hmmm but external models can easily operate on any "surface". For instance Claude Code simply reads and edits files and runs in a terminal. Photo editing apps just need a photo supplied to them. I don't think there's much juice to squeeze out of deeply integrated AI as AI by its nature exists above the application layer, in the same way that we exist above the application layer as users.


Gemini is the most used model on the planet per request.

All the facts point the other way from what you're suggesting here.


It’s convenience - it’s far easier to call an API than deploy a model to a VPC and configure networking, etc.

Given how often new models come out, it’s also easier to update an API call than constantly deploying model upgrades.

But in the long run, I hope open source wins out.


> How will the Google/Anthropic/OpenAI's of the world make money on AI if open models are competitive with their models?

They won't. Actually, even if open models aren't competitive, they still won't. Hasn't this been clear for a while already?

There's no moat in models. Investment in pure models has only been to chase AGI; all other investment (the majority, from Google, Amazon, etc.) has been in products using LLMs, not the models themselves.

This is not like the gold rush where the ones who made good money were the ones selling shovels; it's another kind of gold rush where you make money selling shovels but the gold itself is actually worthless.


> Infrastructure owners with access to the cheapest energy will be the long run winners in AI.

For a sufficiently low cost to orbit, that may well be found in space, giving Musk a rather large lead. Judging by his posts, he's currently obsessed with building AI satellite factories on the moon, the better to climb the Kardashev scale.


The performance bottleneck for space based computers is heat dissipation.

Earth based computers benefit from the existence of an atmosphere to pull cold air in from and send hot air out to.

A space data center would need to entirely rely on city sized heat sink fins.


For radiative cooling using aluminum, per 1000 watts at 300 kelvin: ~2.4m^2 area, ~4.8 liters volume, ~13kg weight. So a Starship (150k kg, re-usable) could carry about a megawatt of radiators per launch to LEO.
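As a sanity check on those numbers with the Stefan-Boltzmann law (assumptions here are mine, not the parent's: emissivity ~0.9, one-sided radiation to deep space, 2 mm aluminum sheet):

    # Stefan-Boltzmann sanity check for a 1 kW radiator at 300 K.
    # Assumptions (mine): emissivity 0.9, radiates from one side to ~0 K space,
    # 2 mm aluminum sheet at 2700 kg/m^3.
    sigma = 5.67e-8          # W / (m^2 K^4)
    emissivity = 0.9
    T = 300.0                # K
    power = 1000.0           # W

    flux = emissivity * sigma * T**4        # ~413 W/m^2
    area = power / flux                     # ~2.4 m^2

    thickness = 0.002                       # m
    density = 2700.0                        # kg/m^3
    volume_l = area * thickness * 1000      # ~4.8 liters
    mass_kg = area * thickness * density    # ~13 kg

    print(f"area {area:.1f} m^2, volume {volume_l:.1f} L, mass {mass_kg:.1f} kg")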

And aluminum is abundant in the lunar crust.


We are jumping pretty far ahead for a planet that can barely put two humans up there, but it is a great deal of my scifi dreams in one technology tree so I'll happily watch them try.


The grandfather comment is perhaps mixing up two things:

If launch costs are cheap enough, you can bring aluminum up from earth.

But once your in-space economy is developed enough, you might want to tap the moon or asteroids for resources.


And the presence of humans. Like with a lot of robotics, the devil is probably in the details. Very difficult to debug your robot factory while it's in orbit.

That was fun to write but also I am generally on board with humanity pushing robotics further into space.

I don't think an orbital AI datacentre makes much sense, as your chips will be obsolete so quickly that the capex spent getting it all up there would be better spent on buying the next chips to deploy on earth.


Well, _if_ they can get launch costs down to 100 dollars / kg or so, the economics might make sense.

Radiative cooling is really annoying, but it's also an engineering problem with a straightforward solution, if mass-in-orbit becomes cheap enough.

The main reason I see for having datacentres in orbit would be if power in orbit becomes a lot cheaper than power on earth. Cheap enough to make up for the more expensive cooling and cheap enough to make up for the launch costs.

Otherwise, manufacturing in orbit might make sense for certain products. I heard there are some optical fibres with superior properties that you can only make in near zero g.

I don't see a sane way to beam power from space to earth directly.


Yes, but how do you find the best open model? You check Google.


Kagi


Let me google "free alternative to kagi"


I call this the "Karl Marx Fallacy." It assumes a static basket of human wants and needs over time, leading to the conclusion that competition will inevitably erode all profit and lead to market collapse.

It ignores the reality of humans having memetic emotions, habits, affinities, differentiated use cases & social signaling needs, and the desire to always do more... constantly adding more layers of abstraction in fractal ways that evolve into bigger or more niche things.

5 years ago humans didn't know a desire for gaming GPUs would turn into AI. Now it's the fastest growing market.

Ask yourself: how did Google Search continue to make money after Bing's search results started benchmarking just as good?

Or: how did Apple continue to make money after Android opened up the market to commoditize mobile computing?

Etc. Etc.


This name is illogical, as Karl Marx did not commit this fallacy.


Yes, he did, and it was fundamental to his entire economic philosophy: https://en.wikipedia.org/wiki/Tendency_of_the_rate_of_profit...


No, he didn't, and your link has nothing to do with the fallacy you were talking about.


It absolutely does, and the fact that two Marxists (as I can see from your comment history) have a total inability to offer any actual rebuttal does not surprise me.


There's nothing to rebut. You made an assertion that's false on the face of it and posted a link to something totally unrelated. It's so wrong I don't even know which part you're misunderstanding.

But one of the core ideas of Marx's conception of history is that human needs, wants, and human nature itself are constantly in a state of change; that those needs and desires are in large part a product of the environment in which you live; and further that humans and human society in turn change their own environments, which in turn change human nature itself.


I'm not seeing anywhere in that page anything about an assumed static basket of human wants and needs. Maybe I missed it -- can you point out where you saw that?

Interesting, though, that per the very same article someone like Adam Smith concurred empirically with Marx's observation on the titular tendency of rates of profit to fall. This suggests to me it likely had some meat to it.


Without going too deep on it (I used to be a fan in university as a silly youth), the tendency of the rate of profit to fall is the key aspect of Marx's crisis theory.

Basically dude thought the competition inherent in capitalism would cause all profit to be competed to zero leading to an eventual 'crisis' and collapse of the capitalist means of production.

Implicit in this assumption is the idea that the things humans need and want changes/evolves in a predictable way, and not in a chaotic/fractal/reflexive way (which is what actually happens).

An eventual static basket of desired goods would be the only mechanism by which competition ever could compete profits to zero. If the basket is dynamic/reflexive/evolving, there's constantly new gaps opening between human desires and market offerings to arbitrage for profit. You can just look at the average profit margins of S&P500 companies over time to see they are not falling.

The further we get from subsistence worries (Adam Smith's invisible hand has pulled virtually the entire globe out of living in the dirt), the more divergent and higher abstraction these wants and needs become, and hence the profit opportunities are only increasing -- which is how the economy grows (no, it's not a fixed pie, another Marxian fallacy).


Again, Marx didn't see it as a fixed pie. That's the whole point behind his idea of absolute vs. relative surplus value: the pie isn't fixed. He absolutely saw the (at his time) modern capitalist economy as a revolutionary, dynamic force that brought about a great increase in the absolute amount of productive capacity and wealth in the world.


> How will the Google/Anthropic/OpenAI's of the world make money on AI if open models are competitive with their models?

hopefully they won't

and their titanic off-balance sheet investments will bankrupt them as they won't be able to produce any revenue


>Never

>Mentions one discrete event

Come on...


Maybe because I came into software not from an interest in software itself but from wanting to build things, I can't relate to the anti-LLM attitude. The danger in becoming a "crafter" rather than a "builder" is you lose the forest for the trees. You become more interested in the craft for the craft's sake than for its ability to get you from point A to point B in the best way.

Not that there's anything wrong with crafting, but for those of us who just care about building things, LLMs are an absolute asset.


Glad we agree that the “builder” without craft is just looking for the nearest exit.

