simianwords's comments | Hacker News

>Its baffling to see US engineers repeatedly being shat on by the company, and yet still retain belief in the chain of command.

But they are the best paid and everyone wants to move to the US.


But that's different.

I was paid exceptionally well when I was at a FAANG, but that didn't extend to me blindly following my leadership team into the same obvious bear trap every 6 months.


Blindly following, or just paying lip service to keep being paid? How do you know somebody is a cynic if they don't express their contempt openly?

I'd argue that expressing one's contempt openly is probably a sign of optimism more than anything - it shows you believe that your words might actually affect the direction of the organization rather than just getting you fired for disagreeing with your pointy-haired boss.

A lot of us expressed our contempt pretty damn openly in FAANG. It didn't seem like a barrier to getting paid

I made posters openly mocking the pricks in charge. It didn't stop me getting paid.

Improved morale significantly.


> everyone wants to move to the US

I think this is only true within certain bubbles. Within my bubble there’s a considerable amount of cynicism towards the US. No one wants to move to the US because of gestures broadly.


Yours is the bubble. Reality is that most people want to move to the US and are happy there.

> But they are the best paid and everyone wants to move to the US.

Not really, I prefer less pay but a way more affordable cost of living over a dying country rapidly moving into fascism. What's your money worth compared to a life worth living, without tyrannical maniacs ruling over your every move? Fuck the USA.


Sure, but most don't feel that way and simply prefer the higher salary.

This argument always felt insincere to me. What power do big tech companies have and why do you have a problem with it? They are simply providing a service you didn’t have access to.

I remember a time when users had a great deal more control over their computers. Big tech companies are the ones who used their power to take that control away. You, my friend, are the insincere one.

If you’re young enough not to remember a time before forced automatic updates that break things, locked devices unable to run software other than that blessed by megacorps, etc. it would do you well to seek out a history lesson.


For some context, this is a long-time Googler whose feats include major contributions to Go and co-creating UTF-8.

To call him the Oppenheimer of Gemini would be overly dramatic. But he definitely had access to the Manhattan project.

>What power do big tech companies have and why do you have a problem with

Do you want the gist of the last 20 years or so, or are you just being rhetorical? I'm sure there will be much literature over time that will dissect such a question to its atoms. Whether it be a cautionary tale or a retrospective of how a part of society fell? Well, we still have time to write that story.


Rob Pike is not a 'Googler' by birth or fame or identity. He was at Bell Labs and was on the team that created Unix, led the team creating Plan 9, co-created UTF-8, and did a bunch more - all long before Google existed. He was a legend before he deigned to join them and lend them his credibility.

I was gonna say! Working at Bell Labs is a LOT more prestigious (and less humiliating) than working for Google, an advertising company.

It's like the old joke from Mad Magazine:

The Beatles? Weren't they Paul McCartney's backup band before Wings?


In fairness, was Bell Labs, part of (or funded by) AT&T, a phone monopoly, any less corporate than Google's home for genius engineers?

Telephony is much more important to society than advertisement.

I know where they make money, but calling them an advertising company is just a jab. Ha ha, but that doesn't describe Google, like them or not.

I wonder where AT&T made profits and where, like any business, they broke even or had loss leaders. IIRC consumer telephone service was not profitable.


Eh, and it was arguably a mistake to let him force Go on the rest of the organisation by way of starpower.

"force" seems a bit strong, as I remember it.

Yeah, I remember it being a fourth option alongside the others but I quit just before Google lost its serifs and its soul


Just to note: these companies control infrastructure (cloud, app stores, platforms, hardware certification, etc.). That’s a form of structural power, independent of whether the services are useful. People can disagree about how concerning that is, but it’s not accurate to say there’s no power dynamic here.

By this logic there is no corporation or entity that provides anything other than basic food, shelter and medical care that could be criticized - they're all just providing something you don't need and don't have access to without them, right?

> What power do big tech companies have

Aftermarket control, for one. You buy an Android/iPhone or Mac/Windows device and get a "free" OS along with it. Then, your attention subsidizes the device through advertising, bundled services and cartel-style anti-competitive price fixing. OEMs have no motivation not to harm the market in this way, and users aren't entitled to a solution besides deluding themselves into thinking the grass really is greener on the other side.

What power did Microsoft wield against Netscape? They could alter the deal, and make Netscape pray it wasn't altered further.


Umm, are you being serious? Just look at the tech company titans in this photo of the Trump inauguration - they are literally a stand-in for Putin's oligarchs at this point.

https://www.livenowfox.com/news/billionaires-trump-inaugurat...


No one's saying this, but it's around 40% costlier than the previous Codex model. The price change is important.


Because the best value is from the subscription, where the price is stable.


From a couple hours of usage in the CLI, 5.2-codex seems to burn through my plan's limit noticeably faster than 5.1-codex. So I guess the usage limit is a set dollar amount of API credits under the hood.


I would like to challenge this claim. I think LLMs are maybe accurate enough that we don't need to check every line and remember everything. High-level design is enough.


I've been tasked with doing a very superficial review of a codebase produced by an adult who purports to have decades of database/backend experience with the assistance of a well-known agent.

While skimming tests for the python backend, I spotted the following:

    @patch.dict(os.environ, {"ENVIRONMENT": "production"})
    def test_settings_environment_from_env(self) -> None:
        """Test environment setting from env var."""
        from importlib import reload

        import app.config

        reload(app.config)

        # Settings should use env var
        assert os.environ.get("ENVIRONMENT") == "production"
This isn't an outlier. There are smells everywhere.
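
To spell out the smell: the test patches the environment and reloads app.config, but then only asserts the env var it just patched, so the config code is never actually checked. A version that exercised the settings would look more like this (assuming the same surrounding imports and test class as the original snippet, and guessing that app.config exposes a settings object with an environment field - that part is my assumption):

    @patch.dict(os.environ, {"ENVIRONMENT": "production"})
    def test_settings_environment_from_env(self) -> None:
        """Test environment setting from env var."""
        from importlib import reload

        import app.config

        reload(app.config)

        # Assert against the reloaded settings object, not the env var we just set
        assert app.config.settings.environment == "production"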


If it is so obvious to you that there is a smell here then an agent would have caught it. Try it yourself.


I have plenty of anecdata that counters your anecdata.

LLMs can generate code that works. That much is true. You can generate sufficiently complex projects that simply run on the first (or second) try. You can even get the LLM to write tests for the code. You can prompt it for 100% test coverage and it will provide you exactly what you want.

But that doesn't mean OP isn't correct. First, you shouldn't be remembering everything. If you find yourself remembering everything, your project is either small (I'd guess less than 1000 lines) or you are overburdened and need help. Reasoning, logically, through code you write can be done JIT as you're writing the code. LLMs even suffer from the same problem. Instead of calling it "having to remember too much", we refer to it as a quantity called "context window". The only problem is the LLM won't prompt you telling you that its context window is so full it can't do its job properly. A human will.

I think an engineer should always be reasoning about their code. They should be especially suspicious of LLM generated code. Maybe I'm alone but if I use an LLM to generate code I will review it and typically end up modifying it. I find even prompting with something like "the code you write should be maintainable by other engineers" doesn't produce good value.


My jaw hit the table when I read that. Just checking here but, are you being serious?


I absolutely believe this and follow what I said to an extent. You don't need to triple-check every line of code and deeply understand what it has done - just the high-level design.

I usually skim through the code (spotting some issues, like: are they using a modern version of the language?), check the high-level design (like which interfaces), and do manual testing. That is more than enough.


I have a specific prediction that I want to document here.

There will come a new UI framework/protocol, maybe something over HTML/CSS/JS, that works within a chat UI context for ChatGPT (or other LLM) integrations.

For example, if you have an ecommerce app or website and want to integrate it with ChatGPT, then you will have to develop on the new UI primitives. The primitives might include carousels, lists, tables and media embeds. Crucially, natural language will be used to pick and choose these primitives and combine them in the UI (with ChatGPT deciding how).

Thinking backwards, I want my app to be displayed in ChatGPT with maximum flexibility for the user (meaning elements can be re-arranged according to context) but also enough constraint that I can have some control over the layout. That's the problem I think will be solved.
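
To make that concrete, I imagine the app handing the chat client a declarative description of primitives that the model is free to re-arrange. Everything below is invented just to show the shape I have in mind, not any real spec:

    # Hypothetical payload an ecommerce app might return to the chat client.
    # Field names are made up; a real spec would define the primitives.
    product_carousel = {
        "type": "carousel",  # one of a fixed set of primitives
        "items": [
            {"title": "Trail Shoe X", "image": "https://example.com/x.jpg", "price": "$89"},
            {"title": "Trail Shoe Y", "image": "https://example.com/y.jpg", "price": "$99"},
        ],
        # Constraints give the app some control over layout,
        # while the model decides where the component goes in the reply.
        "constraints": {"max_visible_items": 3},
    }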


Google literally just released this on their GitHub. It must be in the ether.


Right https://developers.googleblog.com/introducing-a2ui-an-open-p...

I swear I had made this prediction quite a while back but thanks for pointing it out :D


I find that technology really exciting. Partly because it’s a polished and comprehensive version of something I was implementing around my MCP cluster anyways.

Mostly, though, because it seems like we’re mere minutes away from having Star Trek style LCARS adaptable GUIs managed by an AI computer system simultaneously so smart it runs mission critical operations yet so dumb we have to remind it that we want our tea “hot” five times a day.

It’s happening. We’re gonna be living in the future!


Do you have a link ready or do you know the name of the project?



I wonder what ChatGPT will do with this - will it adopt it or make its own framework?


https://blog.modelcontextprotocol.io/posts/2025-11-21-mcp-ap...

It’s going to be built into MCP and will be supported by Anthropic, OpenAI, or anyone else that supports this MCP spec.


Zero sum mentality is tiring!


Unbridled AI mania is as well


The mentality I actually have goes beyond zero sum. I get zero and Sam gets all. Tell me why that's wrong.


you get chatgpt


Who cares if chatgpt can write my essay if writing my essay is no longer worth doing?

Do you think programming Doom from scratch in 2025 would give the programmer the same sense of satisfaction and material security as it did in the 90's? Maybe technology progressing actually devalues the rewards for your outputs?

And saying "chatgpt" is pretty rich considering it's not at all clear at this point whether the societal benefits outweigh the negatives.


People like ChatGPT even if you don’t.


And..? People also hate ChatGPT even if you don't.

Do you think everyone in the world thinks exactly the same way as you do or something? Some people actually enjoy exerting their talents to create things.


> Do you think everyone in the world thinks exactly the same way as you do or something?

"Yes" "We are all the same". Monty Python


That’s the good part! Sam Altman provides ChatGPT for people who like it


Sam Altman provides ChatGPT for everybody. You seem not to be reading what I've written.


Claude Code and other AI coding tools must have a *mandatory* hook for verification.

For front end, the verification is making sure that the UI looks as expected (can be verified by an image model) and that clicking certain buttons results in certain things (can be verified by a ChatGPT agent, but it's not public, I guess).

For back end it will involve firing API requests one by one and verifying the results.

To make this easier, we need to somehow give Claude or whatever agent an environment to run these verifications in, and this is the gap that is missing. Claude Code and Codex should now start shipping verification environments that make it easy for them to verify frontend and backend tasks, and I think Antigravity already helps a bit here.
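
For the backend half, the verification I have in mind is basically a script the agent can run against a test environment, firing requests and checking the results. A rough sketch (the base URL, endpoints and expected values are placeholders, not any real API):

    # Rough sketch of a backend verification step an agent could run.
    # BASE_URL and the /orders endpoint are placeholders for illustration.
    import requests

    BASE_URL = "http://localhost:8080"

    def verify_create_and_fetch_order() -> None:
        # Fire the write request
        resp = requests.post(f"{BASE_URL}/orders", json={"sku": "ABC-1", "qty": 2})
        assert resp.status_code == 201, resp.text
        order_id = resp.json()["id"]

        # Verify the read path returns what was just written
        resp = requests.get(f"{BASE_URL}/orders/{order_id}")
        assert resp.status_code == 200
        assert resp.json()["qty"] == 2

    if __name__ == "__main__":
        verify_create_and_fetch_order()
        print("backend verification passed")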

------

The thing about backend verification is that it is different in different companies and requires a custom implementation that can't easily be shared across companies. Each company has its own way to deploy stuff.

Imagine a concrete task like creating a new service that reads from a data stream, runs transformations, puts it in another data stream where another new service consumes the transformed data and puts it into an AWS database like Aurora.

    stream -> service (transforms) -> stream -> service -> Aurora

To one-shot this with Claude Code, it must know everything about the company:

- How does one consume streams in the company? Schema registry?

- How does one create a new service and register dependencies? How does one deploy it to the test environment and to production?

- How does one even create an Aurora DB? Request approvals, IAM roles, etc.?

My question is: what would it take for Claude Code to one-shot this? At the code level it is not too hard and it can fit in the context window easily, but the *main* problem is the fragmented processes in creating the infra and the operations behind it, which are human-based now (and need not be!).

-----

My prediction is that companies will make something like a new "agent" environment where all these processes (that used to require a human) can be done by an agent without human intervention.
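
Concretely, I imagine that environment exposing each of those processes as a callable step the agent can drive end to end. None of these functions exist anywhere; the names are invented just to show the shape:

    # Purely hypothetical sketch of company processes exposed as agent-callable steps.

    def provision_service(name: str, team: str) -> str:
        """Register a new service, set up its repo and CI, return a service id."""
        ...

    def create_stream_consumer(service_id: str, stream: str, schema: str) -> None:
        """Wire the service to a stream using the schema registry entry."""
        ...

    def create_aurora_db(service_id: str, db_name: str) -> str:
        """Request the database plus the IAM roles/approvals, return a connection ref."""
        ...

    def deploy(service_id: str, environment: str) -> None:
        """Deploy the service to 'test' or 'production' through the standard pipeline."""
        ...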

I'm thinking of other solutions here, but if anyone can figure it out, please tell!


Whose money is it?


Taxpayers'.


Taxpayers won't get a refund whether the data center is built or not.


Why not?

Wouldn't it be great if we could get child care, education, infrastructure, housing, care of the elderly and so on instead?


Why do they want to open data centres in this area though? Seems like there might be better places. What's the incentive for this particular place?


Is this what NIMBY means?


If so, then call me a NIMBY. Not in Chandler, but in Gilbert, the town next door. No need for data centers here or anywhere else around here either.


where would you like them to be?

