Benchmarks aren't broken; the models can learn anything. If we give them true real-world data (e.g. from a physics engine), they will learn the real world. We are going to see artificial general intelligence in our lifetime.
Not a junior engineer in a developed country, but what was previously an offshore junior engineer tasked with the repetitive work that was too costly to do with Western labor.
Can you clarify "dynamic agentic workflows"? I use their agent node and it works great, but what are you trying to do with it? FYI, they just added native support for MCP in the latest release, both for exposing their workflows as an MCP server and for consuming MCP services easily via their MCP client tool:
https://docs.n8n.io/integrations/builtin/core-nodes/n8n-node...
https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-n...
That should make things interesting, because they have a fairly extensive library of templates that can now easily be converted to MCP servers and invoked agentically by LLMs.
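To make that concrete, here's a minimal sketch of what consuming an n8n workflow exposed through the MCP server side might look like from a Python client, using the official mcp SDK over SSE. The endpoint URL, tool name, and arguments are made up for illustration, and any authentication n8n requires is omitted:

    # Minimal sketch: call an n8n-exposed MCP server from Python.
    # Assumes a hypothetical SSE endpoint; uses the official `mcp` SDK.
    import asyncio

    from mcp import ClientSession
    from mcp.client.sse import sse_client

    N8N_MCP_URL = "https://n8n.example.com/mcp/abc123/sse"  # hypothetical endpoint

    async def main():
        async with sse_client(N8N_MCP_URL) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()

                # Workflows/templates exposed by the server show up as tools.
                tools = await session.list_tools()
                print([t.name for t in tools.tools])

                # Invoke one of them; tool name and arguments are hypothetical.
                result = await session.call_tool(
                    "lookup_customer",
                    arguments={"email": "jane@example.com"},
                )
                print(result.content)

    asyncio.run(main())

An LLM agent would do the same thing via its MCP client integration rather than hand-written calls; the point is just that each template becomes a tool the model can discover and invoke.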
Honestly, o3 has completely blown my mind in terms of its ability to come up with useful abstractions beyond what I would normally build. Most people claiming LLMs are limited just aren't using the tools enough, and can't see the trajectory of increasing ability.
> Most people claiming LLMs are limited just aren't using the tools enough
The old quote might apply:
~"XML is like violence. If it's not working for you, you need to use more of it".
(I think this is from Tim Bray -- it was certainly in his .signature for a while -- but oddly a quick web search doesn't give me anything authoritative. I asked Gemma3, which suggests Drew Conroy instead)
Living in NYC, I have been around a TON of venture-backed startups with the classic non-technical CEO, technical CTO pairing. Some HUGE percentage of startups see the CTO fired once the tech stack and revenue are stabilized.
The incentive from the CEO's perspective to remove a contender, as well as claw back the equity, is huge. In the early stage the CTO is the most critical, but after real traction they can be replaced far more easily than most want to admit.
My advice for technical founders is to always place themselves first, from a legal and organizational perspective. For a technical founder with social skills, a non-technical founder brings very little value relative to their vesting in the early stage.
Once the business hits a certain level of revenue and the MVP is finished, whoever is the face of the company, i.e. the CEO, has way more power. A CTO who has shipped a complete product that's getting paying customers can be replaced with an engineering manager. But not before.
Getting the product to some level of completion is a monumental lift.
>> If the technical founder can be replaced so easily, how does it follow that the non-technical founder is less valuable?
Because at that point, you can raise VC cash and hire for the job instead. The idea has been vetted. You can theoretically even rebuild the entire codebase from scratch just by looking at the existing app. The CTO should maintain equal voting rights for as long as possible.
Trying to reduce complex decisions to "iq" is by itself a marker of a lack of knowledge and a desperate attempt to make yourself feel superior to others.
IQ != intelligence
Anyway, aside from that - if it's not a bubble, there should be a sustainable business model, meaning one that doesn't lose you ten times the money you make. So, where is it? Which company, besides Nvidia, has actually created a sustainable business model based on AI? Where is its moat?
DeepSeek showed us that right now, OpenAI doesn't have any moat to speak of and can be essentially copied. Not exactly great for future profits.
>Trying to reduce complex decisions to "iq" is by itself a marker of a lack of knowledge and a desperate attempt to make yourself feel superior to others.
>IQ != intelligence
Pot, meet kettle.
IQ might not actually be measuring intelligence, whatever that might mean, but it's highly correlated with various things that are generally agreed to be indicators of intelligence, e.g. educational attainment or performance on standardized tests. For something as woolly as "intelligence", IQ is pretty much as close as we can get, short of claiming "it can't be measured exactly, so we're not even going to try to quantify it". Moreover, it's pretty obvious that the parent commenter is using "iq" as a shorthand for intelligence, not referring to the results of a test that has to be administered by a trained professional and that almost nobody knows their actual score on.
The commenter you replied to made a bad take, but you're basically doing the very thing you're trying to decry, by trying to viciously attack him with accusations of "a lack of knowledge and a desperate attempt to make yourself feel superior to others".
Intelligence is wooly. Making a quotient for it doesn't make it less so.
Using IQ in this fashion, as with many initialisms, can be a means of obfuscating biases, ambiguating, dog whistling, covering one's ass, preening, and discounting nuance. Yes, that is all quite pedantic. But I still judge when I see people use them as crutches of (in)articulate communication.
If you have something to say, say it clearly and precisely and embrace the nuance.
You could instead provide an argument for why it is not a bubble. For example, that we are perhaps one breakthrough away from something approximating AGI, etc.
Though the irony is that I myself don't follow that advice, so a man is full of contradictions. He wants to be seen as one thing when in reality he is something else entirely.
My wife and I both work on and with "AI" tools and have for some time. It sure looks like a hype bubble. The stuff's... OK, in limited ways, but total shit if you start trying to use it to revolutionize anything, even in small ways. This has not meaningfully changed in the last couple years, it looks like the whole approach is just slowly converging on a local maximum that's nowhere near what the hype-meisters (Altman) were "warning" (lying, to hype their product) us about.
It continues to be true that the spaces where it's best suited to have a great effect on productivity are harmful ones, like spam and scams.