I have investigated taking Amtrak for a family trip to do something different. "The journey is the destination" or something like that. I was branding it "slow travel" to the family so we could use it as a sort of modern life/digital detox. I also looked into a trans-Atlantic passage on the QM2.
I'm sad to report that renting a family bedroom or two joined bedrooms on Amtrak for a journey on, say, the California Zephyr didn't pencil out. It is costlier than flying (about $2,000 vs. $1,600 at the low end, respectively). Even if you account for the cost of staying two extra nights at the destination, it about breaks even.
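A rough back-of-the-envelope, where the fares are the low-end quotes above and the hotel rate is my own assumption:

```python
# Break-even check for rail vs. fly-plus-hotel; the $200/night hotel rate
# is an assumed figure, the fares are the low-end quotes above.
rail_fare = 2000   # family bedroom(s) on the Zephyr, low end
air_fare = 1600    # flights for the family, low end
extra_nights = 2   # nights the slower train "absorbs" as rolling lodging
hotel_rate = 200   # assumed nightly rate at the destination

fly_total = air_fare + extra_nights * hotel_rate
print(f"rail: ${rail_fare}, fly + hotel: ${fly_total}")  # rail: $2000, fly + hotel: $2000
```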
With children I don't want to risk the days of travel becoming an ordeal as opposed to hours of flight time. The "digital detox" might quickly go sideways and require hours of screentime pacifiers. Maybe when they are older.
Happily the QM2 actually made financial sense and there would be more room to move about and explore the ship.
I think rail travel makes the most sense in the Acela context the article opened with - routes between cities that take less than a day. For cross-continent travel the time savings of air travel make rail travel a harder case to argue.
The point of cross-continent rail travel is not to be cheaper than air at all; it is about seeing and enjoying the country and the route. There is no easier or cheaper way to do that.
- A road trip would be more expensive (fuel, hotels, and maintenance/rental), more strenuous, and less safe given the number of miles to be driven.
- There is little to see on a cruise unless you are near a shore, or from a plane flying at cruising altitude well above the clouds.
While times have changed and it is a lot harder for parents now, I cannot help but remember the cross-country train trips of my childhood, just sitting at the window with nothing but a book or magazine, or conversations with fellow passengers. It was a formative life experience even at quite a young age. That wasn't so long ago, and my generation was just as addicted to tech, but we were limited to a desktop with a modem.
---
If you want to see, and show the kids, the size, complex geography, and beauty of the country they will inherit, in whatever limited time screen distractions allow, I don't think there is any better way to do it.
> The point of cross-continent rail travel is not to be cheaper than air at all; it is about seeing and enjoying the country and the route. There is no easier or cheaper way to do that.
Amtrak isn't useful for that. To see the continent you need to get off the train for a few hours here and there to see something. That means flexible tickets, more trains so you don't have to spend a day in a small town with three hours of entertainment, and enough spare capacity that you can make a last-minute decision to see some little tourist trap for the fun of it, knowing you can catch the next train.
> A road trip would be more expensive (fuel, hotels, and maintenance/rental),
It very much depends. If you are single, Amtrak is cheaper (coach seats). A family is a lot cheaper to drive, since most of the costs are fixed regardless of headcount: you likely own the car and are making payments anyway, gas is the same for one person or a full car, and hotels are rented per room. On my last trip I needed a rental car to get to a family reunion an hour from the station; the cost of the rental alone would have paid for gas and hotels to drive my own car. (The strenuous miles are why we took the train anyway, but it was more expensive than driving.)
> There is little to see on a cruise unless you are near a shore
I've never been on that type of cruise (they exist, just not what I've been on). On the cruises I've taken, the sea days were near shore, taking in the beautiful scenery (you don't take an Alaska cruise for the ports; you take it to watch the shore on sea days), or the ship hops between islands at night so you are at a port all day (though next time I think I'd get a resort and stay on one island). Beware.
Amtrak is often a great choice for getting around. However, there are problems, and they are not to be overlooked.
It is a 12-day one-way trip that includes plenty of overnight non-train stays (i.e., hotels), full-day sightseeing stops, etc.
You can mix in local car rentals to take extended side trips, or add much longer hop-on/hop-off breaks if you plan to do so.
There are national-park-themed trips specifically, a number of great regional options, and other cross-country journeys like LA to NOLA.
It will never be a perfect fit for your specific tastes and needs; no public transit can ever be. However, that doesn't mean it is not a great option for a traveling family vacation with sightseeing and breaks, where you can actually spend quality time with the family rather than staring at the road all day while everyone else is on their phone.
Cross country rail journeys will always be the domain of weirdo railfans (I say, having ridden many of them many times). Flying is just too economical past the first few hundred miles.
However, we live along the Surfliner route, and for weekend trips it's fantastic. It's a 1-3 hour penalty versus driving depending on which city we're going to, but the kids vastly prefer it because they're not strapped in and we can all interact.
The US should focus on medium speed rail (100-155mph). It is easier to upgrade existing track than build new high speed track. There are lots of routes that aren't worth doing for HSR but would be at slower speed.
A good example is the Amtrak Cascades, which reaches 80 mph. The rolling stock can reach 125 mph. High-speed rail would be nice, but Portland, Seattle, and Vancouver may not be big enough to support it.
I disagree. The US should focus on routes where there is ample reason to believe there would be high demand for true high-speed rail (280 km/h average speed, including stops). DC to Boston via NYC, for example: there is every reason to believe this could pay for itself running 8 trains per hour all day. Once we have that, there are lots of other cities that can be connected, and as the network grows the whole becomes more useful. East of the Mississippi, the US is about as densely populated as Europe.
If you already have a route in place, using it is cheaper, but often you are stuck with decisions that made sense in 1850, when trains didn't go very fast. Where you are building new track, it should always be built to 350 km/h standards (you run at 300 km/h, but build to a higher standard just in case you need to run faster to make up time at the cost of efficiency). There are many towns of 50,000 or so people that you wouldn't build new track to, but where track already exists, running slower trains makes sense.
The US is physically big enough that coast to coast would probably always be a fairly niche pursuit. New York to SF, say, is about 4,700 km. At 300 km/h, assuming no stops, that is over 15 hours. Most people won't want to do that.
Seattle to SF, however, is only about 1,300 km, so a bit over 4 hours under ideal conditions. At that point, it's probably quicker than a plane (no need for the whole getting to and from the airport thing, or the security, or the inevitable delays).
These days, the flying sardine tin often beats out train travel on price. With more adoption, I'll bet the price difference would narrow. The comfort factor alone means I'll take the train over flying every time it's feasible. Even coach on the Northeast Regional is so much nicer than flying, and you're usually a lot closer to where you actually want to be when you get off.
Depends on where you are going. For my family vacation, a sleeper for 4 is cheaper than flying by a lot (I live in a city with high-priced air travel; I would save money driving to Chicago despite the higher parking costs). However, I have 5 people going, so it doesn't work out. (It doesn't help that Amtrak doesn't suggest options like two rooms.)
We went coach on Amtrak, which was cheaper and more comfortable than flying. I'd do that again.
Amtrak would benefit from a coach-class sleeper, like they have in India or in Eastern Europe. They just need coach benches that convert to beds. If, for a reasonable price, you could lie flat at night behind a little curtain, like you can in e.g. Indian Railways 2nd Class, it would change the game completely. Without that, you can only travel comfortably during the day, and trips are limited to about eight hours for the non-masochist. With it, cross country would be fine. It doesn't seem that hard. Lots of other railways do it.
Oh, don't get me wrong, during the day it's the most comfortable way to travel. But the ability to lie flat, even in fare classes that aren't too fancy, like they have in India, counts for a lot when you're traveling overnight.
I've thought trains are cool ever since I was a kid reading about the Silver Streak and the Orient Express, so every now and then I look into travelling by train. Unfortunately, Amtrak is like someone was tasked with making train travel as inconvenient and expensive as possible to make the idea of state-funded rail look bad. It's so bad someone wrote a book about it, called "Derailed."
Yes. Perhaps it makes more sense for people "travelling", i.e., exploring the world, where the fact that it doubles as a night's accommodation makes it a savings, and speed is not an issue.
It was always my understanding that software careers are shorter than other technical careers, and the higher wages compensate for this. More than compensate, if you invest early.
If by FIRE you mean retire in your 50s, I don't think that's an aspiration. That should be an expectation. You might be able to work a full career in this industry, but I wouldn't plan on it.
Most people don't have the temperament for FIRE. You have to live below your means, save a double digit percentage consistently, and invest.
And you have to do it for decades. You need to be able to tough it out through the worst of times (like the dot-com bubble, financial crisis, covid, and random political chaos like tariffs.)
You have to tune out the noise and always remember that on a long enough timeline, the market only goes up. And if you think it's "different" this time, it won't be for long.
The economy functioned without large numbers of office workers in the past, and there are regions of the country where this is still the case. To an extent they will sell their services to each other. To another extent they will be selling to the owners of AI (imagine an electrician building out a data center). The economic surplus will still be there - it will be larger in fact - and there will still be a need for their services. The players involved will change however.
“In the past” trades did not enjoy nearly the income levels they do now. The rise in demand for their services and the corresponding rise in their compensation are linked to the wealth of the other half of the economy.
I'm not sure the comparison is apples to apples, but this article claims the current AI investment boom pales compared to the railroad investment boom in the 19th century.
> Next, Kedrosky bestows a 2x multiplier to this imputed AI CapEx level, which equates to a $624 billion positive impact on the US GDP. Based on an estimated US GDP figure of $30 trillion, AI CapEx is expected to amount to 2.08 percent of the US GDP!
> Do note that peak spending on rail roads eventually amounted to ~20 percent of the US GDP in the 19th century. This means that the ongoing AI CapEx boom has lots of legroom to run before it reaches parity with the rail road boom of that bygone era.
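(The quoted 2.08 percent is straightforward to reproduce; the imputed CapEx implied by the article's numbers is $312 billion before the 2x multiplier.)

```python
# Reproducing the quoted figure: 2x multiplier on imputed AI CapEx,
# as a share of an estimated $30 trillion US GDP.
imputed_capex = 312e9          # implied: $624B impact / 2x multiplier
gdp = 30e12
print(f"{2 * imputed_capex / gdp:.2%}")  # 2.08%
```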
> Do note that peak spending on rail roads eventually amounted to ~20 percent of the US GDP in the 19th century.
Has anyone found the source for that 20%? Here's a paper I found:
> Between 1848 and 1854, railroad investment, in these and in preceding years, contributed to 4.31% of GDP. Overall, the 1850s are the period in which railroad investment had the most substantial contribution to economic conditions, 2.93% of GDP, relative to 2.51% during the 1840s and 2.49% during the 1830s, driven by the much larger investment volumes during the period.
The first sentence isn't clear to me. Is 4.31 > 2.93 because the average was higher from 1848-1854 than from 1850-1859, or because the "preceding years" part means they lumped earlier investment into the former range so it's not actually an average? Regardless, we're nowhere near 20%.
I'm wondering if the claim was actually something like "total investment over x years was 20% of GDP for one year". For example, a paper about the UK says:
> At that time, £170 million was close to 20% of GDP, and most of it was spent in about four years.
By the way, it's always nice when somebody actually tries to double-check somebody else's research, especially when you hear numbers that just sound crazy. Another factoid: for all practical purposes, GDP or GNP wasn't rigorously measured by the government until about 1944. I believe a large part of our viewpoint on what happened in the 1800s is based primarily on census data. But obviously, if you're trying to measure a seven-year event using a census that happens every ten years, there are going to be a lot of gaps in the whisker chart.
Is 20% on railroad actually crazy? We spend 20% on make-work for the healthcare industry.
In a majority agrarian economy where a lot of output doesn't go toward GDP (e.g. milking your own damn cow to feed milk to your own damn family won't show up) I would expect "new hotness" booms to look bigger than they actually are.
So we're much closer to the per-year spend the US saw during the railroad construction era.
At this rate, I hope we get some useful, public, and reasonably priced infrastructure out of this spending in about 5-8 years, just like with the railroads.
> Do note that peak spending on rail roads eventually amounted to ~20 percent of the US GDP in the 19th century.
When you go so far back in time you run into the problem where GDP only counts the market economy. When you count people farming for their own consumption, making their own clothes, etc, spending on railroads was a much smaller fraction of the US economy than you'd estimate from that statistic (maybe 5-10%?)
Yes, that was a problem back then, and is also a problem today, but in different ways.
First, GDP still doesn't count you making your own meals. Second, when, e.g., free Wikipedia replaces paid-for encyclopedias, this makes society better off but technically decreases GDP.
However, having said all that, it's remarkable how well GDP correlates with all the good things we care about, despite its technical limitations.
This has always been an issue with GDP, but it's a much larger issue the farther back you go.
While GDP correlates reasonably well, imagine, very roughly, what it would be like if GDP growth averaged 3% annually while the overall economy grew at 2%. The correlation would still look good, but if we speculate that 80% of the economy is counted in GDP today, then only about 10% would have been counted 200 years ago.
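A minimal sketch of that arithmetic, using the speculative 3%/2% growth rates and 80% share from above:

```python
# If measured GDP grows 3%/yr while the whole economy grows 2%/yr, the
# measured share shrinks as you go back in time. All inputs are the
# speculative figures from the comment above.
gdp_growth, economy_growth = 1.03, 1.02
share_today = 0.80
years_back = 200

share_then = share_today / ((gdp_growth / economy_growth) ** years_back)
print(f"share counted {years_back} years ago: {share_then:.0%}")  # ~11%
```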
It would be great if there was a "GDP + non-transactional economy" metric. Does one exist, or is there a relatively straightforward way to construct one?
But an estimate could be had if you use the imputed price of similar goods/services that _are_ transactional. The problem then reduces to counting these events; perhaps a survey could be used to estimate their frequency.
Why are you so pessimistic? Just because something is hard and you get big error bars doesn't mean we can't do it at all.
If you wanted to, you could look at, e.g., black market prices for kidneys to get an estimate of how much your kidney is worth. Or, less macabre, you could look at how much you'd have to pay a gardener to mow your lawn to see what your son's labour is worth.
* For each activity you track them doing, estimate how much it would cost to get someone else to do it (a sketch of this replacement-cost approach follows below)
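A minimal sketch of that replacement-cost idea; the activities, hours, and wage rates below are all made up for illustration:

```python
# Impute non-market output by pricing each tracked activity at what it
# would cost to hire someone to do it. All figures are hypothetical.
hours_per_week = {"childcare": 20, "cooking": 7, "lawn care": 1}
market_rate = {"childcare": 18.0, "cooking": 20.0, "lawn care": 25.0}  # $/hour

imputed_value = sum(
    hours * market_rate[activity] for activity, hours in hours_per_week.items()
)
print(f"imputed non-market output: ${imputed_value:.0f}/week")  # $525/week
```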
But it pretty quickly gets difficult around questions of entertainment. If I go dancing for fun, should you count how expensive it would be to hire a professional to dance in my place? If I do woodworking or knit for fun, but then give away the things I make to my friends as presents, should we count that at market value?
Am I the only person with vehicles to wrench on, a house to work on, chickens in the yard... as well as open source projects? If I'm not getting paid, I still have plenty to do which feeds me today or prepares me for tomorrow.
There definitely are still a lot of things outside the market economy. People caring for their own kids is an enormous one. I'm not trying to make a claim about how much bigger than today's GDP the economy really is, just that this has changed a lot over time in a more market-oriented direction. Which means that if you're trying to put historical numbers in perspective, you need to adjust.
> Is it a "good" economy if real GDP is up 4%, the S&P 500 is up 40%, and unemployment is up 10%?
In today's US? Debatable, but on the whole probably not.
In a hypothetical country with sane health care and social safety net policies? Yes, that would be hugely beneficial. The tax base would bear the vast majority of the burden of those displaced from their jobs, making it a much more straightforward collective optimization problem.
The US spends more per capita on their social safety net than almost all other countries, including France and the UK.
The US spends around 6.8k USD/capita/year on public health care. The UK spends around 4.2k USD/capita/year and France spends around 3.7k.
For general public social spending the numbers are 17.7k for the US, 10.2k for the UK and 13k for France.
(The data is for 2022.)
Though I realise you asked for sane policies. I can't comment on that.
I'm not quite sure why the grandfather commenter talks about unemployment: the US had and has fairly low unemployment in the last few decades. And places like France with their vaunted social safety net have much higher unemployment.
> The US spends more per capita on their social safety net than almost all other countries, including France and the UK.
To a vast and corrupt array of rentiers, middlemen, and out-and-out fraudsters, instead of direct provision of services, resulting in worse outcomes at higher costs!
Turns out if I’m forced to have visits with three different wallet inspectors on the way to seeing a doctor, I’ve somehow spent more money and end up less healthy than my neighbors who did not. Curious…
It's easier to see your own society's faults. The NHS also has waste, most obviously the deadweight loss caused by queuing. I know someone who went back to her own country to get treated. Not remarkable, except that country was Ukraine.
I lived in the UK for a few years on and off. I agree that rationing by queuing is less efficient than rationing by money. Singapore does a much better job: they always have a co-payment (even if that's often just for symbolic/ideological reasons, and less so for rationing).
Yes, because the UK’s two dominant (and right wing) parties have been actively sabotaging it for years, chasing after a despicable dream of homegrown middlemen and fraudsters, envious as they are of the unchecked criminality of their friends from across the pond. Quelle surprise, things have gotten worse.
They need the public to (nominally) assent to it first, otherwise it'd be suicide. They're using the republican playbook: overburden the sector with tasks and regulations while underfunding it, and allow for private competition that is not subject to the same regulatory burden. Then in a decade or so, you can claim that the "free market" works better and the public won't kick up too much of a fuss.
It’s not just how much you spend on healthcare, but what that spending actually delivers. How much does an emergency room visit cost in the U.S. compared to the UK or France? How do prescription drug prices in the U.S. compare to those in the EU? When you look at what Americans pay relative to outcomes, the U.S. has one of the most inefficient healthcare systems among OECD countries.
This. My intent was to refer to outcomes. My hypothetical country was one where being unemployed might lose you various luxuries but would still see you with guaranteed food on the table and a roof over your head. Under such conditions there's no need to consider a rise in the unemployment metric to be a major downside except for the inevitable ballooning cost to the tax base.
> In a hypothetical country with sane health care and social safety net policies? Yes that would be hugely beneficial.
I think you're forgetting the Soviet Union, which looked great on paper until it turned out that it wasn't actually great...
Real GDP can go up, and it doesn't HAVE to mean you are producing more of anything valuable, and can - in fact - mean that you're not producing enough of what you need, and a bunch of what you don't need.
A very simple way to view this: currently x% of GDP is waste. If real GDP goes up 4% but the share of waste goes from 1% to 8%, you are clearly doing worse.
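Worked numbers for that example:

```python
# Real GDP up 4%, but the waste share rising from 1% to 8% means less
# useful output than before, despite the headline growth.
gdp_before, gdp_after = 100.0, 104.0
useful_before = gdp_before * (1 - 0.01)  # 99.00
useful_after = gdp_after * (1 - 0.08)    # 95.68
print(useful_before, useful_after)       # GDP rose, useful output fell
```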
This is a reduction of what happened in the Soviet Union.
Agree completely. The idea that an increasing GDP or stock market is always good has taken a beating recently. Mostly because it seems that the beneficiaries of that number increase are the same few who already have more than enough, and everyone else continues to decline.
It's more than that even. AI may have plenty of utility. But does the massive capex on GPUs that will all be obsolete in a couple years?
You can still run a train on those old tracks. And it'll be competitive. Sure you could build all new tracks, but that's a lot more expensive and difficult. So they'll need to be a whole lot better to beat the established network.
But GPUs? And with how much tech has changed in the last decade or two and might in the next?
We saw cryptocurrency mining go from CPU to GPU to FPGA to ASICs in just a few years.
We can't yet tell where this fad is going. But there's fair reason to believe that, even if AI has tons of utility, the current economics of it might be problematic.
I'm continually amazed to find takes like this. Can you explain how you don't find clear utility, at the personal level, from LLMs?
I am being 100% genuine here, I struggle to understand how the most useful things I've ever encountered are thought of this way and would like to better understand your perspective.
Can't speak for anyone else, but for me, AI/LLMs have been firmly in the "nice but forgettable" camp. Like, sometimes it's marginally more convenient to use an LLM than to do a proper web search or to figure out how to write some code—but that's a small time saving at best, it's less of a net impact than Stack Overflow was.
I'm already a pretty fast writer and programmer without LLMs. If I hadn't already learned how to write and program quickly, perhaps I would get more use out of LLMs. But the LLMs would be saving me the effort of learning which, ultimately, is an O(1) cost for O(n) benefit. Not super compelling. And what would I even do with a larger volume of text output? I already write more than most folks are willing to read...
So, sure, it's not strictly zero utility, but it's far less utility than a long series of other things.
On the other hand, trains are fucking amazing. I don't drive, and having real passenger rail is a big chunk of why I want to move to Europe one day. Being able to get places without needing to learn and then operate a big, dangerous machine—one that is statistically much more dangerous for folks with ADHD like me—makes a massive difference in my day-to-day life. Having a language model... doesn't.
And that's living in the Bay Area, where the trains aren't great. BART, Caltrain, and Amtrak disappearing would have an order-of-magnitude larger effect on my life than if LLMs stopped working.
And I'm totally ignoring the indirect but substantial value I get out of freight rail. Sure, ships and trucks could probably get us there, but the net increase in costs and pollution should not be underestimated.
No matter how good or fast you are, you will never beat the LLM. What you're saying is akin to "your math is faster than a calculator," and I'm willing to bet it's not. LLMs are not perfect and will require intervention and fixing, but if they can get you 90% of the way there, that's pretty good. In the coming years you'll find your peers performing much faster than you (assuming you program for a living) and you will have no choice. But you do you.
Fun story: when I interned at Jane Street, they gave out worksheets full of put-call parity calculations to do in your head because, when you're trading, being able to do that sort of calculation at a glance is far faster and more fluid than using a calculator or computer.
So for some professionals, mental math really is faster.
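For anyone curious, put-call parity for European options is C - P = S - K*exp(-rT); a quick sketch of the kind of consistency check those worksheets drill, with made-up numbers:

```python
import math

# Put-call parity: C - P should equal S - K * exp(-r * T) for European
# options on a non-dividend stock. All prices here are illustrative.
S, K, r, T = 100.0, 95.0, 0.05, 0.5  # spot, strike, rate, years to expiry
call, put = 9.73, 2.38               # hypothetical quoted option prices

lhs = call - put
rhs = S - K * math.exp(-r * T)
print(f"C - P = {lhs:.2f}, S - K*e^(-rT) = {rhs:.2f}")  # ~7.35 vs ~7.35
```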
Beat an LLM at what? Lines of code per minute? Certainly not. But that's not my job. If anything I try to minimise the amount of code I output. On a good day my line count will be negative.
Mathematicians are not calculators. Programmers are not typists.
So now programmers add value when they write more code faster? Curious how this was anathema but is now clear evidence of LLM-driven coding superiority.
The math that isn't mathing is even more basic tho. This is a Concorde situation all over again. Yes, supersonic passenger jets would be amazing. And they did reach production. But the economics were not there.
Yeah, using GPU farms delivers some conveniences that are real. But after 1.6 trillion dollars it's not clear at all that they are a net gain.
> Can you explain how you don't find clear utility, at the personal level, from LLMs?
Sure. They don't meaningfully improve anything in my life personally.
They don't improve my search experience, they don't improve my work experience, they don't improve the quality of my online interactions, and I don't think they improve the quality of the society I live in either
So you never read the summary at the top of Google search results? It provides the answer to most of my searches. "They don't improve my work experience" - fair, but perhaps you haven't really given it a try. "They don't improve the quality of my online interactions" - but how do you know? LLMs are being used to create websites, generate logos, images, memes, art, videos, stories - you've already been entertained by them without even knowing it. "I don't think they improve the quality of the society I live in either" - that's a feeling, not a fact.
I never do, because I still don't trust its answers without also seeing a secondary source to confirm them. The first result or two is already correct 99% of the time, and often has source citations that tell me how the conclusion or information was reached, for when I'm dealing with a potential edge case.
I generally recognise the utility of AI, but on this particular point it has been a net negative, if I add up the time I've wasted believing a summarised answer, getting some way further into a task, only to find that the answer was wrong and having to backtrack and redo all that work.
> So you never read the summary at the top of Google search results? It provides the answer to most of my searches
Unfortunately yes I do, because it is placed in a way to immediately hijack my attention
Most of the time it is just regurgitating the text of the first link anyways, so I don't think it saves a substantial amount of time or effort. I would genuinely turn it off if they let me
I've read some complete nonsense in those summaries. I use LLMs for other things but I don't find this application useful because I would need to trust it, and I don't.
This whole topic reminds me of the argument for vi and fast typing. I was always baffled, because in the 25 years I've been able to code, typing was never such a huge block of my time that it would matter.
I have the same feeling with AI.
It clearly cannot produce the quality of code, architecture, and features which I require from myself. And I also want to understand what's written, not say "it works, it's fine <insert dog-with-coffee image here>", and not copy-paste a terrible StackOverflow answer that doesn't need half of its code in reality, and that clearly nobody who answered sat down and tried to understand.
Of course, not everybody wants these things, and I've seen several people who were fine with not understanding what they were doing, even before AI. Now they are happy AI users. But it is clear to me that it's not beneficial salary-wise, promotion-wise, or political-power-wise.
So what’s left is that it types faster… but that was never an issue.
It can be better, however. Just about a month ago I hit the first case where one of them answered a problem better than anything else I knew or could find via Kagi/Google. But generally speaking it's not there at all. Yet.
I have tried using them frequently. I've tried many things for years now, and while I am impressed I'm not impressed enough to replace any substantial part of my workflow with them
At this point I am somewhat of a conscientious objector though
Mostly from a stance of "these are not actually as good as people say and we will regret automating away jobs held by competent people in favor of these low quality automations"
There was a study recently that showed how not only did devs overestimate the time saved using AI, but that they were net negative compared to the control group.
Anyway, that about sums up my experience with AI. It may save some time here and there, but on net, you’re better off without it.
>This implies that each hour spent using genAI increases the worker’s productivity for that hour by 33%. This is similar in magnitude to the average productivity gain of 27% from several randomized experiments of genAI usage (Cui et al., 2024; Dell’Acqua et al., 2023; Noy and Zhang, 2023; Peng et al., 2023)
> Our estimated aggregate productivity gain from genAI (1.1%) exceeds the 0.7% estimate by Acemoglu (2024) based on a similar framework.
To be clear, they are surmising that GenAI is already having a productivity gain.
The article you gave is derived from a poll, not a study.
As for the quote, I can’t find it in the article. Can you point me to it? I did click on one of the studies and it indicated productivity gains specifically on writing tasks. Which reminded me of this recent BBC article about a copywriter making bank fixing expensive mistakes caused by AI: https://www.bbc.com/news/articles/cyvm1dyp9v2o
It's actually based on the results of three surveys conducted by two different parties. While surveys are subject to all kinds of biases and the gains are self-reported, their findings of 25-33% productivity gains do match the gains shown by at least three other randomized studies, one of which was specifically about programming. Those studies are worth looking at as well.
Yes, self-reporting has biases and estimating tasks is still a fool's errand, which is why I noted that the estimates from these surveys matched the findings from other RCT studies.
However, what doesn't get discussed enough about the METR study is that there was a spike in overall idle time as they waited for the AI to finish. I haven't run the numbers so I don't know how much of the increased completion time it accounts for, but if your cognitive load drops almost to 0, it will of course feel like your work is sped up, even though calendar time has increased. I wonder if that is the more important finding of that paper.
I'm not talking about time saving. AI seems to speed up my searching a bit since I can get results quicker without having to find the right query then find a site that actually answers my question, but that's minor, as nice as it is.
I use AI in my personal life to learn about things I never would have otherwise, because it makes the cost of finding basic knowledge essentially zero: diet-improvement ideas based on several quick questions about gut function, recently learning how to gauge tsunami severity, and tons of other things. Once you have several fundamental terms and phrases for a new topic, it's easy to validate the information with some quick googling too.
How much have you actually tried using LLMs, and did you just use normal chat or some big, grand, complex tool? I mostly just use chat and prefer to enter my code artisanally.
How much of that is junk knowledge, though? I mean, sure, I love looking up obscure information, particularly about cosmology and astronomy, but in reality, it's not making me better or smarter, it's just kind of "science junk food." It feels good, though. I feel smarter. I don't think I am, though, because the things I really need to work on about myself are getting pushed aside.
For me it's pretty much all knowledge that I'm immediately operationalizing. I occasionally use it to look up actors and stuff too, but most of the time it's information that provides direct value to me
1. To work through a question I'm not sure how to ask yet
2. To give me a starting point/framework when I have zero experience with an issue
3. To automate incredibly stupid monkey-level tasks that I have to do but are not particularly valuable
It's a remarkable accomplishment that has the potential to change a lot of things very quickly but, right now, it's (by which I mean publicly available models) only revolutionary for people who (a) have a vested interest in its success, (b) are easily swayed by salespeople, (c) have quite simple needs (which, incidentally, can relate to incredible work!), or (d) never really bothered to check their work anyway.
Why not just look up the information directly instead of asking a machine that you can never truly validate?
If I need information, I can just keyword search wikipedia, then follow the chain there and then validate the sources along with outside information. An LLM would actually cost me time because I would still need to do all of the above anyways, making it a meaningless step.
If you don't do the above then it's 'cheaper' but you're implicitly trusting the lying machine to not lie to you.
> Once you have several fundamental terms and phrases for new topics it's easy to then validate the information with some quick googling too.
You're practically saying that looking at an index in the back of a book is a meaningless step.
It is significantly faster, so much so that I am able to ask it things that would have taken an indeterminate amount of time to research before, for just simple information, not deep understanding.
Edit:
Also I can truly validate literally any piece of information it gives me. Like I said previously, it makes it very easy to validate via Wikipedia or other places with the right terms, which I may not have known ahead of time.
Again, why would you just not use Wikipedia as your index? I'm saying why would you use the index that lies and hallucinates to you instead of another perfectly good index elsewhere.
You're using the machine that ingests and regurgitates stuff like Wikipedia to you. Why not skip the middleman entirely?
Because the middleman is faster and practically never lies/hallucinates for simple queries, the middleman can handle vague queries that Google and Wikipedia cannot.
The same reasons you use Wikipedia instead of reading all the citations on Wikipedia.
> Because the middleman is faster and practically never lies/hallucinates for simple queries
How do you KNOW it doesn't lie/hallucinate? In order to know that, you have to verify what it says. And in order to verify what it says, you need to check other outside sources, like Wikipedia. So what I'm saying is: Why bother wasting time with the middle man? 'Vague queries' can be distilled into simple keyword searches: If I want to know what a 'Tsunami' is I can simply just plug that keyword into a Wikipedia search and skim through the page or ctrl-f for the information I want instantly.
If you assume that it doesn't lie/hallucinate because it was right on previous requests then you fall into the exact trap that blows your foot off eventually, because sometimes it can and will hallucinate over even benign things.
I feel like you're coming from a very strange place of both using advanced technology that saves you time and expands your personal knowledge base and at the same time saying that the more advanced technology that saves you even more time and expands your knowledge base further is useless and a time sink.
For most questions it is so much faster to validate a correct answer than to figure out the answer to begin with. Vague queries CANNOT be distilled to simple keyword searches when you don't know where to start without significant time investment. Ctrl-f relies on you and the article having the exact same preferred vocabulary for the exact same concepts.
I do not assume that LLMs don't lie or hallucinate, I start with the assumption that they will be wrong. Which for the record is the same assumption I take with both websites and human beings.
I don't know how you use search but I often find incredible information that I didn't explicitly search for.
How do you quantify such things? How can you say with a straight face that this magic box gives you more relevant information (which may be wrong!) and that will revolutionize the workforce?
I am searching for the incredible information and I can't find it without the LLM because I don't know the proper terminology ahead of time and search isn't that good unless you know exactly what you want.
Is the "here and there" tasks that were previously so little value that they are always stuck in the backlog? i.e. the parts where it helps have very little value in the first place.
As a dev, I find that the personal utility of LLMs is still very limited.
Analyze it this way: Are LLMs enabling something that was impossible before? My answer would be No.
Whatever I'm asking of the LLM, I'd have figured it out from googling and RTFMing anyway, and probably have done a better job at it. And guess what, after letting the LLM do it, I probably still need to google and RTFM anyway.
You might say "it's enabling the impossible because you can now do things in less time", to which I would say, I don't really think you can do it in less time. It's more like cruise control where it takes the same time to get to your destination but you just need to expend less mental effort.
Other elephants in the room:
- where is the missing explosion of (non-AI) software startups that should've been enabled by LLM dev efficiency improvements?
- why is adoption among big tech SWEs near zero despite intense push from management? You'd think, of all people, you wouldn't have to ask them twice.
> Are LLMs enabling something that was impossible before?
I would say yes when the LLM is combined with function calling that allows it to do web searches and read web pages. It was previously impossible for me to research a subject within 5 minutes when it required doing several searches and reviewing dozens of search results (not just reading the list entries, but reading the actual HTML pages). I simply cannot read that fast. An LLM with function calling can do this.
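The loop is roughly this shape; `chat` is a stand-in for whatever tool-capable completion API you use (stubbed here so the control flow runs, since the real call varies by provider):

```python
# Hand-wavy sketch of an LLM research loop with function calling. The
# `chat` stub stands in for a real tool-capable completion API.
from dataclasses import dataclass, field

@dataclass
class Reply:
    content: str = ""
    tool_calls: list = field(default_factory=list)  # search/fetch requests

def chat(messages, tools):
    # Stub: a real implementation would call an LLM API, which may return
    # tool calls (web searches, page fetches) instead of a final answer.
    return Reply(content="summary of findings, with citations")

def research(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    while True:
        reply = chat(messages, tools=["web_search", "fetch_page"])
        if not reply.tool_calls:          # model is done researching
            return reply.content
        for call in reply.tool_calls:     # run the requested lookups
            messages.append({"role": "tool", "content": call.run()})

print(research("Which TCP states make send() with MSG_NOSIGNAL error out?"))
```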
The other day, I asked it to check the Linux kernel sources to tell me which TCP connection states for a closing connection would not return an error to send() with MSG_NOSIGNAL. It not only gave me the answer, but made citations that I could use to verify the answer. This happened in less than 2 minutes. Very few developers could find the answer that fast, unless they happen to already know it. I doubt very many know it offhand.
Beyond that, I am better informed than I have ever been since I have been offloading previously manual research to LLMs to do for me, allowing me to ask questions that I previously would not ask due to the amount of time it took to do the background research. What previously would be a rabbit hole that took hours can be done in minutes with minimal mental effort on my part. Note that I am careful to ask for citations so I can verify what the LLM says. Most of the time, the citations vouch for what the LLM said, but there are some instances where the LLM will provide citations that do not.
I think he was referring to the ability to go from A to B within a certain amount of time. There is a threshold at which it is possible for a car, yet impossible for a horse and buggy.
That said, I recently saw a colleague use a LLM to make a non-trivial UI for electron in HTML/CSS/JS, despite knowing nothing about any of those technologies, in less time than it would have taken me to do it. We had been in the process of devising a set of requirements, he fed his version of them into the LLM, did some back and forth with the LLM, showed me the result, got feedback, fed my feedback back into the LLM and got a good solution. I had suggested that he make a mockup (a drawing in kolourpaint for example) for further discussion, but he had surprised me by using a LLM to make a functional prototype in place of the mockup. It was a huge time saver.
The issue is that the 'B' is not very consequential.
Consider something like Shopify - someone with zero knowledge of programming can wow you with an incredible ecommerce site built through Shopify. It's probably like a 1000x efficiency improvement versus building one from scratch (or even using the popular lowcode tools of the era like Magento and Drupal). But it won't help you build Amazon.com, or even Nike.com. It won't even get you part of the way there.
And LLMs, while more general/expressive than Shopify, are inferior to Shopify at doing what Shopify does i.e. you're still better off using Shopify instead of trying to vibe-code an e-commerce website. I would say the same line of thinking extends to general software engineering.
What was described was offloading portions of software development to a LLM to reach B faster. This works very well and is an improvement over the traditional method of implementing everything yourself.
Shopify is tangential to this. I will add that having had experience with similar platforms in the past (for building websites, not e-commerce), I can say that you must be either naive or a masochist to use them. They tend to be mediocre compared to what you can get from self hosted solutions and the vendor lock-in always will be used to bite those foolish enough to use them in the end.
Much work is not "text in, text out", and much work is not digital at all. But even in the digital world, inserting an LLM into a workflow is just not always that useful.
In fact much automation, code or otherwise, benefits from or even requires explicit, concise rules.
It is far quicker for me to already know, and write, an SQL statement, than it is to explain what I need to an LLM.
It is also quite difficult to get LLMs into a lot of processes, and I think big enterprises are going to really struggle with this. I would absolutely love AI to manage some Windows servers that are in my care, but they are three VMs deep in a remote desktop stack that gets me into a DMZ/intranet. There's no interface, and how would an LLM help anyway? What I need is concise, discrete automations, not a chat bot interface to instruct every day.
To be clear I do try to use AI most days, I have Claude and I am a software developer so ideally it could be very helpful, but I have far less use for it than say people in the strategy or marketing departments for example. I do a lot of things, but not really all that much writing.
Gemini wasted my time today assuring me that if I want a git bundle that has only the top N commits, yet is cleanly clone-able, I can just make a --depth N clone of the original repo and do a git bundle create ... --all.
Nope; cloning a bundle created from a depth-limited clone results in error messages about missing commit objects.
So I tell the parrot that, and it comes back with: of course, it is well-known that it doesn't work, blah blah. (Then why wasn't it well known one prompt ago, when it was suggested as the definitive answer?)
Obviously, I wasn't in the "the right mindset" today.
This mindset is one of two things:
- the mindset of a complete n00b asking a n00b question that it will nail every time, predicting it out of its training data richly replete with n00b material.
- the mindset of a patient data miner, willing to expend all the keystrokes needed to build up enough context to, in effect, create a query that zeroes in on the right nugget of information that made an appearance in the training data.
It was interesting to go down this #2 rabbit hole when this stuff was new, which it isn't any more. Basically, you do most of the work, while it looks as if it solved the problem.
I had the right mindset for AI, but most of it has worn off. If I don't get something useful in one query with at most one follow up, I quit.
The only shills who continue to hype AI are either completely dishonest assholes, or genuine bros bearing weapons-grade confirmation bias.
Let's try something else:
Q: "What modes of C major are their own reflection?"
A: "The Lydian and Phrygian modes are reflections of each other, as are the Ionian and Aeolian modes, and the Dorian and Mixolydian modes. The Locrian mode is its own reflection."
Very nice sounding and grammatical, but gapingly wrong on every point. The only mode that is its own reflection is Dorian. Furthermore, Lydian and Phrygian are not mutual reflections. Phrygian reflected around its root is Ionian. The reflection of Lydian is Locrian; and of Aeolian, Mixolydian.
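The claim is easy to verify mechanically: reflecting a mode reverses its step pattern (W=2, H=1 semitones):

```python
# Check which modes of the major scale are their own reflection: a mode's
# mirror has the reversed step pattern (W=2 semitones, H=1).
modes = {
    "Ionian":     [2, 2, 1, 2, 2, 2, 1],
    "Dorian":     [2, 1, 2, 2, 2, 1, 2],
    "Phrygian":   [1, 2, 2, 2, 1, 2, 2],
    "Lydian":     [2, 2, 2, 1, 2, 2, 1],
    "Mixolydian": [2, 2, 1, 2, 2, 1, 2],
    "Aeolian":    [2, 1, 2, 2, 1, 2, 2],
    "Locrian":    [1, 2, 2, 1, 2, 2, 2],
}
for name, steps in modes.items():
    mirror = next(m for m, s in modes.items() if s == steps[::-1])
    print(f"{name} -> {mirror}")
# Only Dorian maps to itself; Ionian<->Phrygian, Lydian<->Locrian,
# Mixolydian<->Aeolian.
```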
I once loaded a NotebookLM with materials about George Russell's Lydian Chromatic Concept and tonal gravity. It made an incomprehensible mess of explaining the stuff, worse than the original sources.
AI performs well on whatever is the focus of its purveyors. When they want to shake down entry-level coding, they beef it up on entry-level coding and let it loose, leaving it unable to tell Mixolydian from mixing console.
Thank you! This is what I've been trying to tell people about LLMs. They don't hold up. They're like those Western movie set towns that look normal from the front, but when you walk around behind them, you see it is all just scaffolding with false fronts.
What model did you ask? Here's the exact reply I received from Claude Sonnet, which appears to be exactly the answer you were expecting:
"Among the seven modes of C major, only Dorian is its own reflection.
Understanding Mode Reflections
When we reflect a mode, we reverse its interval pattern. The modes of C major and their interval patterns are:
Ionian: W-W-H-W-W-W-H
Dorian: W-H-W-W-W-H-W
Phrygian: H-W-W-W-H-W-W
Lydian: W-W-W-H-W-W-H
Mixolydian: W-W-H-W-W-H-W
Aeolian: W-H-W-W-H-W-W
Locrian: H-W-W-H-W-W-W
The Palindromic Nature of Dorian
Dorian mode is palindromic, meaning it produces the same scale whether you read its interval pattern forwards or backwards. When you reverse the Dorian interval pattern W-H-W-W-W-H-W, you get exactly the same sequence: W-H-W-W-W-H-W.
Mirror Pairs Among the Other Modes
The remaining modes form mirror pairs with each other:
Ionian-Phrygian: Mirror pair
Lydian-Locrian: Mirror pair
Mixolydian-Aeolian: Mirror pair
For example, when you reflect the C major scale (Ionian), which has the interval pattern W-W-H-W-W-W-H, you get H-W-W-W-H-W-W, which corresponds to the Phrygian mode.
This symmetrical relationship exists because the whole diatonic scale system can be symmetrically inverted, creating these natural mirror relationships between the modes"
I get a lot of garbage out of 2.5 Pro and Claude Sonnet and ChatGPT. There's always this "this is how you solve it", I take a close look and it's clearly broken, I point it out and it's all "you're right, this is a common issue". Okay, so why do we have to do this song and dance a million times to arrive at the actually correct answer?
Gemini 2.5 Flash is meant for things that have a higher tolerance for mistakes as long as the costs are low and responses are quick. Claude Sonnet is similar, although the trade off it makes between mistake tolerance and cost/speed is more in favor of fewer mistakes.
Lately, I have been using Grok 4 and I have had very good results from it.
Today I read a stupid Hackernews comment about how AI is useless. Therefore Hackernews is stupid. Oh, I need a filtered list of which comments to read?
Do you build computers by ordering random parts off Alibaba and complaining when they are deficient? You are complaining that you need to RTFM for a piece of high tech?
> Oh, I need a filtered list of which comments to read?
If they are about something you're not sure about, and you're making decisions based on them ... maybe it would actually help, so yes?
> Do you build computers by ordering random parts off Alibaba and complaining when they are deficient?
We build computers using parts which are carefully documented by data sheets, which tell you exactly for what ranges of parameters their operation is defined and in what ways. (temperatures, voltages, currents, frequencies, loads, timings, typical circuits, circuit board layouts, programming details ...)
Like other commenters here, when I try to use them to help with my work, they don't. It's that simple. I have tried AI coding assistants, and they just guess incorrectly. If I know the answer, they generally give me the same answer. If I don't know the answer, they give me gibberish that ends up wasting time. I would love to look over the shoulder of an AI booster who had a really good interaction, because it's hard for me to believe they didn't just already know what they were looking for.
I think there's a strong argument to be made that the negatives of having to wade through AI slop outweigh the benefits that AI may provide. I also suspect that AI could contribute to the enshittification of society; e.g., AI therapy being substituted for real therapy, AI products displacing industrial design, etc.
> e.g. AI therapy being substituted for real therapy, AI products displacing industrial design, etc.
That depends on the quality of the end product and the willingness to invest the resources necessary to achieve a given quality of result. If average quality goes up in practice then I'd chalk that up as a net win. Low quality replacing high quality is categorically different than low quality filling a previously empty void.
Therapy in particular is interesting not just because of average quality in practice (therapists are expensive experts) but also because of user behavior. There will be users who exhibit both increased and decreased willingness to share with an LLM versus a human.
There's also a very strong privacy angle. Querying a local LLM affords me an expectation of privacy that I don't have when it comes to Google or even Wikipedia. (In the latter case I could maintain a local mirror but that's similar to maintaining a local LLM from a technical perspective making it a moot point.)
Last night, I asked an LLM to produce an /etc/fstab entry for connecting to a network share with specific options. I was too lazy to look up the options in the manual. It gave me the options separated by semicolons, which is invalid because the config file requires commas as separators.
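For reference, fstab fields are whitespace-separated and mount options comma-separated; a generic CIFS line looks something like this (server, share, mount point, and options are all illustrative):

```
# /etc/fstab -- options must be comma-separated, not semicolon-separated
//server/share  /mnt/share  cifs  credentials=/etc/cifs-creds,uid=1000,gid=1000  0  0
```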
I honestly don't see technology that stumbles over trivial problems like these as something that will replace my job, or any job that is not already automatable within ten thousand lines of Python, anytime soon. The gap between hype and actual capabilities is insane. The more I've tried to apply LLMs to real problems, the more disillusioned I've become. There is nothing, absolutely nothing, no matter how small the task, that I can trust LLMs to do correctly.
Which one? There are huge variations between LLMs. Was this a frontier thinking model with tool use? Did you ask it to review online references before presenting an answer?
IMO it is a combination of already being a really great programmer and being either not all that intellectually curious, or so well-read and so intellectually curious that LLMs are a step down from being a voracious reader of books and papers.
For me, LLMs are also the most useful thing ever, but I was a C student in all my classes. My programming is a joke. I have always been intellectually curious, but I am quite lazy. I have always had tons of ideas to explore, though, and LLMs let me explore ideas that I otherwise wouldn't be able to, or would be too lazy to bother with.
Are you saying that LLMs are most useful if you're not intellectually curious, and therefore most interested in immediate answers, but also that they're very useful if you're a really great programmer for an unstated reason?
The concerning thing is that AI contrarianism is being left wing coded. Imagine you’re fighting a war and one side decides “guns are overhyped, let’s stick with swords”. While there is a lot of hype about AI, even the pessimistic take has to admit it’s a game changing tech. If it isn’t doing anything useful for you, that’s because you need to get off your butt and start building tools on top of it.
Especially people on the left need to realize how important their vision is to the future of AI. Right now you can see the current US admin having zero concern for AI safety or carbon use. If you keep your head in the sand saying "bubble!" and it pops, that's no problem. But if this is here to stay, then you need to get involved.
That's a good point and even worse we'll eventually end up with yet another issue where both left and right offer terrible options and there's no nuanced middle ground :/
The worst thing in the information space is bullshit. It's worse than lies because it's slicker. LLMs generate extremely dangerous bullshit that can be difficult to correct. Advice given may apply to old versions, outdated legal precedent, or radical opinions. Fully understanding nuance or mistakes can take a lot of effort and still be off.
Railroads move people and cargo quickly and cheaply from point A to point B. Mechanized textile production made clothing, a huge sink of time and resources before the industrial age, affordable to everybody.
What does AI get the consumer? Worse spam, more realistic scams, hallucinated search results, easy cheating on homework? AI-assisted coding doesn't benefit them, and the jury is still out on that too (see recent study showing it's a net negative for efficiency).
Until you dive deeper and discover that most of what the AI agents provided you was completely wrong...
There's a reason that AI is already starting to fade out of the limelight with customers (companies and consumers both). After several years, the best they can offer is slightly better chatbots than we had a decade ago with a fraction of the hardware.
"Until you dive deeper and discover that most of what the AI agents provided you was completely wrong..."
Oddly enough, I don't think that actually matters too much to the dedicated autodidact.
Learning well is about consulting multiple sources and using them to build up your own robust mental model of the truth of how something works.
If you can really find the single perfect source of 100% correct information then great, I guess... but that's never been my experience. Every source of information has its flaws. You need to build your own mental model with a skeptical eye from as many sources as possible.
As such, even if AI makes mistakes it can still accelerate your learning, provided you know how to learn and know how to use tips from AI as part of your overall process.
Having an unreliable teacher in the mix may even be beneficial, because it enforces the need for applying critical thinking to what you are learning.
> > "Until you dive deeper and discover that most of what the AI agents provided you was completely wrong..."
> Oddly enough, I don't think that actually matters too much to the dedicated autodidact.
I think it does matter, but the problem is vastly overstated. One person points out that AIs aren’t 100% reliable. Then the next person exaggerates that a little and says that AIs often get things wrong. Then the next person exaggerates that a little and says that AIs very often get things wrong. And so on.
Before you know it, you’ve got a group of anti-AI people utterly convinced that AI is totally unreliable and you can’t trust it at all. Not because they have a clear view of the problem, but because they are caught in this purity spiral where any criticism gets amplified every time it’s repeated.
Go and talk to a chatbot about beginner-level, mainstream stuff. They are very good at explaining things reliably. Can you catch them out with trick questions? Sure. Can you get incorrect information when you hit the edges of their knowledge? Sure. But for explaining the basics of a huge range of subjects, they are great. “Most of what they told you was completely wrong” is not something a typical beginner learning a typical subject would encounter. It’s a wild caricature of AI that people focused on the negatives have blown out of all proportion.
I almost always validate what I get back from LLMs and it's usually right. Even when it isn't, it still usually gets me closer to my goal (e.g. maybe the UX has changed and the setting I'm looking for in an app has moved, etc.).
IDK where you're getting the idea that it's fading out. So many people are using the "slightly better chatbots" every single day.
Btw, if you think ChatGPT is only slightly better than what we had a decade ago, then I don't believe you have used any chatbots at all, either 10 years ago or recently, because that's a completely insane take.
At a minimum, presumably once it arrives it will provide consumers with custom software solutions, which were clearly a huge sink of time and resources prior to the AI age.
You're looking at the prototype while complaining about an end product that isn't here yet.
I don't have that negative of a take, but I agree to some extent. The internet, mobile, and AI have all been useful, but not in the same way as earlier advancements like electricity, cars, aircraft, and even basic appliances. Outside of things you can do on screens, most people live exactly the way they did in the 70s and 80s. For instance, it still takes 30-45 minutes to clean up after dinner, using the same kind of appliances people used 50 years ago. The same goes for washing clothes, sorting socks, and the other boring things that even fairly rich people still do. Basically, the things people dreamed about in the 50s - more wealth, more leisure time, robots, and flying cars - really were the right dream.
The luddites were often the ones that built the mechanized looms. They had nothing against mechanized looms, they had everything against the business owners using their workers talents and knowledge to build an entire operation only to later undercut their wages and/or replace them with lesser paid unskilled workers and reduce the quality of life of their entire community.
Getting people to associate the luddites as anti-technology zealots rather than pro-labor organization is one of the most successful pieces of propaganda in history.
People will also use "look, society was fine afterwards" as proof the luddites were wrong, but if you consider that the growth of industrial-revolution cities was only sustained by drawing in more people from the countryside than died there of disease, it's not at all clear they were wrong about its impact on their society, even if it worked out alright for us in the aftermath.
>The luddites were often the ones that built the mechanized looms.
Source? Skimming the wikipedia article, it sounds like most were former skilled textile workers who were upset at being replaced by unskilled workers operating the new machines.
> They had nothing against mechanized looms, they had everything against the business owners using their workers talents and knowledge to build an entire operation only to later undercut their wages and/or replace them with lesser paid unskilled workers and reduce the quality of life of their entire community.
Sounds a lot like the anti-AI sentiment today, e.g. "I'm not against AI, I'm just against it being used by evil corporations so they don't have to hire human workers." The "AI slop" argument also resembles the luddites objecting to the new machines on the grounds of "quality" (also from wikipedia), although to be fair that was only a passing mention.
> Getting people to associate the luddites as anti-technology zealots
Interestingly, the fact that the luddites also called for unemployment compensation and retraining for workers displaced by the new machinery probably makes them amongst the most forward-thinking and progressive people of the 1800s.
Luddites weren't anti-technology at all[0]; in fact, they were quite adept at using technology. It was a labor movement that fought for worker rights in the face of new technologies.
The luddites, though, weren't living at a point where every industry sees individual capital formation and demand for labor trend toward zero over time.
Prices are ratios, denominated in the currency, between the factors of production and the producers.
What do you suppose happens when the factors (workers) can't buy anything because they have nothing left to trade? Slavery has quite a lot of historic parallels with the trend toward this. Producers stop producing when they can make no profit.
You get a deflationary (chaotic) spiral toward socio-economic collapse, under the burden of debt and money-printing (as production risk). There are limits to systems, and when such limits are exceeded, great destruction occurs.
Malthus/Catton pose a very real existential threat when such disorder occurs, and it's almost inevitable that it does without action to prevent it. One cannot assume action will happen until it actually does.
With this generation of AI, it's too early to tell whether it's the next railroad, the next textile machine, or the next way to lock your exclusive ownership of an ugly JPG of a multicolored ape into a globally-referenceable, immutable datastore backed by a blockchain.
The mechanical loom produced a tangible good. That kind of automation was supposed to free people from menial work. Now they are trying to replace interesting work with human supervised slop, which is a stolen derivative work in the first place.
The loom wasn't centralized in four companies. Customers of textiles did not need an expensive subscription.
Obviously average people would benefit more if all that investment went into housing or in fact high speed railways. "AI" does not improve their lives one bit.
Computing is fairly general-purpose, so I suspect the data centers at least will be used for something. Reusing so many GPUs might be harder, but not as bad as ASICs. There are a lot of other calculations they could do.
Most scientific HPC workloads are designed to utilize GPU-equipped nodes. If AI completely flops, scientific modeling will see huge benefits from the surplus hardware. It's a win-win (except for the investors, I guess).
I don't think we're too worried about wasting sand, though? What are the major costs of producing a GPU? Which of those are we worried about wasting?
I'm not going to do the homework for a Hacker News comment, but here are a few guesses:
I suspect that a lot of it is TSMC's capex for building new fabs. But since the fabs are already built, they could run them for longer. (Possibly producing different chips.)
Meanwhile, carbon emissions due to electricity use by data centers can't be taken back.
But also, much of an investment bubble popping wouldn't be about wasting resources. It would be investors' anticipated profits turning out to be a mirage - that is, investors feel poorer, but nothing material was lost.
If I had to hazard a guess as to why China and the US are building so many GPU farms under the guise of "AI supremacy", I'd say it's to support state sponsored hacking.
The goal of the major AI labs is to create AGI. The net utility of AGI would be at least on the level of electricity or the steam engine. It's debatable whether or not they'll achieve that, but if you actually look at what the goal is, the investment makes sense.
AGI is not even a well-defined goal, let alone one that can be reasonably expected from the current tranche of investment. By your logic, any investment makes sense - this is not investment at all, it is a gambling addiction.
When bubbles burst, crashes follow, and this is a colossal bubble. I do walk around with that belief every day, because every day that passes is yet another day this overblown AI hype bullshit fails to deliver the goods.
So you think the entire perceived value of AI will be wiped out.
I think we probably are in a bubble, but much like housing bubbles in major metro areas, the value is real and so the bubble is on top of that real value vs being 100% synthetic.
Yes, because big market crashes don't just impact the previously inflated sectors. They hurt all the sectors. So the wipeout could easily exceed the current perceived value of AI.
IMO it's also clearly wrong, because I think even if you believe most of AI is hype you must see the value that a lot of people are getting from it, like the housing market example I gave.
> There is obvious utility to railroads, especially in a world with no cars.
> The net utility of AI is far more debatable.
As long as people are willing to pay for access to AI (either directly or indirectly), who are we to argue?
In comparison: what's the utility of watching a Star Wars movie? I say, if people are willing to part with their hard earned cash for something, we must assume that they get something out of it.
Isn't the US economy far more varied than it was in the 19th century? More dense? And therefore wouldn't it be more difficult for one industry to dominate the US economy today than it was in the 19th century?
People should keep in mind that there was no such thing as GDP before the 1980s.
It has all been back-calculated, and the further back you go the more ridiculous it gets.
The excuses sounded plausible at the time, but the switch killed two birds with one stone: it slowed the rapid increase in government benefits, which had become indexed to GNP so recipients could cope with inflation, and it further obscured how poor the economic performance from the 1980s onward was compared to the pre-1970 numbers.
The people who were numerically smart before that, and saw what things were like first hand, were not fooled so easily.
Even using GDP back in the 1980s when it first came out, you couldn't get a good picture of the 1960s, which were not that much earlier.
I wonder about the actual effectiveness of spending on railroads vs AI. Even if the railroads were partly waste, didn't the investment spread much wider? At least geographically it must have, as workers moved around and needed services. That is, it was mostly spent in the real economy, and thus had an actual chance to trickle down.
Whereas with AI, who actually gets the investment? Nvidia? TSMC? Are the people employed ones who would have been employed anyway? Do they actually spend much more? Any Nvidia profits likely go straight back into the market, propping it up even higher.
And how much has the efficiency gained from using LLMs actually increased productivity?
I put a comment on this below, but the claim is highly misleading: consumer spending is ~$5 trillion, AI investment is ~$100 billion. The graph is looking at something like contribution to GDP growth (not contribution to GDP), but even that is misleading, because if you don't adjust for seasonality, H1 consumer spending is almost always lower than the previous year's H2 consumer spending (Q4 always has a higher level of consumer spending).
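To make the seasonality/levels point concrete, here's a toy calculation (every figure below is assumed for illustration, not real BEA data) showing how a sector that is a rounding error in level terms can appear to drive all the growth when you compare an unadjusted H1 against a seasonally high H2:

    # Toy numbers only (assumed for illustration, not real BEA data).
    gdp_prev      = 27_000  # $bn, prior-period GDP
    consumer_prev = 5_000   # $bn, Q4 consumer spending (seasonally high)
    consumer_now  = 4_980   # $bn, H1 reading, not seasonally adjusted
    ai_prev, ai_now = 80, 100  # $bn, AI investment

    # "Contribution to GDP growth" = change in component / prior GDP
    consumer_pp = (consumer_now - consumer_prev) / gdp_prev * 100
    ai_pp       = (ai_now - ai_prev) / gdp_prev * 100
    print(f"consumer: {consumer_pp:+.3f} pp, AI: {ai_pp:+.3f} pp")
    # -> consumer: -0.074 pp, AI: +0.074 pp
    # AI "drives all the growth" despite being ~2% of consumer spending
    # in level terms - and the consumer dip is mostly just seasonality.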
Consider other infrastructure, such as the US highway system. There may be an expansive bubble, but infrastructure gains such as the increase in base power production need to be factored in as well.
One way to think about it is what if we’d done it the other way around? If we’d had AI first at 20% GDP investment levels, would the subsequent railroad boom have been an order of magnitude smaller at 2% GDP?
For me, that’s enough of a thought experiment — as implausible as it might be to have AI in 1901 — to be skeptical that the difference is simply that the first tech step-change was a pre-war uplift to build the post-war US success story, and the latter builds on it.
I received both the worst and the best pieces of career advice when I was an undergraduate.
The worst advice was that writing software, after the dotcom bust, was dead as a career. This taught me a lot about the value of "conventional wisdom" vs looking at the underlying supply and demand dynamics of a career. Sort of adjacent to the theme of the essay, I think the best careers are those that you can tolerate and those that have favorable supply-demand curves.
The best advice was from a pre-med advisor, who asked me if I wanted to spend the rest of my life surrounded by people who were old, sick, dying, and - not in so many words - decrepit. At that moment I realized I was not a healer, I found most bodies to be gross, and I had no business considering a medical career.
> The best advice was from a pre-med advisor, who asked me if I wanted to spend the rest of my life surrounded by people who were old, sick, dying, and - not in so many words - decrepit. At that moment I realized I was not a healer, I found most bodies to be gross, and I had no business considering a medical career.
Ah yes, I came to the same realisation - my family were pressuring me to be a doctor because my marks were there - but spending all day touching sick people was not for me. Building machines is so much more fun, and someone will pay me to do it! Crazy - I do this for free in my spare time.
The headline is provocative, as some amount of corporate-owned farmland is owned by corporations that are in turn owned by farmers and their families.
I would also push back on the notion that owner-operators are in a better position. It's more accurate to say that farmers who have assets of any kind are better off than those who don't have assets. As an example, generations back in my family we owned a lot of farmland. There were some bad investments made in the farming operation and we almost lost it all. This was in the early 1980s for those who are familiar.
If my grandfather had sold all of his land and equipment in the late 1970s and invested it in the recently-started Vanguard group, rather than re-investing in the farming operation, then my family would be wealthy. Now expecting a farmer to know about index investing and to bet on it when it was just starting is unreasonable. But it's a good lesson in diversification.
When people lionize farming owner-operators, they discount the risk the owner is taking by having so many assets concentrated in one operation. Farmers today do know about investing and diversification, and some do make the rational decision to cash out. Many also don't, for various reasons.
But it's not totally fair to expect farmers to behave differently than other asset owners because farming is seen romantically or in terms of national security.
This is a different argument than one which would decry the position of tenant farmers. Obviously being a tenant farmer owning nothing but equipment is harder than being a farmer who has $5 million invested somewhere else and rents the land he farms.
Yeah. I think it's important to think about whether or not you want millions of dollars of assets tied up in land that you can farm. For example, I don't know how to farm, and I don't wish my 401(k) was land and structures... I am happy to have higher exposure to the wider economy.
Obviously there are recessions and having your own vegetable farm and place to live is nice... but most of the time there isn't a cataclysmic recession, so you're leaving money on the table. Meanwhile, it's pretty easy to have a bad year farming. Weather. Pests. Having a lot of assets doesn't help you when they're not liquid and plants won't grow for a year.
There was a theory that autism and schizophrenia were opposite ends of a spectrum, but it's fallen into disfavor. The theory went that autism produces mechanical, rigid thought patterns while schizophrenia takes free association too far.
I think it is possible to be diagnosed with both schizophrenia and autism, which is why the theory is no longer taken seriously.
Interesting. I wouldn’t have intuited autism as being on the opposing end of psychosis, really, at least not based on my experience of both in my family.
I think your LLM temperature analogy maps interestingly onto this deprecated dichotomy between autism and schizophrenia.
One YouTuber, Jreg, used a breadth-first search (schizophrenia) vs depth-first search (autism) analogy when comparing the two, but I think your temperature analogy is more apt. Higher temperature results in more disorganized thoughts, as in schizophrenia. And if you buy into the idea that the root of most schizophrenia is thought disorder, then the analogy implies that dialing up the temperature corresponds to more signs of psychosis in speech.
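To make the temperature analogy concrete, here's a minimal sketch of how temperature reshapes a sampling distribution (plain numpy; the logits are made up for illustration):

    import numpy as np

    def sample_dist(logits, temperature):
        # Divide logits by temperature before the softmax: low T sharpens
        # the distribution onto the top choice, high T flattens it toward
        # uniform, i.e. looser, more scattered picks.
        scaled = np.array(logits) / temperature
        probs = np.exp(scaled - scaled.max())
        return probs / probs.sum()

    logits = [4.0, 2.0, 1.0, 0.5]  # hypothetical next-token scores
    for t in (0.2, 1.0, 5.0):
        print(t, np.round(sample_dist(logits, t), 3))
    # t=0.2 -> ~[1.0, 0.0, 0.0, 0.0]    (rigid, "scripted" output)
    # t=5.0 -> ~[0.37, 0.25, 0.2, 0.18] (near-uniform word salad)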
My experience with many friends on the autism spectrum is that their speech tends to be more scripted, but I certainly don't think autism and psychosis are mutually exclusive.
They are. Autism is normal, and neurotypicality is simply the socially acceptable/expected level of insanity.
I know I have unconventional beliefs, but the reason is the opposite of what people describe here. There are no patterns and commonly accepted beliefs seem unjustified, and crazy. So I need to find my own answers, but it's difficult. I think that people used to be more like this until recently, and people messed it up because iron is toxic, and heavy metals belong in the brain. I become more and more like this as I keep taking them, and it's better. I admit my IQ has probably dropped, but it's better this way.
I think "Habsburg" is a good enough descriptor. It encompasses the Catholic, northern-influenced culture (if not actually in the north like Trieste) as well as the baroque, counter Reformation-derived artistic styles.
They helped me out for my 8th grade science fair! I wanted to build a shake table to simulate the effects of earthquakes on buildings. (tl;dr: the taller the building, the more flexible it is. At least according to my experiment, which did get me a place at the state finals.) They helped me size the motor and even had an AC one I could plug directly into the wall. This was the Geneva location.
Ok, but please don't post unsubstantive comments to Hacker News.
We want curious conversation here. That can only happen when people say new things. Generic denunciations like what you posted here not only are nothing new, they're as repetitive as it gets.
And if you give the scammers a platform that's not useful for anything else than building endless variations of memecoins and productless fake-tech penny stocks, what is the expected output?
Not sure all crypto is necessarily a scam, but sure, that world is chock full of scammers trying to out-scam each other.
It's all scams. Even if the coin itself is not a total scam, the exchanges do some of the shadiest things you can imagine. They'll actively trade against their customers, do insider trading, rugpull their customers, get "hacked" by an insider who steals all your coins, etc.
Even BTC and ETH are shady. ETH should have been classified as a security. BTC is an environmental disaster and is centralized among a few large miners.
I don't think that it's all a scam, but I do think that it gives scammers a vehicle to con people without oversight or regulation. They also end up with "real" money which can be spent.
You can keep telling yourself that, but there's enough utility and momentum behind it that it is not going anywhere. I'm saying this as a person with no stake in the matter, just observing from the sidelines.
Interesting point about wash trading being somewhat untraceable. How about a coin that attaches a hash or whatever to each chunk of the token as it moves around, so every coin has a backstory written into the blockchain? Then again, that sounds like current fiat banking.
What is the utility behind it? I don't see people using it to actually do any transactions, because of the volatility and fluctuating transaction fees. Scams can have momentum behind them too.
You can sell ownership across the world instantly, and you can use a centralized authority for authenticity and grading. Done. All for less cost than crypto and without enabling DPRK to steal your card with no recourse.
This is exactly what happened with gold, and then people realized that basing money on gold was stupid, so now we have the system we have today. Cryptofools are two financial revolutions behind modernity.
The blockchain technology itself is a marvel and the safety/security properties of properly decentralized systems are unmatched. The whole financial world would be better written using blockchain/smart contracts, same goes for voting/governance, etc.
But these scammers. So many of them... They're easy to smell, at least, but the problem we need to fix is that the scammers get most of the online traction and attention, taking it away from the genuine builders.
>The whole financial world would be better written using blockchain/smart contracts, same goes for voting/governance, etc.
Citation needed. Governance and finance absolutely need human judgment for recovery edge cases, such as "my house burnt down containing my ID (keys) and I need access to my bank account to rebuild it."
Right. All of these "finance and government would be so much better with smart contracts" suggestions seem predicated on the idea that human beings can design a system correctly on the first attempt and deploy an immutable version of that system they can then run independently forever without any bugs or exploits that need to be fixed in the future.
It seems the common approach now is to make a v2/v3/etc. of your protocol and let your own users migrate. Previous versions will still run forever, but your frontend can push migration paths.
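Roughly this pattern, sketched in plain Python with objects standing in for immutable on-chain contracts (every name here is made up for illustration):

    # Sketch of the v1/v2 pattern: the old version is frozen and keeps
    # running; users opt in to moving their state to the new version.
    class ProtocolV1:
        def __init__(self):
            self.balances = {"alice": 100}

    class ProtocolV2:
        def __init__(self):
            self.balances = {}

        def migrate_from(self, old, user):
            # Opt-in: the user moves their own balance across. The old
            # version keeps serving anyone who never migrates.
            self.balances[user] = old.balances.pop(user, 0)

    v1, v2 = ProtocolV1(), ProtocolV2()
    v2.migrate_from(v1, "alice")  # triggered via the frontend's migration path
    assert v2.balances["alice"] == 100 and "alice" not in v1.balances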
>The blockchain technology itself is a marvel and the safety/security properties of properly decentralized systems are unmatched. The whole financial world would be better written using blockchain/smart contracts, same goes for voting/governance, etc.
No, blockchain really isn't all that amazing. Blockchain is useful when you need something decentralized, immutable, and trustless. That trustless bit is the important part, there are existing solutions for decentralized immutable ledgers that work far better than blockchains and require far less compute power. And it turns out that the vast majority of human and business interactions and transactions are actually based on varying levels of trust.
And that's before you get into blockchain-specific issues like the Oracle Problem.
The most important property of blockchains is verifiable transparency. That is a huge deal, and yes, most financial and government systems would be much better for people if they were built with that type of verifiable transparency.
Decentralization matters, but it isn't as important. There are many successful chains that aren't decentralized at all and are quite useful.
Most top/real projects in blockchains aren't immutable. Pretty much everything is upgradable/changeable, including blockchains themselves.
Trustless - even this part isn't true for most blockchain systems. They all involve trusting various entities to various degrees.
It is the transparency and verifiability that is the key idea and most important improvement that blockchains bring.
>It is the transparency and verifiability that is the key idea and most important improvement that blockchains bring.
You can have transparency and verifiability without blockchain, so long as you trust the parties publishing the ledger. So once again, trustless is really the key part to blockchain.
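For example, a hash-chained append-only log published by one trusted party (the certificate-transparency style of design) already gives readers tamper-evidence with no consensus mechanism at all. A minimal sketch, with made-up entries:

    import hashlib, json

    def append(log, entry):
        # Each record commits to the previous record's hash, so any
        # retroactive edit changes every later hash in the chain.
        prev = log[-1]["hash"] if log else "genesis"
        body = {"entry": entry, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        log.append({**body, "hash": digest})

    def verify(log):
        # Anyone with a copy can re-derive the chain; no consensus needed,
        # only trust that the publisher honestly pins the latest head.
        prev = "genesis"
        for rec in log:
            body = {"entry": rec["entry"], "prev": rec["prev"]}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev"] != prev or rec["hash"] != digest:
                return False
            prev = rec["hash"]
        return True

    log = []
    append(log, {"from": "alice", "to": "bob", "amount": 10})
    append(log, {"from": "bob", "to": "carol", "amount": 3})
    assert verify(log)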
No, they don't. Bank networks are fundamentally permissioned, only authorized entities are allowed to participate. Verifiability is a political/legal decision between these entities, not a technical limitation of existing systems.
I would say most current blockchains are pretty bad at privacy. Look at what onchain sleuths do.
Rather, blockchains enable anyone to provably check the state of the database. When your fintech app logs you out, you have no proof of your balance, just frustrating customer-service hell for days/weeks/months.
> The whole financial world would be better written using blockchain/smart contracts, same goes for voting/governance, etc.
Eh? I thought blockchain didn't scale? As far as I can see, blockchain swaps trust in banking institutions (not to simply make up numbers in their banking systems) for trust in a majority-based voting system where you don't know the people involved and just hope the bad guys don't have a majority.
And the computational cost of running the distributed voting system is many orders of magnitude higher than the computational cost of transactions in the we-trust-the-bank model.
As for smart contracts: a quick google suggests that the cost of deploying a smart contract on Ethereum is around 500 dollars - quite an overhead for the vast majority of financial transactions.
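Though to be fair, the dollar figure isn't fixed: deployment cost is gas used x gas price x ETH price, all of which swing wildly. A back-of-the-envelope sketch (every number below is an assumption for illustration):

    # Every number below is an assumption for illustration only.
    gas_used       = 1_500_000  # mid-size contract deployment (assumed)
    gas_price_gwei = 30         # network gas price (assumed; spikes often)
    eth_price_usd  = 3_000      # ETH/USD (assumed)

    cost_eth = gas_used * gas_price_gwei * 1e-9  # 1 gwei = 1e-9 ETH
    cost_usd = cost_eth * eth_price_usd
    print(f"{cost_eth:.4f} ETH ~ ${cost_usd:,.2f}")  # 0.0450 ETH ~ $135.00

At a 150 gwei spike the same deployment lands around $675, which is presumably where figures like $500 come from - either way, the point stands that it's an enormous overhead for ordinary transactions.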
> The whole financial world would be better written using blockchain/smart contracts, same goes for voting/governance, etc.
These are all scams, too. No.
https://en.wikipedia.org/wiki/The_DAO serves as a nice concrete example. "Code is law" fell apart as soon as the code resulted in a large enough bad outcome for enough people.
Same as email, I mean: most of it is scams and ads, so who cares about the tech or the other people actually using it? It's not like we're in a place tech-literate enough to separate a technology from its users beyond the absolute surface.
When I studied political science and economics in college we had a lot of really intriguing discussions around the rise of fiat currency, its pros and cons, its effect on global trade, etc. Some really well thought out opinions on both sides, from "nothing is real, who cares what this piece of paper means" to people who thought it was all a "globalist" scam and that we should be giving the grocer tiny pieces of gold in exchange for bread. This was before bitcoin existed.
Post-BTC it seems like anyone who uses the term "fiat" to describe currency in any non-academic setting is about to launch into some nonsensical "bitcoin fixes this" tirade, they think everything is some jewish space laser pyramid scheme, and you wonder if they have a sovereign citizen paper license plate on their car.
"Fiat is a pyramid scheme" is a dog whistle except the person blowing it doesn't know.
I don't understand how but somehow "fiat currency" in a comment seems fine and "fiat" seems derogatory in that "get ready to read some crazy shit" kind of way.
Eh, I'm still a goldbug who thinks fiat is a scam and I hate crypto too. I actually don't care about gold specifically, I just think currency should be backed by some sort of real good that's durable and relatively rare. The goal being to make currency harder for governments and banks to manipulate (the whole reason we backed out of Bretton Woods was because our manipulations had finally caught up to the realities of our finite gold reserves).