
> Currently, the most plausible theory emerging from her team’s research points to metabolism: Healthy and cancerous cells may process reactive oxygen species—unstable oxygen-containing molecules generated during radiation—in very different ways.

Reminds me of this, which (I think) was linked here a while ago: https://www.nature.com/articles/s12276-020-0384-2

It really does feel like all these piecemeal cancer treatments are converging on something resembling a cure.


There was also a study that showed that chemotherapy efficacy was enhanced by fasting before treatment.

It seems that when calories are scarce, healthy cells turtle up while cancer cells keep consuming, so fasting reduces absorption rates in healthy tissues and thus collateral damage.


Healthy cells CAN turtle up, whereas cancer cells engage in unregulated reproduction. Also, some cancer cells can only consume glucose, which, in a fasted state, would mean that the majority of energy would be in ketones (if the individual were metabolically healthy), starving the cancer cells to death.

Why wouldn’t a strict keto diet be a cure for those cancers?

Because the cancer cells adapt! (The fast reproduction and high mutation rate of cancerous cells make that process even quicker than the evolution of antibiotic resistance.)

The body will actually turn protein into glucose, so it will never be completely glucose-free.

Many people try it, but the results are mixed.

Please don't throw around random "a study that showed" about cancer treatments and chemotherapy. If you really think it needs to be shared, share the study and while you're at it, check in with a good oncologist or knowledgeable friend too. In my ~10 years of enduring chemo and other treatments, the amount of garbage you have to wade through from "well meaning" anecdata like "wheat grass" or "smoke huge bongloads" or "don't eat sugar" makes an already horrible process worse.

And yes I checked this with my onc at MSK. Dietary glucose in particular -- if you cut out enough sugar to starve cancer cells you would be doing lots of damage elsewhere as well.


There is this review that didn't find any effects: https://www.mdpi.com/2072-6643/15/12/2666 Note that they excluded 274 out of 283 studies, considering only 9. It's in MDPI, which is not great. So the jury is still out, I guess.

The jury is not out -- it's an unconfirmed hunch that, as the study you link notes, risks harming patients who are having trouble keeping down food as it is.

This is just keto and fasting fans pushing their obsession on cancer patients. Same for marijuanauts -- anti-nausea drugs have long outperformed cannabinoids, but you still have stoner friends offering you spliffs (ok, save them for later).


People eat keto diets all the time. What damage do you think they are doing to themselves?

The damage risked by keto diets includes nutrient deficiency, liver and kidney problems, constipation, fuzzy thinking, and mood swings.

Of these risks it's the potential veering into liver and kidney problems that deserves the closest monitoring.

See: https://www.health.harvard.edu/staying-healthy/should-you-tr...

~ Howard E. LeWine, MD, Chief Medical Editor, Harvard Health Publishing


You’re talking about elimination diets, I’m talking about skipping a meal.

I'm saying that cancer treatments are some of the most scientifically-validated procedures out there, because there's essentially unlimited money to pay for them. They have eliminated or modulated any negative side effect they can, via improved anti-nausea drugs, careful dosing+timing, etc.

Still, you can experience all sorts of discomforts during the treatment. I nearly fainted and got horrible chills when getting oxaliplatin for the first time. You're saying I should have _fasted_ for this?


There was a study suggesting that chemotherapy works best in the _morning_. Derek Lowe had an article about this:


Did you notice the huge warning at the top of the article? This is garbage science.

No. I read the article when it came out, so I missed the update.

I guess it was too good to be true.


> Nothing will change

Things have already changed!


If we don't need plasma physicists anymore then we probably have fusion reactors or something, which seems like a fine trade. (In reality we're going to want humans in the loop for the foreseeable future.)

> What matters in the end is that this tech is not to empower a common person (although it could).

How do you figure? 20 dollars/month is insanely cheap for what OpenAI/Anthropic/Google offer. That absolutely qualifies as "empowering a common person". It lowers barriers!

A lot of the anti-AI sentiment on HN concerns people losing their jobs. I don't think this will happen: programmers who know what they're doing are going to be way, way more effective at using AIs to generate code than others.

But even if it is true and we do see job losses in tech: are software devs really "in a precarious position"? Do they really qualify as "those that have little"? Seems like a fantasy to me. Computer programmers have done great over the past 30 years.

More broadly, anti-AI sentiment comes from people who dislike change. It's hard to argue someone out of that position. You're allowed to prefer stasis. But the world moves on and I think it's best to remain optimistic, keep an open mind, and make the most of it.


It's also, for example, the studies finding that when companies adopt AI employees' jobs get worse. More multitasking, more overtime, more burnout, more skills you're expected to learn (on your own time if necessary), more interpersonal conflict among colleagues. And this is not being offset by anything tangible like an increase in pay.

$20/month in return for measurable reductions in quality of life is not an amazing deal. It's "Heads I win, tails you lose."

Or maybe, if you're thinking of it as an enabler for a side hustle or some other project with a low probability of a high payoff, it can slightly more optimistically be regarded as a moderately expensive lottery ticket.

That's not pessimism; it's just a realistic understanding of how the tech industry actually works, informed by decades' worth of experience.


> It's also, for example, the studies finding that when companies adopt AI employees' jobs get worse. More multitasking, more overtime, more burnout, more skills you're expected to learn (on your own time if necessary), more interpersonal conflict among colleagues. And this is not being offset by anything tangible like an increase in pay.

Can you share those studies? I'm pretty skeptical of this effect. I find that AI has made my job easier and less stressful.

In general, I think your attitude is not realistic; it's just general pessimism about the world ("everything new is bad") that is basically unfounded.



Paywalled, and HBR articles are famously not a good source.

> More multitasking, more overtime, more burnout, more skills you're expected to learn (on your own time if necessary), more interpersonal conflict among colleagues. And this is not being offset by anything tangible like an increase in pay.

Similar things happened with the adoption of computers in the workplace. Perhaps there's a case for banning all digital technology and hiring typists and other assistants to perform the work using typewriters and mechanical calculators? There would certainly be less multitasking when you have 8 hours worth of documents to retype and file/mail. Perhaps there would be less overtime when your boss can see you have a high workload by the state of papers piled upon your desk. Or maybe we can solve these problems in a different way.


> How do you figure? 20 dollars/month is insanely cheap for what OpenAI/Anthropic/Google offer. That absolutely qualifies as "empowering a common person".

This must be sarcasm. This has to be.


> I don't think this will happen

Block just laid off 40% of their company citing AI.


> Block just laid off 40% of their company

Because the company was being horribly run and over hired and "pivoted to blockchain" for no fucking reason.

> citing AI.

Because it's 2026 and they thought that would work to bullshit a few people about point one, which apparently it did.


Tech companies have been laying off employees for a while now. I think it's mostly due to pandemic overhiring and higher interest rates but I suppose we'll see.

I agree that AI was not the _actual_ reason; however, it did allow them to do massive layoffs without admitting they are doing poorly and without taking a massive hit to their stock price.

> I think it's mostly due to pandemic overhiring and higher interest rates

It's not because of pandemic overhiring, and if that were true, the layoffs in 2021-2022 would have handled it. It's 2026. The people getting laid off (on average) haven't worked at these companies since before the pandemic, they got hired in ~2023 (average tenure at a tech company is ~3 years).

It's not because of AI either. Nobody is replacing jobs with AI, AI can't do anyone's job.

It's not because of interest rates. People hired like crazy when interest rates were this high in the oughts.

It's because Elon Musk's Twitter purchase and subsequent management convinced every executive in tech that you can cut to the bone, fuck your product's quality completely, and be totally fine. It's not true, but the downsides come later and the cash influx comes now, so they're doing it anyway.


> It's because Elon Musk's Twitter purchase and subsequent management convinced every executive in tech that you can cut to the bone, fuck your product's quality completely, and be totally fine.

I agreed with you up to this point. Twitter largely operated in the red for its entire existence prior to his "restructuring" to make it leaner and profitable. In my opinion, twitter went to shit when the incentive for creating engagement switched from gaining social capital to gaining... erm... actual capital. The laissez-faire attitude about allowing fairly terrible behavior on there gave it a PR black eye that probably didn't help either in the eyes of advertisers.

If I had to guess what happened with Block (and that's what we're all doing, guessing): a CEO's job is to make the line go up, and saying you introduced tools to increase productivity with half the staff (especially if you're overstaffed) seems to me a pretty easy way to do that. I saw someone on here refer to it as "Vibe CEOing", which I think is pretty on point. Again, just my opinion/guess.


Junior dev hiring is down 60%; that's not just a post-pandemic correction.

> It was a nice little feature that I knew exactly how to do, but I hadn’t prioritized getting done yet because there were a bunch of other things on my plate. But with a little assist, it was quick to implement.

Exactly how I feel. AI has allowed me to work on projects that I've wanted to work on but didn't have the time/energy for.


> AI has allowed me to work on projects that I've wanted to work on but didn't have the time/energy for.

So let's flood the world with projects nobody, including their authors in the first place, cared for enough to dedicate time and energy to.


> Clearly, these models still struggle with novel problems.

Do they struggle with novel problems more or less than humans?


Less than most humans, but more than many humans.

LLMs are great with TypeScript. But the fact remains that there are many different browsers and several runtimes (Node, Deno, Bun), each of which may have slightly different rules.
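A concrete illustration of those differences: file reading, for instance, is `fs/promises` in Node, `Deno.readTextFile` in Deno, and `Bun.file` in Bun, so portable scripts often have to sniff the runtime first. A minimal sketch (illustrative only, relying on each runtime's documented global object):

```typescript
// Sketch: telling JS runtimes apart by their documented global objects,
// since APIs (file I/O, env vars, etc.) differ between them.
function detectRuntime(): "bun" | "deno" | "node" | "browser" {
  const g = globalThis as any;
  if (typeof g.Bun !== "undefined") return "bun";   // Bun exposes a `Bun` global
  if (typeof g.Deno !== "undefined") return "deno"; // Deno exposes a `Deno` global
  if (typeof g.process?.versions?.node === "string") return "node";
  return "browser"; // none of the above: assume a browser-like environment
}

console.log(detectRuntime());
```

Note that the check order matters: Bun also provides a Node-compatible `process` object, so testing for `Bun` first avoids misclassifying it as Node.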



> As someone familiar with those ecosystems, I'm have trouble envisioning the degree of operator error or imprecision that would cause this to be a problem.

Because you are familiar with the ecosystem? Just like Python devs saying it's normal to juggle three package managers to run a simple script.

Back to your original point: you are also biased by what you are used to using.

Thankfully, I don't have to touch any JS project anymore, but oh god what a nightmare it is. Even just having CLI tools that require constant updates to avoid breaking randomly during the day is enough pain that I won't touch that with a 10m pole.




I genuinely think you're having a mental health episode. I don't see any other way to justify your reaction to my post. Or the fact that you've now gone through my post history and made several more rude posts.

On the off chance that I'm wrong and this is just your normal personality (in which case, God help you): what extraordinary claim did I make? Quote it.


> Take your meds

> I genuinely think you're having a mental health episode

It's not acceptable to post like this on HN. It counts as name-calling, snark, flamebait and other breaches of the guidelines. We're trying for curious conversation here, not this. I understand the other commenter was the first to escalate, but it takes two to make a hellish subthread like this.

https://news.ycombinator.com/newsguidelines.html


I think you have a pattern of not reading comments that answer your questions before you ask them, which then causes negative reactions.

> On the off chance that I'm wrong and this is just your normal personality (in which case, God help you):

When asked to stop making personal attacks, you made more. I've flagged your comment and reached out to @dang; I think your behavior is inappropriate for this forum.

Looking at the site guidelines, do you feel your comments are appropriate and following the spirit of the site rules? Really?


If your first comment in the subthread wasn't escalatory enough, this phrase in your second comment was way out of line:

> Surely you're not just talking out of your ass

It's not cool to start a flamewar like this then call in moderator support when replies get heated. We can't take sides in a flamewar like this when both participants are going at each other.

You evidently understand plenty how HN's guidelines and moderation work. Please make more effort to uphold the standards that are expected here in future.


Maybe your cultural frame is that mental health isn't a protected category and it's an appropriate insult. That's not what the law says in professional environments and that's not been my experience in professional discourse.

If your attitude is that if one person litters it's now ok for anyone else to dump on protected categories, it's your clubhouse and you can set the rules.

That's your right, and it's also my right to say that I don't want to be associated with this forum even indirectly.

Could you provide me with a good email address to send my CCPA request to delete all my data and comments and account? I sent emails to a few YC addresses with:

"I am writing to request that you delete my personal data from all of your paper and/or computer records pursuant to Section 1798.105 of the California Consumer Privacy Act (CCPA)

To the extent that you rely on consent to process my personal data, I withdraw that consent. To the extent that you rely on your 'legitimate interest' to process my personal data, I object to the processing as there are no overriding legitimate grounds.

If you are selling my personal data to third parties, please consider this email as my direction to you not to sell my personal data to third parties.

Please don’t ask me to perform a self-service process such as locating my information on your website, filling out a form, or providing a mobile advertising ID. These requests place an undue burden on my side.

If you are not able to comply with my request to delete all of my personal data, please advise as to the specific reason for which this request cannot be acted on. Please advise which sections and subsections of the law you are relying upon, and identify the specific reason for which you are relying on those exceptions, such as which legal obligation, or internal purpose or use. Please delete all my personal data which does not fall under these exceptions."

I have not received a response.

I can write again in 46 days with a complaint attached mentioning you personally and a letter from my state representative indicating they'd like to know more as well. Or you can resolve this quickly in a mutually amicable way by just deleting my account and all my comments.


I replied to the other commenter before I replied to you and made it clear that attacks on fellow community members with insults about mental health are not acceptable.

YC's privacy policy and instructions for requesting data removal can be found here: https://www.ycombinator.com/legal/.


Does that really happen "any time you go to a doctor's office"?

That aside, what if novel therapies like this are linked to the fact that US healthcare is expensive? If you make it cheap -- as in other countries -- there's less incentive for companies to invest and you get less research and fewer breakthroughs. Also fewer doctors, hospital beds, and more rationing.

In an ideal world, everyone would have exactly the right amount of healthcare. But our world isn't ideal, it runs on incentives, and it's not clear to me that all the hand-wringing over US healthcare will lead to positive changes.


> Does that really happen "any time you go to a doctor's office"?

Yes. I recently made a resolution to get established with all the medical professionals I don’t have set up: a primary care doctor, dermatologist, etc. Over the past 2 months I’ve visited them and had to go back a couple of times. I’ve literally overheard insurance-related issues in all cases, whether it was the person in line before me or just people complaining while I’m in the waiting room.

Just last week I was waiting to get my blood drawn and the woman at the front desk, after continued prodding by an elderly man frustrated with lack of coverage, out loud said “Well, that’s insurance in America for you. Go ahead and call the number on the back of your insurance card because we can’t do anything for you.” Just deeply disheartening stuff to watch a late 80s man not realize after 15 minutes of being tossed between automated insurance phone responses that he simply won’t get the help he needs.


Well, I just had an MRI and didn't see one elderly person complaining about bills. Guess that cancels our anecdotes out.

This point of view runs directly against mutually agreed upon matters of fact: https://petrieflom.law.harvard.edu/2022/03/15/ama-scope-of-p...

The US healthcare system is not a market system nor did it occur naturally. Do you have any conflicts of interest that could cause you to have an emotional need to misunderstand basic information about it?


> That aside, what if novel therapies like this are linked to the fact that US healthcare is expensive?

You don't have to wonder; people have been writing about this as a major factor in costs for nearly 50 years.


I think this concern is overblown. AI is an incredible teaching tool. It's probably better for teaching/explaining than for writing code. This will make the next generation of junior devs far more effective than previous generations. Not because they're skipping the fundamentals, but because they have a better grasp of the fundamentals due to back-and-forth with infinitely patient AI teachers.

Not in my experience. They just regurgitate code, and juniors don’t know if/why it’s good or bad and consequently can’t field questions on their PR.

“It’s what the LLM said.” - Great. Now go learn it and do it again yourself.


I always say "own the output". No need to do it by hand, but you'd better damn well research _why_ the AI chose a solution, what alternatives there are and why not something else, how it works, and so on. Ask the AI, ask a separate agent/model, Google for it, I don't care, but "I don't know, the LLM told me" is not acceptable.

Unless your company is investing in actually teaching your junior devs, this isn't really all that different than the days when jr devs just copied and pasted something out of stack overflow, or blindly copied entire class files around just to change 1 line in what could otherwise have been a shared method. And if your company is actually investing time and resources into teaching your junior devs, then whether they're copying and pasting from stack overflow, from another file in the project or from AI doesn't really matter.

In my experience it is the very rare junior dev who can learn what's good or bad about a given design on their own. Either they needed to be paired with a senior dev to look at things and explain why they might not want to do something a given way, or they needed to wind up having to fix the mess they made when their code broke something. AI doesn't change that.


This just means you have bad juniors who aren’t interested in learning.

It's easier than ever to be lazy. Hard to blame them, because the temptation to deliver and prove oneself as a junior is always high.

I can't count how many seniors have forgotten what it means to understand the code they're merging since AI coding tools became popular. So long as businesses only value quantity, the odds are stacked against juniors.


For me, the hardest part of software development was learning incantations. These aren't interesting; they're conventions you have to learn to get stuff working. AI makes this process easier.

If people use AI to generate code they don't understand, that will bite them. But it's an incredible tool for explaining code and teaching you boring, rote incantations.


> AI is an incredible teaching tool.

As a junior, my top issue is finding valuable learning material that isn't full of poor or outright wrong information.

In the best and most generous interpretation of your statement, LLMs simply removed my need to search for the information. That doesn't mean it's not of poor quality or outright wrong.


Here's a tip from an old timer: read the official docs.

I work a lot with juniors, and they all seem to prefer watching videos. But videos, in my opinion, are a slow way to gain superficial knowledge.

Do it the hard way and read the official docs, it will be your superpower. Go fast over the easy parts, go slow over the hard parts, it's that simple.


I suspect that the quality is, ironically, correlated with the expertise of the user (i.e. it is knowledgeable if you are knowledgeable), which puts you in a conundrum. I can report that with a couple decades of experience, LLMs are giving me high-quality, correct results, but I can already see that it somehow doesn't work as well for some of my less experienced colleagues. A lot of what I've been doing over the last couple months is trying to find how to make it "just work" for them.

As a general principle, take advantage of the fact that it can easily generate stuff. If you don't know whether something is true, have it prove it. Make a PoC/test/benchmark to demonstrate what it's saying. Have it pull metrics that you have access to. Add more observability. Create feedback loops (or rather, ask it to create feedback loops). They're very good at reasoning given access to the ground truth, so give them more ability to ground themselves.

They also have fantastic knowledge of public things, but no knowledge of your company, so your instructions should mostly be documentation of what's unique to your company. If it can write an instruction on its own (e.g. how to use git or kubernetes), it is a useless instruction; it already knows that. What it doesn't know is e.g. where your git server is. It also doesn't know what matters to your company: are you a startup trying to find product market fit? Are you an established company that is not allowed to break customer setups? etc. You might even be able to ask it what kinds of questions a senior might ask about how a company/team works when coming into a new job, and then see if you can answer those questions (or find someone who can). In fact, go ask chatgpt:

> What are some questions a senior engineer might ask when coming into a new role to make themselves more effective?

> What are some questions a principal engineer might ask when coming into a new role to make themselves more effective?

> What are some questions an engineering manager might ask when coming into a new role to make themselves more effective?

> What are some questions an engineering director might ask when coming into a new role to make themselves more effective?


Objectively speaking, students that use AI score more than a full grade point below their peers not using AI.

AI makes students dumber, not smarter.


Research [0] from Anthropic about juniors learning to code with AI/without:

> the AI group averaged 50% on the quiz, compared to 67% in the hand-coding group

And why would they do better? There's less incentive to learn because it's so easy to offload thinking to AI.

[0] https://www.anthropic.com/research/AI-assistance-coding-skil...


Only for people who want to be taught. This argument keeps coming up again and again, but people in general don't want to learn how to fish; they want the fish on a plate, ready to eat, so that they can continue scrolling. I see this a lot in juniors: they are solution seekers, not problem solvers, and AI makes this difference a lot worse.

> AI is an incredible teaching tool. It's probably better for teaching/explaining than for writing code.

It is, but how do you teach people who think their new profession is being a "senior prompt engineer" (with 4 months of experience) and who believe that in 12 months there won't be any programmers left?


I do agree it’s a great tool, so much better than trying to hope and pray someone on the internet can help you with “I don’t understand this line of code.”

However, it’s got a lot of downsides too.


A teacher who just gives you the solution isn’t a good teacher.

You can use AI as a teacher but how many will do that?


Highly motivated people will use whatever tools they have to get better at something, whether they have a textbook, the internet or a LLM to use.

The skill of the very top programmers will continue to increase with the advent of new tools.


And how many will not? For most people it's just a job to get money; they will put exactly as much effort into it as is necessary to produce an acceptable result.

Presumably you do a lot of back-and-forth with AI, and as many other commenters have pointed out, this seems to have made you more credulous and less informed.

It's almost like the Trump administration wanted to switch providers and this whole debate over red lines was a pretext. With this administration, decisions often come down to money. There are already reports that Brockman and Altman have either donated or promised large sums of money to Trump/Trump super PACs.

Can't recall the source right now (it would've been on one of the several podcasts I listened to on Friday, I think), but there's a story/rumor to the effect that at some point during Claude's earlier deployment at the Pentagon — might well have been in the context of the Venezuela/Maduro operation — someone at Anthropic had in one way or another flagged some kind of legal(ity) concerns regarding the relevant operation (and/or perhaps Anthropic's role in it) with Palantir, who was maintaining the Claude deployments for the DoD. The story goes that after Palantir had then relayed this information further to the DoD, Hegseth had a major fit over how Anthropic's hippie-ass Northern California woke bros should have no say in matters relating to national security, that of Hegseth's "warfighters", or whatever, etc...

Also, in the latest Hard Fork episode, Casey or Kevin mentions how the DoD undersecretary in charge of this contract apparently doesn't get along with, or even pretty much hates, Amodei for some reason. I think this might be the same undersecretary who actively commented on the whole contract-term controversy on X yesterday. Too bad I can't recall his name either.

