Hacker News: embedding-shape's comments

Good news: https://www.congress.gov/119/bills/hres1155/BILLS-119hres115... ("Impeaching Donald J. Trump, President of the United States, for high crimes and misdemeanors." - APRIL 6, 2026)


According to Wikipedia there were others before. What does it mean? Since it's not even in the news, I don't get the significance of this. Is this a draft? Does it need support that doesn't exist?


It's a bit late. I wouldn't call that good news until it is over and done with.


> It's an okay model. My biggest issue using GLM 5.1 in OpenCode is that it loses coherency over longer contexts

Since the entire purpose, focus and motivation of this model seems to have been "coherency over longer contexts", doesn't that issue make it not an OK model? It's bad at the thing it's supposed to be good at, no?


long(er) contexts (than the previous model)

It does devolve into gibberish at long context (~120k+ tokens by my estimation but I haven't properly measured), but this is still by far the best bang-for-buck value model I have used for coding.

It's a fine model


I have GLM and Kimi. Kimi was in most cases better, and my replacement for Claude when I run out of tokens. Now I'm finding myself using GLM more than Kimi. It's funny that GLM vs Kimi is like Codex vs Claude, where GLM and Codex are better for backend and Kimi and Claude more for frontend.

As Kimi did a huge amount of Claude distillation, that seems to be somewhat grounded in data

https://www.anthropic.com/news/detecting-and-preventing-dist...


Have you tried gemma4?

I'm curious how the bang for buck ratio works in comparison. My initial tests for coding tasks have been positive and I can run it at home. Bigger models I assume are still better on harder tasks.


> e.g. $200 sub = $200 in API Codex usage [...] In terms of raw Codex usage, you could just as easily buy API usage.

I don't think it's made out like that. I'm on the ChatGPT Pro plan for personal usage, and for a client I'm using the OpenAI API, both almost exclusively with GPT 5.4 xhigh. I've done pretty much 50/50 work on client/personal projects, and the client's API usage is up to 400 USD after a week of work, while the ChatGPT Pro limit has 61% left and resets tomorrow.

Still seems to me you'd get a heck more out of the subscription than API credits.


This. ChatGPT Pro personal at $20/month and using GPT 5.4 xhigh is the best deal currently. I don't know if they are actually losing money or betting on people staying well under limits. Clearly they charge extra to businesses on the API plans to make up for it.

In the future, open models and cheaper inference could cover the loss-leading strategies we see today.


The ChatGPT Personal Pro plan hasn't had the change yet. It is rolling out to Enterprise users first.

Right, because you're on the old and not new structure.

They just rolled it out for new subscribers and existing ones will be getting it in the "coming weeks." Enterprise already got hit with this from my understanding.


> and I have to consume it in the way that it's presented

I'm just curious, why do you "have to"? Don't get me wrong, I'm making the same choice myself too, realizing a bunch of global drawbacks because of my local/personal preference, but I won't claim I have to, it's a choice I'm making because I'm lazy.


What are the reasonable options besides a Claude Code subscription (or an equivalent from Codex or Copilot)?

I could pay API prices for the same models, but aside from paying much more for the same result that doesn't seem helpful

I could pay a 4-5 figure sum for hardware to run a far inferior open model

I could pay a six figure sum for hardware to run an open model that's only a couple months behind in capability (or a 4-5 figure sum to run the same model at a snail's pace)

I could pay API costs to a semi-trustworthy inference provider to run one of those open models

None of those seem like great alternatives. If I want cutting-edge coding performance then a subscription is the most reasonable option

Note that this applies mostly to coding. For many other tasks, local models or paid inference on open models are very reasonable. But for coding, that last bit of performance matters


I use my OAI subscription on my Claude Code. I get the benefit of the Claude Code interface with the intelligence of OAI models.

https://prabal.ca/posts/claude-code-chatgpt-subscription/


My job title is "provide value".

I'm given a tool that lets me 10x "provide value".

My personal preferences and tastes literally do not matter.


As a professional you have a choice in how you produce whatever it is you produce. Sure, you can go for the simplest, most expensive and "easiest" way of doing things, or you can do other things, depending on your perspective and requirements. None of this is set in stone, some people make choices based on personal preferences, and that matters as much to them as your choices matter to you.

> Note that "gas" in this context means natural gas, not gasoline

What's up with Americans consistently calling things "wrong" like this? "Gas" isn't even the right state of matter for the subject, nor is "football" actually a sport where the ball is mostly for the foot; it's almost like things are intentionally named badly.


"Gas" is short for "gasoline" which means "gas oil". That is a perfectly cromulent name for a liquid.

"Football" is a different game in the US because it arrived there from England in the 19th century when carrying the ball was allowed. In England the sport eventually split into distinct sports: association football (aka soccer) and rugby. In America they evolved the game independently but didn't change the name.

Hope that clears it up.


> but didn't change the name.

That's the part that don't make no sense, so no, still very unclear why Americans keep insisting on calling things the wrong names :)


Because changing a name that's been in use for decades is very confusing and unnecessary. A rose by another name etc.

The full names of the two rugby codes are "rugby union football" and "rugby league football". So Americans aren't alone in their cavalier use of the word "football".

See also: https://en.wikipedia.org/wiki/Australian_rules_football


> Because changing a name that's been in use for decades is very confusing and unnecessary

Yeah, that never happens, not even with important national institutions or anything like that.


Why do Germans and Dutch call gasoline "benzin"? It's clearly not benzene.

It is the British that changed things. They also used to call it soccer and then changed in the 1980s. Canada and Australia still use soccer, probably because they have native footballs.

What's up with the British calling refined gasoline "petrol"? It's not even an abbreviation for the word, it's a totally different material? You don't go calling refined aluminium "bauxite", but you do call gasoline "petrol".

We're both wrong. It's a liquid at room temperature, and it's not called petroleum.


Yes, we all know the French are right on this one by calling it "essence"

Seems cromulent to me. One of the common meanings of essence is "a product of distillation" (compare e.g. essential oils, which are obtained through steam distillation). And gasoline is obtained through fancy distillation

Also, they drive on a parkway and park on a driveway.

Did you know that "Danishes" were... Austrian?

"Running it through an LLM" doesn't mean "give the LLM my text -> copy-paste the output of the LLM", does it? Checking against an LLM and then using your own voice feels completely fine, just another type of validation before you share something. But if you actually let the LLM rewrite what you say, then I feel like that's beyond "running it through an LLM"; it's basically letting the LLM write your text for you instead of just checking/validating.

The decline of writing is something that's been going on for a long time. Well-written and grammatically correct emails have been on the downturn. Consider how often people send emails in all lower case, lacking punctuation, or even without any sentence structure.

The "you need to write in a more professional business oriented way" is something that a lot of people are having difficulty with. Yes, this needs to be addressed earlier in someone's education more forcefully - but the SMSification of long form text started a while ago.

With that said, there's still the "Ok, you need to write long form with correct grammar when sending an email that a director or VP is CC'ed on" expectation. It used to be Grammarly as the "install this and have it fix up your grammar and tone" tool ( https://web.archive.org/web/20191104093353/https://www.gramm... GPT-1 timeframe there). However, LLMs of today seem to be more accessible than Grammarly, and they largely do the same thing: fix up and refine tone.

What I don't see from back then is people decrying Grammarly saying that it's making everything sound the same.

I'm also not sure I would prefer the pre-fixup emails to what is produced by an LLM, unless sending coworkers to remedial writing classes is something that is acceptable.


Yes, checking and validation is one thing, but there are several engineers in my area who only communicate using agent copy-paste. I challenged one fellow about that and he was furious!

> "running it through an LLM" doesn't mean "Give LLM my text -> Copy-paste the output of the LLM" does it?

The article seems to imply this is what is happening, as writing style converges towards LLM style. You can call it what you want, but the important bit is that this is how it appears LLMs are being used.

> Checking against an LLM then using your own voice feels completely fine

Why use an LLM? If you're worried about style, starting with your own voice is more efficient. If you're worried about facts, looking something up in a primary source is best, and is probably cheaper on a few axes, especially if you need to check/validate anyway...


> because that article is full of Claude-isms

Not sure how I feel yet about the whole "LLMs learned from human texts, so now the people who helped write human texts are suddenly accused of plagiarizing LLMs" thing, but so far it seems backwards, and like a low-quality criticism.


Real talk. You're not just making a good point -- you're questioning the dominant paradigm

Horrible

I'm sure some human writers would write:

> The specification forces this question on every path through the IMU mode-switching code. A reviewer examining BADEND would see correct, complete cleanup for every resource BADEND was designed to handle.

> The specification approaches from the other direction: starting from LGYRO and asking whether any paths fail to clear it.

> *Tests verify the code as written; a behavioural specification asks what the code is for.*

However this is a blog post about using Claude for XYZ, from an AI company whose tagline is

"AI-assisted engineering that unlocks your organization's potential"

Do you really think they spent the time required to actually write a good article by hand? My guess is that they are unlocking their own organization's potential by having Claude write the posts.


> Do you really think they spent the time required to actually write a good article by hand?

Given that I've been familiar with Juxt for a long time, used plenty of their Clojure libraries in the past and hung out with people from Juxt even before LLMs were a thing, yes, I do think they could have spent the time required to both research and write articles like these. Again, I won't claim I know for sure how they wrote this specific article, but I'm familiar enough with Juxt to feel relatively confident they could write it.

Juxt is more of a consultancy shop than an "AI company"; not sure where you got that from. I guess their landing page isn't 100% clear about what they actually do, but they're at least prominent in the Clojure ecosystem and have been for a decade if not more.


Your guess is worth what you paid for it.

Any specific sections that stick out? Juxt in the past had really great articles, even before LLMs, and I know for a fact they don't lack the expertise or knowledge to write for themselves if they wanted to. While I haven't completely read this article yet, it'd surprise me if they just let LLMs write articles for them today.

Here's one tell-tale of many: "No alarm, no program light."

Another one: "Two instructions are missing: [...] Four bytes."

One more: "The defensive coding hid the problem, but it didn’t eliminate it."


That's just writing. I frequently write like that.

This insistence that certain stylistic patterns are "tell-tale" signs that an article was written by AI makes no sense, particularly when you consider that whatever stylistic tics an LLM may possess are a result of it being trained on human writing.


These are just some of the good examples I found.

My hunch that this is substantially LLM-generated is based on more than that.

In my head it's like a Bayesian classifier, you look at all the sentences and judge whether each is more or less likely to be LLM vs human generated. Then you add prior information like that the author did the research using Claude - which increases the likelihood that they also use Claude for writing.

Maybe your detector just isn't so sensitive (yet) or maybe I'm wrong but I have pretty high confidence at least 10% of sentences were LLM-generated.

Yes, the stylistic patterns exist in human speech, but RLHF has increased their frequency. Also, LLM writing has a certain monotonicity that human writing often lacks. Which is not surprising: the machine generates more or less the most likely text in an algorithmic manner. Humans don't. They write a few sentences, then get a coffee, sleep, write a few more. That creates more variety than an LLM can.

Fun exercise: https://en.wikipedia.org/wiki/Wikipedia:AI_or_not_quiz
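The informal "Bayesian classifier" idea above can be sketched as naive log-odds combination. To be clear, this is my own toy illustration; every likelihood number and the prior below are made-up placeholders, not anything measured:

```python
import math

def combined_ai_probability(sentence_log_odds, prior_ai=0.5):
    """Combine per-sentence log-odds (positive = 'sounds like an LLM')
    with a prior, naive-Bayes style: treat each sentence as independent
    evidence and sum its log-odds onto the prior's log-odds."""
    log_odds = math.log(prior_ai / (1 - prior_ai))
    log_odds += sum(sentence_log_odds)
    return 1 / (1 + math.exp(-log_odds))  # back to a probability

# Hypothetical per-sentence judgments for a four-sentence article.
judgments = [0.4, -0.1, 0.8, 0.3]
# Prior nudged above 0.5 because the author admits using Claude for research.
p = combined_ai_probability(judgments, prior_ai=0.6)
```

With those placeholder numbers the combined estimate lands around 0.86, which is the point of the construction: weak per-sentence signals plus a mildly informative prior can still add up to high confidence.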


Here's an alternative way of thinking about this...

Someone probably expended a lot of time and effort planning, thinking about, and writing an interesting article, and then you stroll by and casually accuse them of being a bone idle cheat, with no supporting evidence other than your "sensitive detector" and a bunch of hand-wavy nonsense that adds up to naught.


To start, this is more or less an advertising piece for their product. It's pretty clear that they want to sell you Allium. And that's fine! They are allowed! But even if that was written by a human, they were compensated for it. They didn't expend lots of effort and thinking, it's their job.

More importantly, it's an article about using Claude from a company about using Claude. I think on the balance it's very likely that they would use Claude to write their technical blog posts.


> They didn't expend lots of effort and thinking, it's their job.

Your job doesn't require you to think or expend effort?


While I agree with the sentiment, using AI to write the final draft of the article isn’t cheating. People may not like it, but it’s more a stylistic preference.

Using AI and a human byline is 100% cheating.

Yeah, I agree. Don't tell me you authored something when Claude did the majority of the writing. Use Claude if you want, but don't pretend you wrote the content when you didn't.

I also hate this style of plastic, pre-digested prose. It's soulless and uninteresting. Maybe I've just read too much AI slop. I associate this writing style with low-quality, uninteresting junk.


Yet another way the mere possibility of AI/LLM being involved diminishes the value of ALL text.

If there is constant vigilance on the part of the reader as to how it was created, meaning and value become secondary, a sure path to the death of reading as a joy.


Those aren’t good examples - that’s just LLMs living for free in your head.

I am reminded of the Simpsons episode in which Principal Skinner tries to pass off the hamburgers from a near-by fast food restaurant for an old family recipe, 'steamed hams,' and his guest's probing into the kitchen mishaps is met with increasingly incredible explanations.

I’m so glad the witch hunt has moved on to phrasing so I get less grief for my em dashes.

See also: “I'm Kenyan. I Don't Write Like ChatGPT. ChatGPT Writes Like Me” by Marcus Olang', https://marcusolang.substack.com/p/im-kenyan-i-dont-write-li...

For what it’s worth, Pangram reports that Marcus’ article is 100% LLM-written: https://www.pangram.com/history/640288b9-e16b-4f76-a730-8000...


In theory, it wouldn't be too hard to settle the question of whether he used ChatGPT to write it: get Olang to write a few paragraphs by hand, then have people judge (blindly) if it's the same style as the article, i.e. which one sounds more like ChatGPT.

When people judge blindly, they are more likely to think the human is the AI and the AI is the human.

73% judged GPT 4.5 (edit: had incorrectly said 4o before) to be the human.

https://arxiv.org/abs/2503.23674

Not only are people bad at judging this, but are directionally wrong.


There is research showing the contrary that is far more convincing:

> Our experiments show that annotators who frequently use LLMs for writing tasks excel at detecting AI-generated text, even without any specialized training or feedback. In fact, the majority vote among five such “expert” annotators misclassifies only 1 of 300 articles, significantly outperforming most commercial and open-source detectors we evaluated even in the presence of evasion tactics like paraphrasing and humanization.

https://arxiv.org/html/2501.15654v2


Great find, I've submitted this preprint as a standalone item: https://news.ycombinator.com/item?id=47678270

The times I've written articles, those have gone through multiple rounds of reviews (by humans), with countless edits each time, before ending up published; I wonder if I'd pass that test in those cases. Initial drafts with my scattered thoughts are usually very different from the published end results, even without involving multiple reviewers and editors.

I hate that I can’t write em dashes freely anymore without people accusing the writing of being AI generated.

Even though they are perfect for usage in writing down thoughts and notes.


One thing you can try⸺admittedly it's not quite correct⸺is replacing them with a two-em dash. I've never seen an AI use one, and it looks pretty funky.

Since the advantage of standards is that there are so many to choose from, one lesser-used but still regionally acceptable approach (e.g. https://www.alberta.ca/web-writing-style-guide-punctuation#j...) is to use en-dashes offset with spaces.
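If you want to automate either workaround, it's a one-liner. A minimal sketch (the helper name is my own, and it implements the spaced-en-dash convention the Alberta style guide above describes):

```python
import re

def replace_em_dashes(text: str) -> str:
    """Swap em dashes (U+2014) for spaced en dashes (U+2013),
    collapsing any existing surrounding whitespace so that both
    'a \u2014 b' and 'a\u2014b' come out as 'a \u2013 b'."""
    return re.sub(r"\s*\u2014\s*", " \u2013 ", text)

out = replace_em_dashes("Humans write\u2014sometimes\u2014with em dashes.")
```

Substituting the two-em dash (U+2E3A) instead is the same `re.sub` with a different replacement string.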

I have nothing against em dashes. As long as your writing is human, experienced readers will be able to tell it's human. Only less experienced ones will use all or nothing rules. Em dashes just increase the likelihood that the text was LLM generated. They aren't proof.

That nuance is lost on the majority of anti-AI folks who’ve learned they get positive social reactions by declaring essentially everything to be AI written and condemnable.

“An em dash… they’re a witch!”… “it’s not just X, it’s Y… they’re a witch!”


> anti-AI folks who’ve learned they get positive social reactions by declaring essentially everything to be AI written and condemnable.

That's a strawman alright; all the comments complaining how they can't use their writing style without being ganged up on have positive karma from my angle, so I'm not sure the "positive social reactions" are really aligned with your imagination. Or does it only count when it aligns with your persecution complex?


You have the same problem apparently. You think it’s okay to go witch hunting and accuse people with no real evidence.

Evidently there are no experienced readers who post AI accusations.

Same weight as "there are no experienced men who'll ask a woman if she's pregnant."

Why do you care what others accuse you of?

No, it’s pretty obviously AI written. Not sure why you’re running so much interference for them…are you affiliated with this company?

This is my exact writing style - I'm screwed.

I doubt you write like that. Where can I find your writing, other than your comments, which IMO don't read like the blog post?

Justify your doubt.

This is just writing; terse maybe and maybe not grammatically correct, but people write like that.

It's not just terseness, it's the rhythm and "it's not x, it's y".

In fact, the latter is the opposite of terseness. LLMs love to tell you what things are not way more than people do.

See https://www.blakestockton.com/dont-write-like-ai-1-101-negat...

(The irony that I started with "it's not just" isn't lost on me)
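For eyeballing how often the negative-parallelism construction shows up in a text, a crude counter works. The regex here is a rough heuristic of my own (it will over- and under-match), not anything from the linked article:

```python
import re

# Rough heuristic for "negative parallelism" phrasings like
# "it's not just X, it's Y" or "not only X but Y". Intended for
# eyeballing a text, not for classifying it.
NEG_PARALLEL = re.compile(
    r"\bnot (?:just|only|merely)\b[^.;]{0,80}?[,;]?\s*(?:it'?s|but|they'?re)\b",
    re.IGNORECASE,
)

def count_negative_parallelisms(text: str) -> int:
    return len(NEG_PARALLEL.findall(text))

sample = "It's not just terseness, it's the rhythm. The code is clean."
n = count_negative_parallelisms(sample)  # finds the one construction above
```

A raw count like this only becomes meaningful relative to a baseline, i.e. how often the same construction appears per thousand words in text you know a human wrote.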


> (The irony that I started with "it's not just" isn't lost on me)

But an LLM wouldn't write "It's not just X, it's the Y and Z". No disrespect to your writing intended, but adding that extra clause adds just the slightest bit of natural slack to the flow of the sentence, whereas everything LLMs generate comes out like marketing copy that's trying to be as punchy and cloying as possible at all times.


"Here’s how the bug might have manifested."

Wild how different experiences people can have. Both Google's models and Anthropic's hallucinate a lot for me, even when I try the expensive plans and with web searches, for some reason, and none of them come close to the accuracy and hallucination-free responses of ChatGPT Pro, which to me still is SOTA and has been since it was made available. But people keep having opposite experiences apparently; I just can't make sense of it.

Kagi (assistant.kagi.com) with Kimi K2.5 (their current default) has worked great for me in scenarios where the search result data is more important than the model.

I.e. what I used to use Google for and when I don't want an AI to overly summarize / editorialize result data.


Oh, that's probably because I'm a cheapskate and just use the free garbo models. I'm sure the pro version is quite good.

> AFAIK there is no exemption that says it is OK to commit war crimes if the other side does.

Of course not, but I still think the expectation that someone doesn't commit war crimes against you disappears relatively quickly when you're openly and proudly admitting you're open to violating the rules of war and saying international humanitarian law doesn't matter.

