The claim that big US companies “cannot explain the upsides” of AI is misleading. Large firms are cautious in regulatory filings because they must disclose risks, not hype. SEC rules force them to emphasise legal and security issues, so those filings naturally look defensive. Earnings calls, on the other hand, are overwhelmingly positive about AI. The suggestion that companies only adopt AI out of fear of missing out ignores the concrete examples already in place. Huntington Ingalls is using AI in battlefield decision tools, Zoetis in veterinary diagnostics, Caterpillar in energy systems, and Freeport-McMoran in mineral extraction. These are significant operational changes.
It is also wrong to frame limited stock outperformance as proof that AI has no benefit. Stock prices reflect broader market conditions, not just adoption of a single technology. Early deployments rarely transform earnings instantly. The internet looked commercially underwhelming in the mid-1990s too, before business models matured.
The article confuses the immaturity of current generative AI pilots with the broader potential of applied AI. Failures of workplace pilots usually result from integration challenges, not because the technology lacks value. The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest.
> The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest.
There was a weird moment in the late noughties where seemingly every big consumer company was creating a presence in Second Life. There was clearly a lot of strategic interest...
Second Life usage peaked in 2009 and never recovered, though it remains somewhat popular amongst furries.
Bizarrely, this kind of happened _again_ with the very similar "metaverse" stuff a decade or so later, though it burned out somewhat quicker and never hit the same levels of farcical nonsense; I don't think any actual _countries_ opened embassies in "the metaverse", say (https://www.reuters.com/article/technology/sweden-first-to-o...).
The issue is that the examples you listed mostly rely on very specific machine learning tools (which are very much relevant and a good use of this tech), while the term "AI" in layman's terms is usually synonymous with LLMs.
Mentioning the mid-1990s internet boom is somewhat ironic imo, given what happened next. The question is whether "business models mature" with or without a market crash, given that the vast majority of ML money is being poured into LLM efforts.
The comment was definitely not LLM-generated. However, I certainly did use search for help in sourcing information for it. Some of those searches offered AI-generated results, which I cross-referenced before using them to write the comment myself.
That in no way is the same as “an LLM-generated comment”.
For the benefit of external observers, you can stick the comment into either https://gptzero.me/ or https://copyleaks.com/ai-content-detector - neither is perfectly reliable, but the comment stuck out to me as obviously LLM-generated (I see a lot of LLM-generated content in my day job), and false positives from these services are actually kinda rare (false negatives are much more common).
But if you want to get a sense of how I noticed (before I confirmed my suspicion with machine assistance), here are some tells:
"Large firms are cautious in regulatory filings because they must disclose risks, not hype." - "[x], not [y]"
"The suggestion that companies only adopt AI out of fear of missing out ignores the concrete examples already in place." - "concrete examples" as a phrase is (unfortunately) heavily over-represented in LLM-generated content.
"Stock prices reflect broader market conditions, not just adoption of a single technology." - "[x], not [y]" - again!
"Failures of workplace pilots usually result from integration challenges, not because the technology lacks value." - a third time.
"The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest." - not just the infamous emdash, but the phrasing is extremely typical of LLMs.
The use of “ instead of ", two different types of hyphens/dashes, and the specific wording and sentence construction are clear signs that the whole comment was produced by ChatGPT. How much of it was actually yours (people sometimes just want an LLM to rewrite their thoughts) we will never know, but it's an output of an LLM.
I use ChatGPT daily to correct wording and I work on LLMs; the construction and the wording in your comment are straight from ChatGPT. I looked at your other comments, and a lot of them seem to be LLM output. This one is an obvious example: https://news.ycombinator.com/item?id=44404524
And anyone can go back to the pre-LLM era and see your comments on HN.
You need to understand that ChatGPT has a unique style of writing and overuses certain words and sentence constructions that are statistically different from normal human writing.
Rewriting things with an LLM is not a crime, so you don’t need to act like it is.
I've actually seen LLMs put spaces around em dashes more often than not lately. I've made accusations of humanity only to find that the comment I was replying to was wholly generated. And when I asked, there was no explicit instruction to misuse the em dashes to enhance apparent humanity.
And you're responding to a comment where the LLM has been instructed not to use em dashes.
And I'm responding to a comment that was generated by an LLM that was instructed to complain about LLM-generated content with a single sentence. At the end of the day, we're all stochastic parrots. How about you respond to the substance of the comment and not to whether or not there was an em dash. Unless you have no substance.
Posting (unmarked) LLM-generated content on public discussion forums is polluting the commons. If I want an LLM's opinion on a topic, I can go get one (or five) for free, instantly. The reason I read the writing of other people is the chance that there's something interesting there, some non-obvious perspective or personal experience that I can't just press a button to access. Acting as a pipeline between LLMs and the public sphere destroys that signal.
Have you ever listened to a bad interview? Like, really bad? Conversely, have you ever listened to a really good interview? Maybe even one of the same subject? The phrase "prompt engineering" is a bit much, but there's still some skill to it. We know this is true, because in every thread there are people saying "it doesn't work for me!" while others are saying it's the second coming.
So maybe it makes you feel smart to be a stochastic parrot that can repeat "LLM generated!111" like a model with a million parameters every time you see an em dash, but it's a lazy dismissal and it tramples curiosity.
I have no idea what you think you're responding to. I use LLMs frequently in both professional and personal contexts and find them extremely useful. I am making a different, more specific claim than the thing you think I am saying. I recommend reading my comment more carefully.
> Huntington Ingalls is using AI in battlefield decision tools, Zoetis in veterinary diagnostics, Caterpillar in energy systems, and Freeport-McMoran in mineral extraction.
But most of the AI push is for LLMs, and all the companies you talk about seem to be using other types of AI.
> Failures of workplace pilots usually result from integration challenges, not because the technology lacks value.
Bold claim. Toxic positivity seems to be all too common among AI evangelists.
> The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest.
If the financial crisis taught me anything, it is that if one company jumps off a bridge, the rest will follow. Assuming that there must be some real value "because capitalism" misses the main proposition of capitalism: companies will make stupid decisions and pay the price for them.
> The main reason for the push from the top is probably because they believe companies that don't adopt AI strategies now and ensure their programmers are familiar with AI toolsets will be at a competitive disadvantage. Note they may even believe that today's AI systems may not be much of a net benefit, but they probably see the state of the art advancing quickly so that companies who take a wait-and-see approach will be late to the game when AI is a substantial productivity enhancer.
Yes, the medical dark ages. Not the metaphorical kind either, but the kind where people die foaming at the mouth because their neighbor decided a rabies shot might cause dog autism. The CDC has apparently chosen the new strategy of “if we ignore it, maybe it will go away,” which worked wonders in the 14th century. Soon, every PTA meeting will double as a plague ward while Etsy sellers crank out crystal collars to protect your doodle from brain inflammation. The irony is perfect: the world’s richest country, armed with billion-dollar labs, yet losing a fight our grandparents solved with a needle and some common sense. When your local ER is triaging between toddlers bitten by strays and adults bitten by TikTok misinformation, maybe then people will realize the dark ages don’t arrive with torches and pitchforks. They arrive with Facebook posts and silence from the institutions that should have known better.
I thought perhaps you were joking but it turns out people are also hesitant to vaccinate their pets.
"""
About 37 percent of dog owners also believe that canine vaccination could cause their dogs to develop autism, even though there is no scientific data that validates this risk for animals or humans.
"""
You are deliberately answering a different question. The paper states:
> As Table 3 demonstrates, a large minority of dog owners consider vaccines administered to dogs to be unsafe (37%),
That is not the same thing as 84% having their pets fully vaccinated for Rabies, as you know. Obviously, a person can have their dog vaccinated but still have been hesitant about it due to feeling it is unsafe.
You have discounted the fact that it is, in many states, a civil offence not to vaccinate your dog.
I didn't "highlight a quote". I showed that the entire paper is invalid. They use poor methodology and base their model of "vaccine hesitance" on a population with self-admitted high rates of vaccination.
Your analysis is incorrect. High rates of vaccination do not rule out beliefs that the vaccination is unsafe, especially in areas where there are severe civil penalties against those who do not vaccinate. It seems you jumped to the conclusion that the study was bogus and doubled down, but got your statistical reasoning upside down. You misquoted the study and then claimed it is bogus.
Maybe you're right, but the cited study doesn't provide evidence for your claim. I also didn't misquote the study.
They surveyed ~2200 people, of whom roughly half had pets. Of those, 84% were up-to-date on the Rabies vaccine. Another 5% didn't know.
It doesn't matter why the 84% of the pet-owning respondents were up-to-date on the vaccine, and moreover, you're just guessing. Maybe, as you say, it was "severe penalties" that compelled them. Or maybe they believe in the Rabies vaccine, and answered the other questions generally about all vaccines, since the questions weren't specific to Rabies. Maybe they were answering in theory, but felt differently for their pets.
The only thing you know is that they vaccinated their pets, which means that the entire premise of "vaccine hesitancy amongst pet owners" is unsupported by the evidence in the study.
You are claiming the study is claiming something it doesn't, and then claiming it is bogus because it can't claim what you claim it is claiming.
> A new study has found that US dog owners who harbor mistrust in the safety and efficacy of childhood and adult vaccines are also more likely to hold negative views about vaccinating their four-legged friends.
They didn't claim that 37% are not vaccinating due to mistrust. They claimed that people hold negative views about vaccines with regard to pets. It was a study about beliefs, not about actions.
The abstract states clearly:
> Canine vaccine hesitancy (CVH) can be thought about as dog owners’ skepticism about the safety and efficacy of administering routine vaccinations to their dogs. CVH is problematic not only because it may inspire vaccine refusal
It is certainly important for public policy to understand the trajectory of anti-scientific thinking in the community (notwithstanding that this thinking is now running the USA). If today a large percentage of people are expressing vaccine hesitancy, then next week maybe they stop vaccinating. This is a worthwhile attempt to understand what is going on. Calling it bogus is just wrong. They are also aware of their own limitations.
> Limitations & discussion. We view this work as an important first step in understanding canine vaccine hesitancy and its public health consequences. We recognize, of course, that some measures employed in this study are imperfect. For example, our measure of canine rabies vaccine uptake is self-reported, and therefore may be subject to inaccurate and/or biased recall. Correspondingly, we see future efforts to clinically validate self-reported vaccine uptake measures (as is often done with human vaccines; see [2] as an
But you just did a drive-by "it's bogus" without even trying to understand the context.
You literally quoted (with a “>”) and replied to a comment with a highlighted (using italics) quote from the article and told the commenter they are incorrect.
Maybe so, but that’s not clear from your comment, which is why I asked for clarification. Clearly you’re unwilling or unable to provide that, so there’s nowhere else for this conversation to go.
The “40,000-year-old writing” headline is a bit ahead of the evidence. What researchers have actually found is that Ice Age caves are full of recurring abstract marks — dots, lines, Y-shapes, grids — that show up across sites and cultures. That’s fascinating on its own, because it suggests early humans were tracking things and passing on symbolic systems.
A recent paper argued those dots and Ys might form a kind of lunar calendar tied to animal life cycles. That’s where the headlines about “the earliest written language” came from. But specialists in Paleolithic art have already pushed back pretty hard: the associations are often mis-read, the counts don’t fit neatly, and there’s no sign of syntax or actual language encoding. At best it looks like a notation system or proto-writing, not “writing” in the Mesopotamian sense.
So the consensus is: yes, Ice Age people were doing more with symbols than just decorating caves — but no, we haven’t pushed the invention of writing back 35,000 years. The earliest real writing systems still show up in Sumer and Egypt ~5k years ago. These cave signs are another reminder that symbolic thought is very old and very human — but we shouldn’t confuse notation with language.
I have had the privilege of touring some of the French caves with these paintings, and it is a profoundly moving experience to face a wall of hand stencils that is tens of thousands of years old. From the sizes of the hands, it seems clear that a community had made them, both little children and grown adults. The marks are so incredibly old, and yet it’s easy to visualize the people holding up their hands and blowing ocher on them, leaving a shadow behind. I don’t know if it’s possible to still visit these caves today (we were on a National Geographic tour led by paleoanthropologist Don Johanson, discoverer of “Lucy”, and he has long since retired), but it’s well worth your time if you ever get a chance to see them.
Thank you for this link. This whole overpainting thing, it just... I felt like I'm the only person on the planet shocked by it. The second I arrived in Italy I was told, yes, that's the painting of the Last Supper. And I was like, wtf, it looks freshly painted. I read a plaque which said something like (due to the poor damp conditions of the room) the painting needed constant restoration.
I was like, wtf, so basically none of this shit is original?
No one else seemed to even consider this.
I'm sure people in the art world come to terms with it, but I don't think anyone outside the field even conceives of what a restoration normally is.
No. Linguists make a distinction between "writing" and "proto-writing". Generally speaking, proto-writing involves using abstract symbols for some particular use case; historically it was common for accounting. You draw a sheep and put five tally marks next to it to indicate you have five sheep, that kind of thing. Proto-writing is considerably older than true writing; among the earliest widely known examples are clay tokens known as "bulla" [1] from at least 8000 BCE.
Proto-writing can be very complex (I remember reading a book where a linguist calls mathematical notation "proto-writing"), but it's not "true writing" until it's capable of communicating essentially any idea you can express in spoken language (it's hard to write "I miss my cat Whiskers" in mathematical symbols) in at least a partly phonetic way (all true written languages are phonetic to some extent). The earliest examples of that are Sumerian cuneiform and Egyptian hieroglyphs from around 3000 BCE. Whatever this discovery is, it's not true writing.
Writing proper is a correspondence between marks and sounds to represent speech.
Early Sumerian symbols depicting kinds of goods (wheat, sheep, beer, etc.), and marks next to these to indicate quantities, are classed as proto-writing.
There's also more general use of symbols to represent ideas or groups, like a cross representing Jesus or Christianity, for example, which aren't classed as writing.
> Writing proper is a correspondence between marks and sounds to represent speech.
I don't think this is the proper definition, since by this standard, Egyptian hieroglyphs, Chinese ideograms, Norse runes and many others would not be considered writing; and any attempt to notate sign languages would not be writing by definition.
Instead, writing is a direct and consistent correspondence between marks and elements of human language (alphabets and abjads represent speech sounds, various ideographic systems represent semantics, you can have hybrids, etc.). This still ensures that tallies or just general symbols or icons are not a form of writing, but it doesn't require any phonetic aspect either.
All true written languages are phonetic to some extent, even though they may not be alphabetical the way English is. Chinese characters have phonetic components indicating tone and pronunciation, Egyptian hieroglyphs are largely syllabic, Norse runes are alphabetic like English. In Chinese and Egyptian, there are purely non-phonetic symbols representing ideas (and other things like determinatives), but most have some kind of phonetic meaning (this is my understanding at least).
There's a spectrum of how phonetic a language is, where Finnish is on one end (sounds very closely align with spelling) and Chinese characters on the other, but all written languages are phonetic to some degree.
Yes, I would just add as clarification that from my learning of (Mandarin) Chinese, each character is unambiguously associated with a syllable (including tone), so if you know the syllable corresponding to the character, you should be able to read a sentence exactly (modulo occasional changes to the tones of some syllables to make it flow better). (If we defined "phonetic" to have that meaning then Chinese is actually very phonetic!)
The converse is not the case: Chinese is very homophonic, so there are a lot of syllables (sounds) that have many different meanings and hence characters associated with them.
I should explain a little further what I mean: there are "pure" ideograms even in English, like the & and % characters. These unambiguously refer to the words "and" and "percent", but the way they're written gives no clue whatsoever to a reader on how they're pronounced. If you had gone your entire life reading and writing English but somehow never encountered them, the way they're written is entirely unhelpful. Emoji are an even more abstract example, that don't even correspond to any word at all, just usually indicating mood or something like that.
It's a common misconception that all Chinese characters are like that, but my understanding is that while there are many, many more ideograms in Chinese than in English, something like 80% of the characters do in some way indicate pronunciation (even if it's just something like tone) or use the "rebus principle" or something like that. So again, it's a spectrum, but all writing systems are phonetic to some extent. Humans wouldn't be able to use them to communicate effectively otherwise.
I will say that I'm not a linguist, nor can I read or write a word of Mandarin Chinese, and will happily stand corrected. This is just what I've picked up from reading books about the history and development of writing.
Norse runes are just an alphabet. As long as your language uses the same set of sounds, you can write with them today.
People get confused about them because there's a tie-in with the old Germanic religions where they're used by the gods for divination, and the neopagans have adopted them for that purpose. But they're really just a set of alphabets optimized for carving into wood.
> writing is a ... correspondence between marks and elements of human language
Yes, this is what I had in mind by saying "speech", but you're right: the connection to language is the essential part, and sound just happens to be the paradigmatic medium of human language.
Written numbers and math were born out of accounting. Who owed how much to whom. This could be similar to that, although I think the societies of that time were collectivist enough to not worry much about debt.
> A recent paper argued those dots and Ys might form a kind of lunar calendar tied to animal life cycles. That’s where the headlines about “the earliest written language” came from. But specialists in Paleolithic art have already pushed back pretty hard: the associations are often mis-read, the counts don’t fit neatly, and there’s no sign of syntax or actual language encoding. At best it looks like a notation system or proto-writing, not “writing” in the Mesopotamian sense.
> So the consensus is: yes, Ice Age people were doing more with symbols than just decorating caves — but no, we haven’t pushed the invention of writing back 35,000 years. The earliest real writing systems still show up in Sumer and Egypt ~5k years ago. These cave signs are another reminder that symbolic thought is very old and very human — but we shouldn’t confuse notation with language
Okay, so what's the bar for "written language" then?
The specialists in this field appear to be using some criteria for "written language", but it is not clear to me how those criteria might accept maths symbols, or maybe Roman numerals used as counters, as a written language while discarding a notation system.
Personally, I would consider any form of intentional knowledge transmission a "written language".
Scratch a line onto a rock each time you see a full moon? That's written language.
Paint handprints on a cave wall? That's written language too.
The discovery doesn't fail your criteria; however, I don't think most people would agree that hand prints and tally marks are written language. It certainly doesn't pass the sniff test for me.
Well, for me the intention matters; is the intention communication (and yes, art is communication as well - it communicates the artist's feelings at the time)?
If the intention is to communicate how many moons have passed, why are tally marks not considered written language?
We talk about the language of mathematics, and no one bats an eye, but tally marks still fall into the category of the language of mathematics.
I see the stated criteria as a distinction without a difference: the intentional mark `5` signifying how many moons have passed is somehow different to the intentional mark `|||||`, but no one is explaining what the difference is.
I don't think linguists would consider Arabic numerals on their own to be a language either. The main distinction, as I understand it, is having something like a grammar, i.e. a set of consistent rules about how to arrange the symbols so they have meaning beyond just the sum of the meanings of each individual symbol. So no matter how you mark down your count, it's not language until you have some consistent pattern for signifying that the marks mean how many moons have passed, or how many people are in the local community, or something like that.
How about looking at descendants of fair-skinned Britons in sunnier climes?
Australia has the highest rate of skin cancer in the world. This is due to a combination of factors: very high levels of ultraviolet (UV) radiation, outdoor lifestyles, and a largely fair-skinned population that is more vulnerable to sun damage. Rates of both melanoma (the deadliest form of skin cancer) and non-melanoma skin cancers are higher in Australia than anywhere else. New Zealand follows closely behind.
No, the language clearly limits the restriction to those “aliens … currently outside the United States.” “Entry” in this context means seeking admission (or re-entry) to the U.S. from abroad, under a new petition or visa that starts outside. It is tied to new petitions, and specifically those where the beneficiary is abroad.
“(b) The Secretary of Homeland Security shall restrict decisions on petitions not accompanied by a $100,000 payment for H-1B specialty occupation workers … who are currently outside the United States …”
“Dietary Intervention to Reverse Carotid Atherosclerosis” (Circulation, 2010) — participants were randomized to low-fat, Mediterranean, or low-carbohydrate diets; carotid arteries were imaged with 3-D ultrasound cross-sections at baseline and follow-up. After 2 years there was a ~5% regression in carotid vessel-wall volume, with similar regression across all diets (i.e., including the low-carb arm). [1]
Volek et al., 2009 (Metabolism) — 12-week very-low-carb vs low-fat trial; ultrasound of the brachial artery showed improved post-prandial flow-mediated dilation (a marker of endothelial function/inflammation) in the low-carb group. Not carotid 3-D slices, but still vascular imaging with before/after comparisons. [2]
[1] https://www.ahajournals.org/doi/pdf/10.1161/CIRCULATIONAHA.1...
[2] https://lowcarbaction.org/wp-content/uploads/2019/12/Volek-e...