You are very wrong. I polish my non-native English with Ollama and DeepSeek.
The content is my own, like on my podcast (datascienceathome.com), which has existed since long before GPT was even a name.
The syntax does kind of mimic the rhythm of AI-generated text, but the ideas and flow appear original to me. I thought it was a good article and appreciate the points you make, which I would not otherwise have known about.
I also think it bears pointing out that you chose not to refer readers to your podcast which is the exact opposite of what an AI bot would have done.
If the writing style exhibited here, where each paragraph has a different variation of GPT's "a but b" standby, such as "It's a, but this time, b.", "Nothing says a like b.", and "This isn't speculation, it's a, but b.", really is your organic writing style going back to the 2010s, you should write an article about being the guy who naturally wrote in GPT voice, all the time, before GPT was a thing.
There is a huge profit opportunity in marketing yourself as a unique example for linguists to pin down and dissect.
Thank you for noticing! I’ll keep writing my thoughts and express them in clearer English with the tools I (we) currently have. One day, when Italian becomes a global lingua franca, I’ll wow you even more with my fluency :D
DeepSeek was trained on ChatGPT outputs, so yes, that writing style runs throughout the post you claim to have authored (I almost said "your writing style," which it is not).
Share the original in Italian and your prompt, because my suspicion is that you had a half-baked idea and the LLMs did all the rest.
Cool. If I wanted a shitty LLM-generated essay on this topic, I would have just queried the model myself. Do you think you are bringing anything of value to the work? Have a nice life as a copy-paste machine.
Exactly. What I consider a patch, and definitely a symptomatic solution, is "solved" via agents that search the web (e.g., asking for this year's weather forecast: the LLM cannot know which year I am referring to, except via a web search).
Generally speaking, LLMs lack direct temporal awareness. Standard models do not model the flow of time unless explicitly trained to. Some models can encode a notion of time when trained on sequential video data, relying on external encoders to provide temporal structure, but that is a very narrow application (video, in this example). It cannot be considered a generic awareness of time as a concept through which facts can change.
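To make the "patch" concrete, here is a minimal sketch of what those agent frameworks effectively do: inject the current date into the prompt so the model can resolve relative references like "this year." The function name and prompt format are mine, for illustration, not from any specific framework; the point is that the temporal grounding comes from outside the model.

```python
from datetime import datetime, timezone

def build_prompt(user_question: str) -> str:
    """Prepend today's date so the model can resolve relative
    time references ("this year", "last month", ...).
    This is a symptomatic patch: the model itself still has
    no internal notion of the passage of time."""
    today = datetime.now(timezone.utc).date().isoformat()
    return f"Current date: {today}\nUser: {user_question}"

prompt = build_prompt("What is the weather forecast for this year?")
print(prompt)
```

Without that injected line (or a web search supplying the same fact), the model can only guess the year from its training cutoff, which is exactly the symptom being patched rather than solved.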
Fair points, and I appreciate the depth of your critique. You’re right—broad appeals to "moral evolution" can feel hollow without actionable solutions or a deeper analysis of the mechanics involved. My intention wasn’t to provide an exhaustive exploration but to spark a discussion, which your comment does beautifully.
Just to clarify:
I don’t consider myself a Democrat (in the context of US elections). In fact, if I were American, I definitely wouldn’t have voted for Harris.
I referenced Trump and Meloni as case studies simply because they’re among the most recent examples. That said, I firmly believe two things:
1) The masses have an incredibly short memory.
2) It’s not just the right that engages in this kind of manipulation.