I'm appalled by how dismissive and heartless many HN users seem toward non-professional users of ChatGPT.
I use the GPT models (along with Claude and Gemini) a ton for my work. And from this perspective, I appreciate GPT-5. It does a good job.
But I also used GPT-4o extensively for first-person non-fiction/adventure creation. Over time, 4o had become quite good at this. The forced upgrade to GPT-5 has, so far, been a massive reduction in quality for this use case.
GPT-5 forgets, misunderstands, or mixes up details about characters that were provided a couple of messages prior, while 4o got these details right even when they hadn't been mentioned in dozens of messages.
I'm using it for fun, yes, but not as a buddy or therapist. Just as entertainment. I'm fine with paying more for this use if I need to. And I do - right now, I'm using `chatgpt-4o-latest` via LibreChat but it's a somewhat inferior experience to the ChatGPT web UI that has access to memory and previous chats.
Not the end of the world - but a little advance notice would have been nice so I'd have had some time to prepare and test alternatives.
A lot of people use LLMs for fiction & role playing. Do you know of a place where some of these interactions are shared? The only ones I've found so far are, well, over-the-top sexual in nature.
And I'm just kind of interested _how_ other people are doing all of this interactive fiction stuff.
Sure. Here is the fanfiction book I've been using LLMs to help me write. Helps a lot with improving prose and identifying plot holes. It's much better than a rubber duck for talking out how to improve a chapter and write plausible story arcs. It's not great at wordsmithing; I find it errs on the side of too many similes and metaphors, so I just delete some of them as I copy the suggestions over into my draft.
Thank you for your feedback. I've rewritten the "Foreword" to apply your suggestions and will keep them in mind when writing and improving chapters in the future.
I have some science-fiction story ideas I'd love to flesh out. However, it turns out that I'm a terrible writer, despite some practice at it. Also, I can never be surprised by my own writing, or entertained by it in the same way that someone else's writing can.
I've tried taking my vague story ideas, throwing them at an AI, and getting half a chapter out to see how it tracks.
Unfortunately, few if any models can write prose as well as a skilled human author, so I'm still waiting to see whether a future model can output customised stories on demand that I'd actually enjoy.
I am not sure which heartless comments you are referring to, but what I do see is genuine concern for the mental health of individuals who seem to be overly attached, on a deep emotional level, to an LLM. That does not look good at all.
Just a few days ago another person on that subreddit was explaining how they used ChatGPT to talk to a simulated version of their dad, who had recently passed away. At the same time, there are reports that may indicate LLMs triggering actual psychosis in some users (https://kclpure.kcl.ac.uk/portal/en/publications/delusions-b...).
Given the loneliness epidemic, there are obvious commercial reasons to make LLMs feel like your best pal, which may result in these vulnerable individuals becoming more isolated and deeply addicted to a tech product.
The place we still call America for illogical reasons is a broken society seemingly in the final stages of its existence. Of course broken people will glom onto yet another digital form of a drug that gives an impression of at least suppressing the pain they feel for reasons they do not understand.
It is little more than the Rat Park Experiment, only in this American version, the researchers think giving more efficient and various ways of delivering morphine water is how you make a rat park.
I don't live in this broken place you speak of and don't feel the pain you mention.
Outside of work I sometimes use LLMs to create what amounts to infinitely variable Choose Your Own Adventure books, just for entertainment, and I don't think that's a problem.
Yes. I understand that. Most of us are totally detached and sorely unaware of what goes on outside of the bubble we are in. Very few of us actually try to find out what is going on outside the walls of Versailles.
Personally, I prefer GPT-5 over 4o. It does a good job. But like many others, I don't like the sudden removal, because it also removed O3, which I sometimes use for research-based tasks. GPT-5's thinking mode is okay, but I feel O3 is still better.
F# has diverged from OCaml a bit, but they're still very similar.
I mentioned in a top-level comment that F#'s "lightweight" syntax is basically what I want when I use OCaml. I know ReasonML is a thing, but if I'm writing OCaml I don't want it to look more JavaScripty - I prefer syntax like "match x with" over "switch(x)" for pattern matching, for example.
I know some people dislike the way F#'s newer syntax makes whitespace significant, and that's fair. But the older verbose syntax is there if you need or want to use it. For example, something like
let things =
    let a = 1 in
    let b = 2 in
    let c = 3 in
    doSomething a b c
Thank you! I knew this, but of course blanked on it when I came up with an OCaml example.
There are a few other places where I prefer F#'s syntax, but overall that's not the reason I'd pick F# over OCaml for a project. It's mostly about needing to integrate with other .NET code or wanting to leverage .NET libraries for specific use cases.
Can't lose either way - they're both a pleasure to work with.
I like OCaml a lot - but I think I like F# a little more. They're very similar, since F# is essentially OCaml running on the .NET VM.
I know some people dislike the fact that F# lacks OCaml's functors, but I can see why they weren't included. Due to the way F# integrates .NET classes/objects, I can accomplish more or less the same thing that way. In some ways I prefer it - a class/type full of static methods has the same call syntax as a module full of functions, but gives me the option of overloading the method so it'll dispatch based on argument types. Having everything that's iterable unified under IEnumerable/Seq is nice, too.
Having said all that, I still enjoy OCaml a ton. One thing I wish I could have is F#'s updated lightweight syntax brought over to OCaml. I think ReasonML is great, but after using it for a while I realized that what I really want isn't OCaml that looks more like JavaScript. What I want is OCaml that looks like OCaml, but a little cleaner. F# gives me that, plus, via Fable, compilation to JS, TypeScript, Python, and Rust. And via the improved native AOT compilation in .NET 9, I can build fast and reasonably small single-file executables.
Despite all that, I still try to dive into OCaml whenever it's a decent fit for the problem I'm trying to solve. Even if it's a little quirky sometimes, it's fun.
IMO, a language without proper modules, GADTs, or (now) an effect system can't plausibly be described as "essentially OCaml". Your point about ad hoc polymorphism (outside of an object system) is another good reason why F# really is not OCaml on .NET.
Not to mention the difference of being trapped in (or having the luxury of, as you prefer) .NET vs. compiling to native binaries.
For the tangible ones, it's often relatively easy to get financing that lets you spread the payment over the asset's useful life, which solves most of the cash flow issues you get if you pay in cash up front but have to spread the expense over many years.
It's a lot easier to get financing for a tangible asset like an oven or a delivery truck, which mitigates the cash flow issue.
Sure, you can only deduct a certain percentage of the asset's value as an expense each year, but your cash expenditures to pay for it are also spread over a multi-year period.
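To make that concrete, here's a toy Python calculation with made-up numbers (a $50,000 asset, 5-year straight-line depreciation, a 5-year loan at 6% with annual payments - all assumptions, not figures from any tax code):

```python
# Hypothetical example: financing spreads the cash outlay so it roughly
# tracks the depreciation expense, instead of a $50k hit in year one.
price, years, rate = 50_000, 5, 0.06

# Straight-line depreciation: equal deductible expense each year.
depreciation = price / years

# Standard annuity formula for the annual loan payment
# (assumes simple annual compounding, payment at year end).
payment = price * rate / (1 - (1 + rate) ** -years)

print(f"annual depreciation expense: ${depreciation:,.0f}")  # $10,000
print(f"annual loan payment:         ${payment:,.0f}")
```

The yearly cash outflow (~$11,870 here, principal plus interest) lands in the same ballpark as the ~$10,000 yearly expense, which is the cash-flow alignment the comment describes.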
Neat! I've written streaming Markdown renderers in a couple of languages for quickly displaying streaming LLM output. Nice to see I'm not the only one! :)
It's a wildly nontrivial problem if you're trying to be strictly forward-moving and want to minimize your buffer.
That's why everybody else either re-renders (such as rich) or relies on the whole buffer (such as glow).
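To illustrate why forward-only rendering forces some buffering, here's a toy Python sketch (my own illustration, not Streamdown's actual code) handling just inline code spans: plain text can be emitted the moment it arrives, but once you see a backtick you can't render anything until the closing delimiter shows up:

```python
def stream_render(chunks):
    """Forward-only toy renderer: emit text immediately, but buffer a
    backtick-delimited span until its closing backtick arrives."""
    out = []
    buf = None  # None = passthrough mode; str = inside a potential code span
    for chunk in chunks:
        for ch in chunk:
            if buf is None:
                if ch == "`":
                    buf = ""              # start buffering at the opening tick
                else:
                    out.append(ch)        # safe to emit immediately
            else:
                if ch == "`":
                    out.append(f"[code:{buf}]")  # stand-in for real styling
                    buf = None
                else:
                    buf += ch
    if buf is not None:
        out.append("`" + buf)  # unterminated span: flush it raw at EOF
    return "".join(out)

print(stream_render(["hi `co", "de` there"]))  # hi [code:code] there
```

Multiply this by emphasis, links, tables, and fences that can all straddle chunk boundaries, and the "minimize the buffer" constraint gets hairy fast.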
I didn't write Streamdown for fun - there were genuinely no suitable tools that did what I needed.
Also various models have various ideas of what markdown should be and coding against CommonMark doesn't get you there.
Then there are other things. You have to check individual character widths and the language family to do proper word wrap. I've seen a number of interesting tmux and alacritty bugs while doing multi-language support.
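The width issue in a nutshell (a rough Python sketch of the idea, not Streamdown's logic): East Asian wide characters occupy two terminal cells, so wrapping by codepoint count alone misplaces line breaks:

```python
import unicodedata

def display_width(s: str) -> int:
    # Rough terminal cell width: East Asian Wide ("W") and Fullwidth ("F")
    # characters take two cells; everything else counts as one.
    # (Real renderers also have to handle combining marks, emoji, etc.)
    return sum(2 if unicodedata.east_asian_width(ch) in ("W", "F") else 1
               for ch in s)

print(display_width("hello"))  # 5 codepoints, 5 cells
print(display_width("日本語"))  # 3 codepoints, 6 cells
```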
The only real liberty I take is rendering h6 (######) as muted grey.
Compare:
for i in $(seq 1 6); do
    printf "%${i}sh${i}\n\n-----\n" | tr " " "#";
done | pv -bqL 30 | sd -w 30
to swapping out `sd` with `glow`. You'll see glow's lag - waiting for that EOF is annoying.
Also try `sd -b 0.4` or even `-b 0.7,0.8,0.8` for a nice blue. It's a bit easier to configure than the usual catalog of themes that requires recompilation after modification, as with Pygments.
Not yet - I just created a browser-wasm project using the .NET CLI and then experimented with it. I spent a bunch of time digging through .targets files to see what optimization options were available.
I plan to put the source on GitHub shortly so others can use it as an example. Just need to clean things up a little first.
The fun thing about watching a Sprint video for the first time is that you just assume it must be sped up. Eventually you realize that no, it really does accelerate that quickly.