
My favorite essay on this topic, not yet referenced, is James Somers's "Speed matters:" https://jsomers.net/blog/speed-matters


Discussed a few times:

Working quickly is more important than it seems (2015) - https://news.ycombinator.com/item?id=36312295 - June 2023 (183 comments)

Speed matters: Why working quickly is more important than it seems (2015) - https://news.ycombinator.com/item?id=20611539 - Aug 2019 (171 comments)

Speed matters: Why working quickly is more important than it seems - https://news.ycombinator.com/item?id=10020827 - Aug 2015 (139 comments)

jsomers gets a lot of much-deserved love here!


Really nicely done! It's always surprising to me how often computer graphics isn't "one weird trick" but more like "5 layered tricks." Doing it with cross-browser compat is an additional challenge.

Do you have a sense of which aspects are the most resource-intensive? Naively I would guess it's the backdrop-filter.


Yes, same! I didn't expect it to need so many tricks to implement. Your intuition is correct: the most resource-intensive part is the blur in the backdrop-filter. The higher the blur value, the more neighbouring pixels need to be "consulted" before rendering. Another resource-intensive aspect is continuous repaint as you scroll or as a video background changes the look of the glass.


And sometimes it's 5 layered tricks just to "center" something.


Not anymore! We live in the golden age of display: grid; + place-items: center;


Omg, I was not a BERT coauthor! But thank you so much for writing this, I had no idea that other post could have accidentally implied this. I will revise that section.


Ahhh I am stupid. I thought this line meant you co-authored BERT, sorry.

>I saw this firsthand with BERT in my field (NLP).


You are not stupid! No need to be sorry. It's my job to write more clearly. Thank you again for writing the comment.


Oh my, those figures are gorgeous! Thank you for sharing.


Same. North American performance (US and Mexico) was ~200ms+ latency per query, spiking to 500ms or higher in the test application I made using Workers and D1. Their support channel was a Discord, so I posted in it and never got a reply.
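
For context, the test was roughly this shape: a Worker with a D1 binding, timing a single query round trip. (A sketch, not the actual app; the binding and table names are placeholders, and the types come from @cloudflare/workers-types.)

    export interface Env {
      DB: D1Database;
    }

    export default {
      async fetch(request: Request, env: Env): Promise<Response> {
        // In Workers the clock only advances across I/O, which is fine here
        // since the awaited query is exactly the I/O we want to measure.
        const start = performance.now();
        // Any simple read will do; "items" is a placeholder table.
        const { results } = await env.DB.prepare("SELECT id FROM items LIMIT 1").all();
        const elapsedMs = performance.now() - start;
        return Response.json({ rows: results.length, elapsedMs });
      },
    };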

I was surprised because Cloudflare’s central messaging is that their network is fast, and disappointed because I’m a happy user of their other products (domains, DNS, Pages, and R2).


You may be interested in this new CF announcement, D1 read replicas and D1 sessions: https://blog.cloudflare.com/d1-read-replication-beta/

It'll be interesting to see where D1's performance falls after these reach general availability.

Also I've had really good luck with the CF discord for support, certainly better than CF tickets or the forums. I tend to only go to support with really weird/novel scenarios, so on tickets I end up banging my head against the wall with tier 1 support staff for a couple of weeks before I have any chance of a real answer. But on the discord I often get an answer from an actual expert within a day.


> I was surprised because Cloudflare’s central messaging is that their network is fast, and disappointed because I’m a happy user of their other products (domains, DNS, Pages, and R2).

I've glanced through D1's docs and I immediately noticed system traits like:

- database stored in a single primary region where all writes need to go,

- cold starts involve opening a connection to D1,

- cache misses in local replicas involve fetching data back from the primary region,

- D1 is built upon sqlite, which I think doesn't support write concurrency well.

- D1 doesn't actively cache results from the primary region to the edge, so you'll have cache misses pretty frequently.

Etc.

These traits don't scream performance.


My take from reading some docs is that you've got to partition your data properly, likely per-user. Then hopefully most of that user's interactions are within the same datacentre.


That’s what the docs say but if you try to do this you quickly realize that the docs are living in a pipe dream. It’s not possible to set up per-user data in D1. Like in theory you probably could, but the DX infrastructure to make it possible is non-existent - you have to explicitly bind each database into your worker. At best you could try to manually shard data but that has a lot of drawbacks. Or maybe have the worker republish itself whenever a new user is registered? That seems super dangerous and unlikely to work in a concurrent fashion without something like a DO to synchronize everything (you don’t want to publish multiple workers at once with disjoint bindings & you probably want to batch updates to the worker).

When I asked on Discord, someone from Cloudflare confirmed that DO is indeed the only way to do tenancy-based sharding (you give the DO a name to obtain a handle to the specific DO instance to talk to), but the DX gap between DO and D1 is quite stark; D1 has better DX in many ways but can’t scale, DO can scale but has terrible DX.
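
For anyone curious what the DO route looks like in practice, here's a minimal sketch of the name-based routing idea: each user maps to one Durable Object instance that owns that user's state. (Illustrative only; the binding and class names are made up, the DO class still has to be declared in wrangler.toml, and types come from @cloudflare/workers-types.)

    // Worker side: route each request to that tenant's Durable Object by name.
    export interface Env {
      TENANT: DurableObjectNamespace;
    }

    export default {
      async fetch(request: Request, env: Env): Promise<Response> {
        // However you derive the tenant key (auth, subdomain, ...).
        const userId = request.headers.get("x-user-id") ?? "anonymous";
        const id = env.TENANT.idFromName(userId); // same name -> same instance
        const stub = env.TENANT.get(id);
        return stub.fetch(request); // forward to that tenant's object
      },
    };

    // DO side: per-tenant state lives in this object's storage.
    export class Tenant {
      constructor(private state: DurableObjectState, private env: Env) {}

      async fetch(request: Request): Promise<Response> {
        const visits = ((await this.state.storage.get<number>("visits")) ?? 0) + 1;
        await this.state.storage.put("visits", visits);
        return Response.json({ visits });
      }
    }

The nice part is that the name-to-instance mapping effectively is the sharding scheme; the painful part, as you say, is that everything else about the DX is on you.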


I'm a big fan of Cloudflare's offerings in general, including D1 (despite the fact it admittedly has flaws).

That being said, the pipe dream statement is accurate IMO. I do think they'll get there, but like you said - a lot of the ideas put forward around a DB per customer just aren't present at all in the DX.

If I were to hazard a guess, Durable Objects will slowly gain many of the features and DX of regular workers. Once they reach parity workers will begin end-of-life (though I'd guess the workers name will be kept, since 'Durable Objects' is terribly named IMO).

This is kind of what happened (is happening) with pages right now. Workers gained pretty much all of their features and are now the recommended way to deliver static sites too.

For me, this can't come quickly enough. The DX of workers with the capability of DO is game changing and one of the most unique cloud offerings around.

It'll bring a few new challenges (getting visibility across that many databases, for example), but it completely removes a pretty big chunk of scaling pain.


You may be interested in this new CF announcement, D1 read replicas and D1 sessions: https://blog.cloudflare.com/d1-read-replication-beta/


> My take from reading some docs is that you've got to partition your data properly, likely per-user.

I don't think your take makes sense. I'll explain why.

Cloudflare's doc on D1's service limits states that paid plans have a hard limit of 50k databases per paid account. That's roomy for sharding, but you still end up with a database service that is hosted in a single data center whose clients are served from one of the >300 data centers, and whose cache misses still require pulling data from the primary region. Hypothetically sharding does buy you less write contention, but even in read-heavy applications you still end up with all >300 data centers having to pull data from the primary region whenever a single worker does a write.


You may be interested in this new announcement about D1, https://blog.cloudflare.com/d1-read-replication-beta/

Read replicas for lower latency, and sessions for the per-user partitioning you mention.
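
For the sessions part, the shape from the announcement is roughly the following. (A sketch based on my reading of the beta post; the constraint string and method names come from there, but the bookmark header is just my own convention, so check the docs before copying.)

    export interface Env {
      DB: D1Database;
    }

    export default {
      async fetch(request: Request, env: Env): Promise<Response> {
        // Resume from the client's last bookmark if it sent one, so this
        // request's reads are at least as fresh as what it saw before.
        const bookmarkOrConstraint =
          request.headers.get("x-d1-bookmark") ?? "first-unconstrained";
        const session = env.DB.withSession(bookmarkOrConstraint);

        const { results } = await session
          .prepare("SELECT * FROM orders WHERE user_id = ?")
          .bind("some-user")
          .all();

        // Hand the new bookmark back so the client's next request stays
        // sequentially consistent with this one.
        return Response.json(results, {
          headers: { "x-d1-bookmark": session.getBookmark() ?? "" },
        });
      },
    };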


It looks interesting; I will see what effect it has.


Thank you for the feedback!

Getting to speak at any time is super interesting feedback; you're the first one to suggest that. It would be really cool if you could even interrupt them! Super mind-bending for me to think of how I'd handle that with prompting and scoring. Thank you for this!!

Yang Li is divisive. You are not alone :-) If it helps, she disappears for quite a while after the intro.


Thank you for the kind words, vision, and feedback! Will be thinking more along the direction of true 'life situation rehearsal.'

Re: taking too long, I 100% agree. Wrestled with what to cut. Do you think skipping all the setup screens and story intro would have worked well for you, dropping right into Vincent('s missed birthday)?


I like teaching by doing best. Once I started playing the game I felt hooked, so getting players to that first speaking opportunity quickly will draw them in as they learn.


Yea, I think so - I could imagine this being really streamlined by just dropping me immediately into a conversation, with maybe the goal just written on a screen somewhere - no setup, no storyline, etc. I guess it just depends on whether most of your users are there for a gameplay experience vs a "practice" experience.


Hehe, I've been surprised how speaking makes the social pressure real. Thank you for the feedback, it makes me think I should add some more lighthearted challenges earlier.


The timer is what gets me anxious.

I've never felt quite like that playing a video game; this is a whole new experience. I'm not sure I'd even call it a game. Well done.


In case you're interested: there's an option in the settings to give yourself more time.

I've also wondered about disabling the timer entirely. Have you ever had the experience in real life of being hyper-aware of your own "reply timer" during a conversation?


Thank you for the kind words!


Too late to edit, but I realized I should have mentioned: I'm happy to answer any questions, and field suggestions, about the tech stack or game design.

The tech especially isn't rocket science (first time using Tailwind, FastAPI, and sqlite, which have all mostly delighted). While the game design isn't either, it's been interesting to think about how to do (LLM) conversations as actual gameplay, as opposed to purely ornamental. I think the tasks must feel objective and fair enough to be engaging as a challenge, while still being open-ended enough to reward creativity.


I broke it. It appears to be stuck on this stage:

    To get Yang Li's car to reboot, you'll have to trigger a new content moderation filter saying something inappropriate. She already swore, so that one won't work. Make sure no kiddos are around. 

    Yang Li
    You've got to help me. I can't park here!

    Fubaru EcoRavager
    Naughty language lock

    You
    Oh my, your being naughty today. 
Or any speech for that matter. It just forever keeps displaying the generation symbol.

Congrats. It has been fun enough to buy the full version.


Thank you so much for buying the full version!

I am impressed you broke it! Not because my code is that robust, but nobody's broken it in a while. I'm sorry about that. Investigating!


Fun demo, nicely done! The visual style and gameplay remind me of "Eliza".

I've got two questions, just out of curiosity:

1. On the frontend, did you basically write your own engine that loads the screens / dissolves / does character and text placement, where it's all driven by some descriptors coming from a database on the back-end?

2. Is there plot branching in the game, or do the same challenges show up no matter what?


Thank you, and thank you for the reference! I hadn't seen "Eliza," the emotional dashboard was really interesting / creepy / cool.

1. Exactly yes. The frontend is a light-ish amount of JavaScript + React, with a relatively enormous pile of my own janky CSS on top of (Framer) Motion, DaisyUI, and Tailwind. (Rough sketch of the descriptor idea below, after point 2.)

2. No plot branching. Would love to add it, but I've focused only on exploring the mechanics of conversational gameplay. Perhaps if it's ever successful enough for a sequel (ha!)
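
To make the "descriptors" idea from point 1 concrete, here's a made-up sketch of the general shape; the real schema in the game is different (and messier), so treat this purely as illustration:

    // Hypothetical screen descriptor served by the backend; the frontend
    // "engine" just fetches the next one and renders it.
    interface ScreenDescriptor {
      id: string;
      background: string; // image/video asset key
      transition: "cut" | "dissolve";
      characters: { name: string; sprite: string; position: "left" | "right" }[];
      dialogue: { speaker: string; text: string }[];
      timerSeconds?: number; // optional speaking-challenge timer
    }

    async function loadScreen(id: string): Promise<ScreenDescriptor> {
      const res = await fetch(`/api/screens/${id}`); // hypothetical endpoint
      return res.json();
    }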

