Something that amazed me around the time Cider V was first introduced is that some folks have been at Google for so long, they have never used VSCode, and didn’t recognize the UI at all.
100% up-to-date VSCode is still pretty trashy, IMO. It's a mixed bag of plugins without cohesion, with no awareness of the code other than what that mixed bag attempts to provide (poorly). It is, and always has been, little more than a progressively more complicated Möbius loop of autocompletion-oriented UI experimentation.
Ah, I feel so much better now. ;)
VSCode never made it past the first 10% of what Eclipse did (does). VSCode did succeed at being something for everybody, available everywhere.
What do you find appealing about GCP? I occasionally hear positive sentiment like this but don’t entirely understand the reason, mostly because I haven’t used non-GCP clouds professionally. Is it just the least bad of all the big clouds?
In this specific case — trace IDs (an example of which is [1]), where the equivalent of UUIDs is explicitly used to avoid coordination — it’s hard to imagine how you’d reliably detect and retry.
A lot of databases have a uniqueness constraint that is basically a register-level compare-and-replace. Others have an if_not_exists, which is nearly the same. If you're not targeting a serious-throughput use case, that's enough. If you are, there are lots of solutions/alternatives that completely avoid coordination. On the other hand, maybe tracing protocols are robust to out-of-order delivery. If that won't do, then sequence numbers tied to monotonic sequence IDs should be plenty. If not, then I'd need some very serious conversations to be convinced you're not wasting everyone's time.
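To illustrate the "uniqueness constraint as compare-and-replace" point, here's a minimal sketch using SQLite's INSERT OR IGNORE (its if_not_exists-style insert). The table and column names are made up for illustration; real tracing backends obviously differ.

```python
import sqlite3

# Hypothetical span store keyed on (trace_id, span_id); the schema is
# invented for this example, not taken from any real tracing system.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE spans (trace_id TEXT, span_id TEXT, payload TEXT, "
    "PRIMARY KEY (trace_id, span_id))"
)

def record_span(trace_id, span_id, payload):
    # INSERT OR IGNORE silently drops a row whose key already exists,
    # so a retry (or duplicate delivery) of the same span is harmless.
    cur = conn.execute(
        "INSERT OR IGNORE INTO spans VALUES (?, ?, ?)",
        (trace_id, span_id, payload),
    )
    return cur.rowcount == 1  # True only for the first writer

first = record_span("t1", "s1", "a")
dup = record_span("t1", "s1", "b")
print(first, dup)  # True False
```

The uniqueness constraint does the "detect" half for free: a collision or redelivery surfaces as rowcount 0, and the caller can decide whether to retry with a fresh ID or drop the duplicate.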
To me it’s more like being a super micro-managing TL that would annoy the hell out of their human reports. It comes with all the pros and cons of micro-management.
I wonder from time to time whether you can decide the best “schema shape” beforehand, i.e. before you can run real workloads that stress the memory implications of such choices. This would be very useful if you are trying to decide the boundary of some public-facing API but for whatever reason can’t run benchmarks (lack of implementation, data, time, etc.).
Without that, if you try to suggest a transformation like this when the schema is first conceived, it will likely be considered premature optimization.
This tells me a real developer wrote the docs, rather than someone who has good English writing skills but is less technical.
> they could have even used their own LLM to edit their documentation to fix grammar issues
In my experience, companies that do this rarely stop at using LLMs to fix grammar issues. It becomes full-on LLM-speak quite fast, especially if there isn’t a native English speaker in the room who can discern what’s good and bad writing.
Yep - it's nearly impossible to assign profit to those things - we have X revenue from Android licenses, but what's the cost of an Android license? Is it all the R&D that goes into UI or hardware research? What's the cost of a YouTube ad?
I’d even argue that most things operated by tech don’t need 24x7x365 availability. If it’s life-and-death, then yes, make it super reliable and available. Otherwise, bring back scheduled downtime, please.
I’ve always thought of this as more of a meme than a serious point. Trivially, aren’t most of the network protocol RFCs a “sufficiently comprehensive spec that isn’t code”?