It's maybe possible. Custom shaders use GLSL (OpenGL's shading language), so it'd require transforming them into something compatible with WebGL/WebGPU. In the Ghostty GUI we use SPIRV-Cross and glslang to transpile shaders at runtime from GLSL to Metal or to an OpenGL dialect with the features our host supports.
We'd have to look and see if those tools support WebGL/WebGPU. The next problem would be making all of that fit into the wasm blob.
Or, we may be able to skip most of this if the GLSL syntax we use is already compatible. Then no transpiling would be necessary...
Holy shit Kyle. I had no idea you were working on this. This is amazing. Your patch is also very instructive on what you need me to do for you to make this more reasonable.
I'm guessing that the performance of this relative to xterm.js right now isn't... the best, mainly because the way you're grabbing the viewport seems expensive. I'm curious though: did you run any benchmarks?
One thing you probably really want to expose is the new RenderState API: https://github.com/ghostty-org/ghostty/blob/main/src/termina... You're currently grabbing lines per row, which is probably pretty slow. The RenderState API is stateful and produces the state necessary to build a high-performance, delta-update renderer. It's what our production GPU renderers are now built on (but the API itself is compatible with any kind of renderer). It'd be better for you.
After all that, I'm very curious even at this rudimentary level what the performance on various benchmarks look like compared to xterm.js.
This is why I use the agent I use. I won't name the company, because I don't want people to think I'm a shill for them (I've already been accused of it before, but I'm just a happy, excited customer). But it's an agentic coding company that isn't associated with any of the big model providers.
I don't want to keep up with all the new model releases. I don't want to read every model card. I don't want to feel pressured to update immediately (if it's better). I don't want to run evals. I don't want to think about when different models are better for different scenarios. I don't want to build obvious/common subagents. I don't want to manage N > 1 billing entities.
I just want to work.
Paying an agentic coding company to do this makes perfect sense for me.
I've been surprised at the lack of discussion about Sourcegraph's Amp here, which I'm pretty sure is what you're referring to. It started a bit rough, but these days I find that it's really good.
So, I tried to sign up for Amp. I saw a livestream that mentioned you can sign up for their community Build Crew on Discord and get $100 of credits. I signed up and got an email saying I was accepted and would soon get the credits. But the Discord link didn't work (it was expired) and the email was a noreply, so I tried emailing Amp support. That was last Friday (8 days ago). As of today: no updated Discord link, no human response, no credits. If this is their norm, people probably aren't talking about it because they just haven't been able to try it.
Sorry we missed that email! I don’t know what went wrong there, but I just replied and will figure it out. This is definitely not the norm (and Build Crew is a small fraction of our users).
The answer to this is not Zig specific (and predates Zig). I'm guessing good blog posts exist but I don't have a link handy, sorry. If not, I agree one should be written.
Don't sleep on the StackFallbackAllocator either, which will try to use a certain amount of stack space first, and only if an allocation overflows that space will it fall back to another allocator (usually the heap, but you can specify). This can speed up code a lot when the common case is small values with periodic very large values.
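For illustration, here's a minimal sketch of that pattern via `std.heap.stackFallback` (the 256-byte buffer size and the page allocator fallback are arbitrary choices for the example, and the exact std API shifts a bit between Zig versions):

    const std = @import("std");

    pub fn main() !void {
        // Try a 256-byte stack buffer first; anything that doesn't
        // fit there falls back to the page allocator.
        var sfa = std.heap.stackFallback(256, std.heap.page_allocator);
        const allocator = sfa.get();

        // Common case: a small allocation is served straight from the
        // stack buffer, with no heap traffic at all.
        const small = try allocator.alloc(u8, 64);
        defer allocator.free(small);

        // Periodic very large value: overflows the stack buffer and
        // transparently falls back to the heap.
        const large = try allocator.alloc(u8, 4096);
        defer allocator.free(large);
    }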
> the common case is small values with periodic very large values
I find a common scenario here is parsers, where "real world" input will tend to be small but you're also exposed to adversarial input. E.g., a parser for function prototypes would typically not expect to see more than 16 arguments in the wild, but you still need to handle it without erroring in case someone decides to send a 1000-argument function through your parser.
I'll add the title is a bit of bait. I don't use the word "vibe" (in any of its forms) anywhere outside of the title.
I'm not baiting for general engagement, I don't really care about that. I'm baiting for people who are extremist on either side of the "vibe" spectrum such that it'd trigger them to read this, because either way I think it could be good for them.
If you're an extreme pro-vibe person, I wanted to give an example of what I feel is a positive usage of AI. There's a lot of extreme vibe hype boys who are... sloppy.
And if you're an extreme anti-vibe person, I wanted to give an example that clearly refutes many criticisms. (Not all, of course, e.g. there's no discussion here one way or another about say... resource usage).
Not sure exactly what you're referring to, but I'm guessing it may be this interview I did 2 years ago: https://youtu.be/rysgxl35EGc?t=214 (timestamp linked to the LLM-relevant section). I'm genuinely curious because I don't quite remember saying the quote you're attributing to me. I'm not denying it, but I'd love to know more of the context. :)
But, if it is the interview from 2 years ago, it revolved more around autocomplete and language servers. Agentic tooling was still nascent so a lot of what we were seeing back then was basically tab models and chat models.
As the popular quote goes, "When the Facts Change, I Change My Mind. What Do You Do, Sir?"
The facts and circumstances have changed considerably in recent years, and I have too!
They even used the quote as the title of the accompanying blog post.
As I say, I didn’t mean this as a gotcha or anything- I totally agree with the change and I have done similarly. I’ve always disabled autocomplete, tool tips, suggestions etc but now I am actively using Cursor daily.
Yeah understood, I'm not taking it negatively, I just genuinely wanted to understand where it came from.
Yeah this is from 2021 (!!!) and is directly related to LSPs. ChatGPT didn't even get launched until Nov 2022. So I think the quote doesn't really work in the context of today, it's literally from an era where I was looking at faster horses when cars were right around the corner and I had not a damn clue. Hah.
Off topic: I still dislike [most] LSPs and don't use them.
What do you not like about LSPs? When you do, e.g., refactoring, isn't it nice to do operations on something that actually reflects the structure of your code?
I use agents for that, and it does a shockingly good job. LSPs constantly take up resources, most are poorly written, and I have to worry about version compatibility, editor compatibility, etc. It's just a very annoying ecosystem to me.
An external agent where I can say "rename this field from X to Y" or "rewrite this into a dedicated class and update all callers" and so on works way better. Obviously, you have to be careful reviewing it, since it's not working with the same level of guarantee an LSP is, but it's worth it.
The CEO of Sourcegraph, Quinn, was pretty negative on coding agents and agentic development only about 10 months ago [0]. He had 'agentic stuff' in the 'Deader' category ('Used rarely, Reviewing it ain't worth it'). In fairness, he did say it was the future but 'is not there yet'. Since then, Sourcegraph's code-assistant plugin Cody has been deprecated and they are all in on agents and agentic development with Amp.
Yeah, I said about coding agents, “it’s obviously the future, but it’s not there yet”. That talk was from the AI Engineer conference in June 2024 (16 months ago). Coding agents have come a long way since then!
Even then, AI autocomplete is way better today than at the start of the year, never mind 2 years ago.
Not only are the suggestions better, but they are presented much less obtrusively now in IDEs. For example, it no longer ambushes you with a 500-line suggestion while you are typing; it shows a bit of what it wants to add next and seems to pull away if you are laying down code while ignoring it.
> As the popular quote goes, "When the Facts Change, I Change My Mind. What Do You Do, Sir?"
> The facts and circumstances have changed considerably in recent years, and I have too!
This should be everyone's default if we are learning. Not saying that's what the OP did here, but I hate how people dig out something that was said years ago like it's some gotcha. I think it's a bigger issue if people never change their minds about anything because it would show hubris and a lack of learning.
It only uses the LLVM backend for release builds. For debug builds, Zig now defaults to the self-hosted backend (on supported platforms, only x86_64 right now). Debug builds are directly in the hot path of testing your software (this includes building test binaries), and the number of iterations you do on a debug build is multiple orders of magnitude higher than on release builds, so this is a good tradeoff.
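As a rough sketch of overriding that default, recent Zig build APIs expose a `use_llvm` knob on compile steps (field names and the `addExecutable` signature shift between Zig versions, so treat this as illustrative):

    // build.zig sketch: picking the codegen backend per step.
    const std = @import("std");

    pub fn build(b: *std.Build) void {
        const exe = b.addExecutable(.{
            .name = "app",
            .root_source_file = b.path("src/main.zig"),
            .target = b.standardTargetOptions(.{}),
            .optimize = b.standardOptimizeOption(.{}),
        });

        // Debug builds default to the self-hosted backend (x86_64
        // only for now); release builds default to LLVM. Setting
        // use_llvm explicitly overrides the default either way.
        exe.use_llvm = false; // or `true` to force LLVM in Debug

        b.installArtifact(exe);
    }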
Indeed. Also note this isn't the real public C API. This is, as I noted in the blog post as a disclaimer, an internal-only C API, so it is admittedly very ugly.
(I assume you know this, just adding context for other readers)
Yep, sorry, in my mind I was going to mention it (with "even if it's only me seeing private code like this" or something similar) and later forgot after some edits to fix the formatting...
Anyway, I hope it doesn't get lost that this comment was only meant to be half informative, half public statement, and half a light joke :)