> The buffer is the UI, rendered by Emacs's extremely optimised text display machinery
Doesn't Emacs lag like crazy in files with very long lines? Why is this still a problem? Every modern editor handles it gracefully. I remember reading something about regexes being used for syntax highlighting. This looks like a problem in the rendering layer which shouldn't be too hard to fix without touching the core engine. Are there any other problems that make it difficult to fix without disabling useful features?
The problem with long lines was reportedly markedly improved in Emacs 29:
> Emacs is now capable of editing files with very long lines. The display of long lines has been optimized, and Emacs should no longer choke when a buffer on display contains long lines. The variable 'long-line-threshold' controls whether and when these display optimizations are in effect.
>
> A companion variable 'large-hscroll-threshold' controls when another set of display optimizations are in effect, which are aimed specifically at speeding up display of long lines that are truncated on display.
>
> If you still experience slowdowns while editing files with long lines, this may be due to line truncation, or to one of the enabled minor modes, or to the current major mode. Try turning off line truncation with 'C-x x t', or try disabling all known slow minor modes with 'M-x so-long-minor-mode', or try disabling both known slow minor modes and the major mode with 'M-x so-long-mode', or visit the file with 'M-x find-file-literally' instead of the usual 'C-x C-f'.
>
> In buffers in which these display optimizations are in effect, the 'fontification-functions', 'pre-command-hook' and 'post-command-hook' hooks are executed on a narrowed portion of the buffer, whose size is controlled by the variables 'long-line-optimizations-region-size' and 'long-line-optimizations-bol-search-limit', as if they were in a 'with-restriction' form. This may, in particular, cause occasional mis-fontifications in these buffers. Modes which are affected by these optimizations and by the fact that the buffer is narrowed, should adapt and either modify their algorithm so as not to expect the entire buffer to be accessible, or, if accessing outside of the narrowed region doesn't hurt performance, use the 'without-restriction' form to temporarily lift the restriction and access portions of the buffer outside of the narrowed region.
>
> The new function 'long-line-optimizations-p' returns non-nil when these optimizations are in effect in the current buffer.
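For mode authors, the adaptation described in that last paragraph might look roughly like the sketch below. Only 'long-line-optimizations-p' and 'without-restriction' come from the NEWS entry above; the my/ names are made-up placeholders, and this needs Emacs 29 or later.

    ;; Sketch: do a whole-buffer scan even when the long-line display
    ;; optimizations have narrowed the buffer (my/ names are placeholders).
    (defun my/scan-whole-buffer ()
      (if (long-line-optimizations-p)
          ;; The narrowing exists only to keep redisplay fast; lift it
          ;; temporarily, but only if touching the rest of the buffer
          ;; stays cheap.
          (without-restriction
            (my/do-scan (point-min) (point-max)))
        (my/do-scan (point-min) (point-max))))

    (defun my/do-scan (beg end)
      ;; Placeholder for the real work between BEG and END.
      (count-lines beg end))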
Right, but if you have a long line that is, for example, a JSON object, then surely it can't properly be validated or syntax-highlighted before the entire line has been scanned?
I do agree that Emacs can be slower than the terminal when handling long lines/files, although (depending on your use case) this can easily be mitigated by running a terminal inside Emacs.
Generally though, for everyday use, Emacs feels a lot snappier than VSCode.
Good point. Though for widget UIs you're typically rendering structured data you control, not parsing arbitrary text files. The syntax highlighting / validation concern applies to editing code, not to building interactive interfaces.
> Generally though, for everyday use, Emacs feels a lot snappier than VSCode.
The long-line issue is real, though my statement was specifically about building UIs with widgets/overlays/text properties - not handling arbitrary files. In that context, Emacs's display engine is genuinely well-optimized: it handles overlays, faces, text properties, and redisplay regions efficiently.
When you're building a UI, you control the content. Lines are short by design (form fields, buttons, lists). The pathological case of a 50KB minified JSON line simply doesn't occur.
The long-line problem stems from how Emacs calculates display width for bidirectional text and variable-pitch fonts - it needs to scan the entire line. That's orthogonal to rendering widgets or interactive buffers.
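To make that concrete, here's a minimal sketch of the kind of buffer I mean, built with the stock widget library; the buffer name, labels and field sizes are purely illustrative.

    ;; A tiny form built with widget.el: every line is short by construction,
    ;; so the long-line pathology never comes up.
    (require 'widget)
    (require 'wid-edit)

    (defun my/demo-ui ()
      "Pop up a small form-style buffer built from widgets."
      (interactive)
      (switch-to-buffer "*widget demo*")
      (kill-all-local-variables)
      (let ((inhibit-read-only t))
        (erase-buffer))
      (widget-insert "Settings\n\n")
      (widget-create 'editable-field
                     :size 20
                     :format "Name: %v\n"
                     "")
      (widget-create 'checkbox t)
      (widget-insert " Enable notifications\n\n")
      (widget-create 'push-button
                     :notify (lambda (&rest _ignore) (message "Saved"))
                     "Save")
      (use-local-map widget-keymap)
      (widget-setup))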
so-long-mode is the best fix for this issue, but it disables syntax highlighting and line numbers. VSCode can handle long lines just fine without disabling anything.
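To be fair, so-long is somewhat tunable. A sketch of the usual knobs follows; the threshold value is only an example, and how much highlighting actually survives depends on what ends up in 'so-long-minor-modes' in your setup.

    ;; Enable so-long globally; lines longer than 'so-long-threshold'
    ;; characters trigger whatever 'so-long-action' selects.
    (global-so-long-mode 1)
    (setq so-long-threshold 10000)  ; example value, not the default

    ;; Keep the buffer's major mode and only disable the minor modes in
    ;; 'so-long-minor-modes' (plus the variable overrides), instead of
    ;; switching the whole buffer over to so-long-mode.
    (setq so-long-action 'so-long-minor-mode)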
> The buffer is the UI, rendered by Emacs's extremely optimised text display machinery
The author is known in the community as a mere packager whose knowledge of the nitty-gritty derives entirely from hearsay. Perhaps he read the long-winded preamble to xdisp.c, written in 1995 and boasting of all manner of optimisations. But that was written so long ago that almost no one believes most of those optimisations still matter, what with thirty years of bitrot.
I saw him as a content creator doing some research, like PewDiePie or MrBeast. He's a good writer, though. The article was a fun read.
> Your goal here is to make the best YOUTUBE videos possible. That’s the number one goal of this production company. It’s not to make the best produced videos. Not to make the funniest videos. Not to make the best looking videos. Not the highest quality videos. It’s to make the best YOUTUBE videos possible. Everything we want will come if we strive for that. Sounds obvious but after 6 months in the weeds a lot of people tend to forget what we are actually trying to achieve here.
In September there was a supply-chain attack on NPM where the payload injected hooks into the DOM API. By contrast, changing the behaviour of encapsulated components, like Java's standard library, is no longer possible unless the application explicitly allows code to break the integrity of the encapsulated component.
Without getting into specific stuff I've run into: automated stuff just breaks.
This is a living organism with moving parts and a time limit: you update nginx with a change that accidentally breaks .well-known, or you upgrade to a new version of Ubuntu and suddenly some dependency isn't loading correctly, or the UUID generator you depended on to generate the name for the challenge doesn't get loaded, or certbot becomes obsolete because of some API change and you can't upgrade to the latest version because the OS is older and you installed it from the package manager.
You eventually see it in your exception monitoring, or when an SSL monitor detects the cert is about to expire. Then you have to drop that other urgent thing you needed to get done, come in and debug it, fix it, and re-issue all the certs at the allowed rate limit. That's assuming you have that monitoring - most sites probably don't.
If you detect the issue with 1/3 of the cert's lifetime left, you now have 15 days to figure it out instead of 30. If you can't finish in time, or you don't learn about it in time, the site(s) hard-fail in every web browser that visits, and you've effectively got a full site outage until you repair it.
So you discover it's because certbot doesn't work with a new API change, and you can't upgrade it with the package manager. Now you need to figure out how to compile it from source, but it doesn't like the Python that's currently installed, so you need to install that from source too, but that version of Python breaks your Python web app, so you have to figure out how to migrate your app to that version of Python first, and the programmer who can do that is on a week-long whitewater rafting trip in Idaho.
Aside from all that, what happens if a hacker manages to wreck the Let's Encrypt infrastructure so badly that they need two weeks to get it back online? The Internet Archive was offline for weeks after a DDoS attack. The Cloudflare outage took one site of mine down for less than 10 minutes; it's not hard to imagine a much worse outage for the web here.
AKA the real world, a place where you have older appliances, legacy servers, contractual constraints and better things to do than watch a nasty yearly ritual become a nasty monthly ritual.
I need to make sure SSL is working on a bunch of very heterogeneous stuff, but I'm not in a position to replace it and/or pick an authority with better automation. I just suck it up and dread it when a "cert day" looms closer.
Sometimes these kinds of decisions seem to come from bodies that think the Internet exists solely for doing the thing they do.
Happens to me with the QA people at our org. They behave as if anything happens just for the purpose of having them measure it, creating a Heisenberg situation where their incessant, narrow-minded meddling makes actually doing anything nearly impossible.
The same happens with manual processes done once a year - you just aren't aware of it until renewal.
Consider the inevitable need for immediate renewal due to an incident. Would you rather have this renewal happen via a fast, automated and well-tested process, or a silently broken slow and manual one?
The manual process was annoying but it wasn't complicated.
You knew exactly when it was going to fail and you could put it on your calendar to schedule the work, which consisted of an email validation process and running a command to issue the certificate request from your generated key.
The only moving part was the issued certificate, which you copied into place before reloading the server. There are a lot fewer things to go wrong in that process, which at one point I could do once every two years, than in a really complicated automated background task that has to happen within 15 days.
I love short-duration automated free certs, but I think we really need to have a conversation about how short we can make them before humans no longer have the time required to fix problems.
There are also alternatives to Cloudflare and AWS; that didn't stop their outages from taking down pretty much the entire internet. I'm not sure what your point is - pretty much everybody is using Let's Encrypt, and it would very much be a huge outage event for the web if something were to go seriously wrong with it.
One key difference: a cert is a “pickled” thing; it's stored and kept until it is successfully renewed. So if you attempt to renew at day 30 and LE is down, you still have nearly two weeks to retrieve a new cert. Hopefully LE will get back on their feet within that time. Otherwise you have Google, ZeroSSL, etc., where you can fetch a replacement cert.