Hacker News | new | past | comments | ask | show | jobs | submit | malthejorgensen's comments | login

Don’t you have to manually “refresh” Postgres materialized views, essentially making them an easier-to-implement cache (the Redis example in the blog post) rather than the always-auto-updating kind of materialized view the blog post author is actually touting?


The real bummer is not that you have to manually refresh them, it's that refreshing them means recomputing the entire view. If you could pick and choose what gets refreshed, you might just sometimes have a stale cache here and there while parts of it get updated. But refreshing a materialized view that is anything but small, or whose query is at all interesting, runs the risk of blowing up your write instance.
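
To make the cost difference concrete, here is a toy sketch in Python (not SQL, and not how Postgres does it internally) of a trivial SUM-aggregate view: a full refresh rescans every base row, while incremental view maintenance folds in only the delta.

```python
# Toy model: full refresh vs. incremental maintenance of a view
# defined as total = SUM(amount).

rows = [10, 20, 30]  # hypothetical base table

def full_refresh(rows):
    # What a full REFRESH effectively does: rescan everything, O(n).
    return sum(rows)

class IncrementalView:
    def __init__(self, rows):
        self.total = sum(rows)  # one-time initial build

    def on_insert(self, amount):
        # IVM-style: apply only the delta, O(1) per change.
        self.total += amount

    def on_delete(self, amount):
        self.total -= amount

view = IncrementalView(rows)
rows.append(40)
view.on_insert(40)
assert view.total == full_refresh(rows) == 100
```

The gap between O(1)-per-change and O(n)-per-refresh is exactly why a big, frequently refreshed view can hammer the write instance.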

For this reason I would strongly advise, in the spirit of https://wiki.postgresql.org/wiki/Don't_Do_This, that you Don't Do Materialized Views.

Sure, Differential/Timely Dataflow exist and they're very interesting, but I haven't gotten to build a database system with them, and the systems that provide them in a usable form to end users (e.g. Materialize) are too non-boring for me to want to deploy in a production app.


This is where Oracle has an upper hand in the materialised view department: a materialised view can be refreshed either manually or automatically, with both incremental and full refresh options available.


Yeah, but then you're using Oracle.


Out of the box, you're right, but there are extensions that do just that:

https://github.com/sraoss/pg_ivm

It's not available on RDS, however, so I've never had the chance to try it myself.


I think it's impossible to do an incremental update in an arbitrary case. Imagine an m-view based on a query that selects top 100 largest purchases during last 30 days on an e-commerce site. Or, worse, a query that selects the largest subtree of followers on a social network site.

Only certain kinds of conditions, such as a rolling window over a timestamp field, seem amenable to efficient incremental updates. What am I missing?
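
A toy sketch (hypothetical purchase amounts, Python) of the asymmetry with a "top 3 largest purchases" view: inserts can be handled incrementally, but a deletion (e.g. a purchase aging out of the 30-day window) can force a rescan of the base data, because the replacement might be any row previously discarded.

```python
import heapq

K = 3
purchases = [50, 120, 70, 200, 30]   # base table (made-up amounts)
topk = heapq.nlargest(K, purchases)  # initial build: [200, 120, 70]

def on_insert(topk, amount):
    # Incremental: only compare against the current minimum of the top-K.
    if len(topk) < K or amount > min(topk):
        topk = heapq.nlargest(K, topk + [amount])
    return topk

def on_delete(purchases, amount):
    # No incremental shortcut: we must recompute from the base table.
    purchases.remove(amount)
    return heapq.nlargest(K, purchases)

purchases.append(500)
topk = on_insert(topk, 500)
assert topk == [500, 200, 120]

topk = on_delete(purchases, 500)  # the 500 purchase "ages out"
assert topk == [200, 120, 70]
```

This is the same reason a rolling window over a timestamp is the friendly case: expiry there is predictable, not arbitrary.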



That's probably beyond the scale appropriate for a materialized view. For that I'd use something like dbt.


Yes, you need to refresh the materialized views periodically. Which means that, just like with any other caching mechanism, you're solving one problem (query performance) but introducing another (cache invalidation). I've personally used Postgres MVs to great success, but there are tradeoffs.


So the author is wrong that they’re automatically kept in sync?


No, they conflate the two concepts, though they acknowledge the special case here:

> There are a few startups these days peddling a newfangled technology called “incremental view maintenance” or “differential dataflow”

I think they should be a little more explicit about the differences though, because it can be very misleading for those who aren't aware of the distinction.


Oh interesting, I didn’t know that. I’ve been in MySQL/Vitess land for so long that I haven’t used Postgres in several years. That’s disappointing!


In the ancient times there was a startup called The Grid. Very hyped. They implemented the Cassowary layout algorithm for the web and called it GSS.

I loved the idea of GSS but it never caught on: https://gss.github.io/


I consulted with them briefly. Felt like it was stimulant-fueled hustleporn. They had an interesting concept, but not surprised they didn't turn out a sustainable product.


> in the ancient times

2014

Rage successfully baited - that was like three months ago, tops!

I wonder what ever happened to The Grid. They were indeed very hyped, but fairly obviously vaporware too, IIRC. At least in terms of marketing, where they claimed AI web-property generation.


Very ahead of the times for 2014.

> GSS was created by Dan Tocchini, leader of the TheGrid, the first AI website platform.


I programmed that GSS thing. Fundamentally the issue is that Cassowary can’t do 2D constraints and can’t do line wraps. It’s dumb.


sourcehut isn’t weird at all.

It’s made by Drew DeVault, who is mostly well-respected in the hacker community, and it exists precisely to be an alternative to BigCo-owned source hosts like GitHub, GitLab and Bitbucket.


Drew isn't well-respected, he's been far too antagonistic to far too many people over the years for that.

Latest news is that he authored/published a controversial character assassination on Richard Stallman while trying and failing to stay anonymous. Then some further digging after this unmasking found he's into pedophilic anime. Sitting on his computer uploading drawings of scantily-clad children to NSFW subreddits.

No-one with any decency can respect that behavior, it's disgusting.


Feels similar to `sd` (https://github.com/chmln/sd)

which in my mind was the first “replace” version of ripgrep

grep -> ripgrep

sed -> sd


The argument made here is like saying “RIP C, now come over to my new project: C++”.

Nope. Nope, nope, nope. Neither C nor Jekyll are dead.


I’d imagine it’s more like an adjacency-list structure with various indexes (similar to regular relational DBs) to allow lookups based on node properties.
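
A minimal sketch of that idea in Python (all names made up, not any particular graph database): adjacency sets for edges, plus a property index maintained on write so lookups don't scan every node.

```python
from collections import defaultdict

class Graph:
    """Toy adjacency-list store with a (key, value) -> nodes index."""

    def __init__(self):
        self.props = {}                 # node id -> property dict
        self.adj = defaultdict(set)     # node id -> neighbor ids
        self.index = defaultdict(set)   # (key, value) -> node ids

    def add_node(self, nid, **props):
        self.props[nid] = props
        for k, v in props.items():
            self.index[(k, v)].add(nid)  # keep the index current on write

    def add_edge(self, a, b):
        self.adj[a].add(b)

    def find(self, **kv):
        # Index lookup instead of a full node scan, analogous to
        # hitting a B-tree index in a relational DB.
        (k, v), = kv.items()
        return self.index[(k, v)]

g = Graph()
g.add_node(1, label="person", name="ada")
g.add_node(2, label="person", name="bob")
g.add_edge(1, 2)
assert g.find(name="ada") == {1}
assert 2 in g.adj[1]
```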


My impression is that most editors already use this or similar optimized data structures for representing the text internally.


Emacs uses a much less advanced data structure called a "gap buffer":

https://www.gnu.org/software/emacs/manual/html_node/elisp/Bu...

https://news.ycombinator.com/item?id=15199642

It's basically just two stacks of lines, where a line is an array of characters. One stack has the lines before the cursor, the other has the lines after the cursor (in reverse order).
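
A minimal Python sketch of that two-stack formulation, at character granularity for brevity (Emacs's real gap buffer is a flat byte array with a movable gap, so this is just the idea, not the implementation):

```python
class GapBuffer:
    """Two stacks around a cursor: edits and moves at the gap are O(1)."""

    def __init__(self, text="", cursor=0):
        self.before = list(text[:cursor])            # left of cursor
        self.after = list(reversed(text[cursor:]))   # right of cursor, reversed

    def insert(self, ch):
        self.before.append(ch)        # O(1) insert at the cursor

    def delete(self):
        if self.before:
            self.before.pop()         # backspace, O(1)

    def left(self):
        if self.before:
            self.after.append(self.before.pop())

    def right(self):
        if self.after:
            self.before.append(self.after.pop())

    def text(self):
        return "".join(self.before) + "".join(reversed(self.after))

buf = GapBuffer("hello world", cursor=5)
buf.insert(",")
buf.left(); buf.left()                # cursor movement is cheap too
assert buf.text() == "hello, world"
```

The catch, as the sibling comments note, is that operations far from the gap (or pathological content like very long lines) degrade, which is where ropes win.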

I use emacs, but ropes are a much better way to go if you're starting from scratch.


Is it though? After about 20 years of Emacs I have never thought it was slow regardless of anything found in etc/JOKES


The last 20 years have been OK. Before that, not so much.

I remember starting emacs on a VAX-750 with 4 MB main memory and 8 users in the 80s.

Even in the 90s with Suns and competitors in single user usage emacs could bring things to a halt. Especially when starting a certain messaging client (hej LysKOM).

Well, back then Emacs was a resource hog, like today's VSCode using 8 GB and swapping, as someone wrote here. (I would not touch VSCode with a stick.) Times have changed, but Emacs rather little. So today it can almost be considered lightweight and efficient.


Opening any file with extra-long lines (for example minified JS files, which somehow end up littering any codebase) still brings Emacs to its knees, especially if you have line wrapping enabled.


Ah, I don't have line wrapping enabled by default.


Headline is a bit rich, since duckduckgo exists (and Bing/Microsoft if you really stretch it).

I get the point though -- duckduckgo doesn't provide a browser. But I'm guessing Brave doesn't provide a hosted email service (Gmail) or file and document hosting (Google Drive and Docs), so the analogy breaks down either way.

That snark being said, I do want more privacy on the web -- so yay Brave!


> duckduckgo doesn't provide a browser

That's not entirely true. They do have a browser for Android and iOS[0]. When I search they frequently show a small pop-up to install it.

[0] https://apps.apple.com/us/app/duckduckgo-privacy-browser/id6...


This was my first response as well (and posted a sibling that said the same).


Do we know if DDG passes through every query on demand, or does it have a caching layer or something where popular queries only hit Microsoft once a day?


Bing search API terms of use do not allow caching [0].

"You must use results you obtain through the Services only in Internet Search Experiences (as defined in the use and display requirements) and must not cache or copy results. "

But they may have a special agreement in place due to the number of searches happening on DDG (100 million + daily [1]). They're larger than Bing in many countries.

[0] - https://www.microsoft.com/en-us/bing/apis/legal

[1] - https://duckduckgo.com/traffic


Does it matter? If it's proxied through a single API "user", Microsoft can't track individual DDG users, right?


I think it still matters— maybe it's way harder to track an individual user, but you could still do some inference about queries which lead to other queries, or watching for bursts/crescendos of traffic on a particular topic in response to current events.


On your first claim: I'm skeptical this can be done past a certain scale. If DDG is sending queries from millions of users, it's much harder to untangle them.

On your second point: absolutely, Microsoft is getting very good aggregated data from their API, which is useful to them, but that's not really a privacy violation for individual users.


Yeah, that's fair— once the queries are anonymized and stripped of any locale information, there isn't too much more to go on. And while there may be technical reasons to want to cache popular searches, then you're mostly just denying your upstream analytics which are fairly reasonable for them to want to have.

OTOH, Debian deliberately provides the technical means for third parties to host verifiable mirrors of their package repository, and then makes the analytics an opt-in thing (popcon).


Totally agree – KaTeX is good enough. Browser implementors should spend their time on other things.


Browser implementors have been spending their time on other things. People have been asking for Chrome support for almost a decade now. I believe MathML support in Firefox is entirely due to volunteers. And as you can see from this post, Chrome support is being implemented by Igalia, not the core Chromium developers.


The first part rings dangerously close to solipsism to my ears, and once you're there, nothing much matters. Just trash the planet, use people, and move on, because once you're dead it doesn't matter? I might be wrong, but you can't build a civilization that way.

Second thing is that immortality creates conservatism. Old age does too, but it seems to me immortality over-indexes self-preservation over progress of civilization, where the former is just slightly less of a concern for the individual when lifetimes are as short as they are now.


> immortality creates conservatism

I want to refute this, but I have to back up a bit first.

The late Frank Schirrmacher (a German journalist and essayist who died in 2014) argued that German politics is getting more conservative over time because of demographics. Large-scale reforms or revolutions typically cause less prosperity in the short term (as societal institutions get rebuilt and investments in future institutions are required) and higher prosperity in the long term. That's a bargain that works well for young people: They get to reap the benefits for a long time and thus can tolerate austerity in the short term, but from the POV of an old person (e.g. age 60+), they get all the troubles and none of the benefits. Hence an older society is a more conservative society.

In other words, a significant part of conservatism [1] is caused by people knowing that they will die soonish (say, within the next 10-20 years) and therefore deciding that the best way to ensure prosperity within these remaining years is to defend the status quo as fiercely as possible. Immortality would fix this behavior, bringing personal incentives more in line with the incentives of society as a whole.

[1] The other part of conservatism can be attributed to people genuinely being unable to imagine a better society than the one they live in or genuinely not believing a better society to be attainable (whether justified or not). As far as I can see, that part of conservatism would be largely unaffected by the availability of immortality.


No, because if you want to live forever, you can't trash the planet.

You've got to keep nature in shape as long as possible.

