
Instead, sub-12cm disc-shaped ones are rather well understood and perform well. They suck at opening doors, though - but the 40cm one would have a similar issue.

Besides that: I, personally, am totally fine with the current state of the technology.


>an Acer 486 with 40MB drive and 32MB RAM.

32MB RAM <-- no way. 4 and 8MB were the standard (8MB being grand); you could find 16MB on some Pentiums. So a 40MB drive with 32MB RAM is an exceptionally unlikely combo.

32MB became the norm around the Pentium MMX and K6(-2).


Haha, I wondered if someone would complain about 32MB. We had the board maxed out. My grandfather’s computer before ours.

A few months after taking possession, I upgraded the disk to a luxurious 400MB.


The classic NAND-me-down. My first personal computer was a "broken" 486 system I got for $25 at a yard sale. All it needed was a hard drive.

We had a 386 DX with 32MB of RAM. I don’t think it was that uncommon. DOOM still didn’t run super smoothly, though.

Nah, as the other poster said 4 or 8 MB was what was common on 486 machines. Even less on 386. Most 386 motherboards didn't even support more than 16MB.

It could have been bought old and upgraded. Not everyone had the luxury of a brand new first computer.

Possibly, but even motherboards supporting 32MB would be rare. Perhaps on a "DX3"?

As for a new computer and price - it was like $1000 to get an AMD 486DX2-80 with 4MB RAM in '95...


So this depends on whether it was a 72-pin SIMM board. I don't think you could get there (easily?) on a 30-pin board, but 72-pin may have had native support for 64MB out of the box.

I upgraded a ~1992 Dell 486 DX2 to 36MB (original 4MB + 32MB...or was it a pair of 16MB sticks? hard to remember) around 1997 or so.

Yeah, IIRC my first computer, or at least the first one I really maintained, was a Pentium 2 with 32MB of RAM and a 2GB hard drive. Good ole Gateway PCs.

The first first computer I had was an old IBM PC.


22 years back is still this century, nothing weird about it. As for remembering stuff: 6502/48KB RAM (along with CALL -151) seems boring, I guess.

Interestingly I can't remember any specs since about 22 years ago.

First modern PC (DOS/Win3.1): a 12MHz 286, 1MB of RAM, AT keyboard, 40MB hard drive. This progressed via a 486SX/33 (4MB/170MB), and at one point a Pentium 2 600 with (eventually) 96MB of RAM and a 2GB hard drive, then a P3 of some sort, but after that it's just "whatever".


A backpack can have metal reinforcements that would make a proper weapon too. Same with broken glass bottles and whatnot.

The entire exercise is futile and pointless.


...or be very anxious and resent air travel. I don't feel any safer going through body searches, coupled with belt/coat removal, not wearing glasses, and whatnot.

Personally, I don't know a single person who feels more secure due to the checks.


That's OK - 6cm blades are allowed. You can also carry it in cabin luggage anyway.

Realistically, any broken glass bottle can be used as a blade.


Whether they are allowed or not, probably depends on the place.

In Germany, at Frankfurt, I had to dump a smaller Swiss army knife in a garbage bin to be allowed to pass.

I had it because my high-speed train of Deutsche Bahn had arrived more than one hour late, so there was no time to check in my luggage.

After losing the knife, I ran through the airport towards my gate, but I arrived there a few seconds after the gate was closed. Thus I had to spend the night at a hotel and fly the next day, despite losing my knife in the failed attempt to catch the plane. Thanks, Deutsche Bahn!


>Whether they are allowed or not, probably depends on the place.

It's an EU thing, even though the Swiss are outside... and I was sure it was a directive until:

The recommendation allows for light knives and scissors with blades up to 6 cm (2.4 in) but some countries do not accept these either (e.g. nail care items)[citation needed]

I thought it was universal, mostly since I had no issues at the airports.

Prior to the 6cm rule, I once had to run to a post office at the airport and mail the pocket knife (which is also a memento) to myself in a parcel.


Realistically, you could bring a nub of copper or steel or antler, and your glass bottle, and knap an excellent knife in a few minutes.

If you see an order of magnitude difference and a language involved in the title, it's something I refuse to read (unless it's an obvious choice - an interpreted vs. compiled/JITed one).

I have found bugs in the native JVM; usually it takes some effort, though. Printing the assembly is the easiest approach. (I consider bugs in java.lang/util/io/etc. code not an interesting case.)

Memory leaks and issues with the memory allocator are a months-long process to pin on the JVM...

In the early days (Bug Parade times), bugs were a lot more common; nowadays I'd say it'd be extreme naivete to consider the JVM the culprit from the get-go.
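
For anyone curious, here's roughly what "printing the assembly" looks like on HotSpot - a minimal sketch, assuming the hsdis disassembler plugin is installed, with a hypothetical Foo::hot standing in for the method under suspicion:

    # print the JIT-compiled assembly for one method only (needs hsdis on the JVM's path)
    java -XX:+UnlockDiagnosticVMOptions \
         -XX:CompileCommand=print,Foo::hot \
         Foo

    # or dump everything the JIT emits
    java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly Foo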


Yup, zstd is better. Overall, use zstd for pretty much anything that can benefit from general-purpose compression. It's a beyond-excellent library, tool, and algorithm (set of them).

Brotli w/o a custom dictionary is a weird choice to begin with.


Brotli makes a bit of sense considering this is a static asset; it compresses somewhat more than zstd. This is why brotli is pretty ubiquitous for precompressed static assets on the Web.

That said, I personally prefer zstd as well, it's been a great general use lib.
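
As a sketch of that precompression pattern (file paths are placeholders; -k keeps the original so the server can fall back to it):

    # build step: compress each static asset once, at high settings
    brotli -q 11 -k site/app.js     # writes site/app.js.br
    zstd -19 -k site/app.js         # writes site/app.js.zst
    # the web server then picks app.js.br / app.js.zst based on Accept-Encoding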


You need to crank up zstd compression level.

zstd is Pareto better than brotli - compresses better and faster


I thought the same, so I ran brotli and zstd on some PDFs I had laying around.

  brotli 1.0.7 args: -q 11 -w 24
  zstd v1.5.0  args: --ultra -22 --long=31 
                 | Original | zstd    | brotli
  RandomBook.pdf | 15M      | 4.6M    | 4.5M
  Invoice.pdf    | 19.3K    | 16.3K   | 16.1K
I made a table because I wanted to test more files, but almost all PDFs I downloaded/had stored locally were already compressed and I couldn't quickly find a way to decompress them.

Brotli seemed to have a very slight edge over zstd, even on the larger pdf, which I did not expect.


EDIT: Something weird is going on here. When compressing with zstd in parallel it produces the garbage results seen here, but when compressing on a single core, it produces results competitive with Brotli (37M). See: https://news.ycombinator.com/item?id=46723158

I did my own testing where Brotli also ended up better than ZSTD: https://news.ycombinator.com/item?id=46722044

Results by compression type across 55 PDFs:

    +------+------+-----+------+--------+
    | none | zstd | xz  | gzip | brotli |
    +------+------+-----+------+--------+
    | 47M  | 45M  | 39M | 38M  | 37M    |
    +------+------+-----+------+--------+

Turns out that these numbers are caused by APFS weirdness. I used 'du' to get them which reports the size on disk, which is weirdly bloated for some reason when compressing in parallel. I should've used 'du -A', which reports the apparent size.

Here's a table with the correct sizes, reported by 'du -A' (which shows the apparent size):

    +---------+---------+--------+--------+--------+
    |  none   |  zstd   |   xz   |  gzip  | brotli |
    +---------+---------+--------+--------+--------+
    | 47.81M  | 37.92M  | 37.96M | 38.80M | 37.06M |
    +---------+---------+--------+--------+--------+
These numbers are much more impressive. Still, Brotli has a slight edge.

Worth considering the compress/decompress overhead, which is also lower in brotli than zstd from my understanding.

Also, worth testing zopfli since its decompression is gzip-compatible.
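
A sketch of what that zopfli test could look like (file name is just an example; --i controls how many optimization iterations zopfli spends):

    # zopfli writes a gzip-compatible .gz next to the input
    zopfli --i100 learnopengl.pdf            # produces learnopengl.pdf.gz
    gzip -dc learnopengl.pdf.gz | wc -c      # any stock gzip can decompress it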


> I couldn't quickly find a way to decompress them

    pdftk in.pdf output out.pdf uncompress

Does your source .pdf material have FlateDecode'd chunks or did you fully uncompress it?

I wasn't sure. I just went in with the (probably faulty) assumption that if it compresses to less than 90% of the original size, it has enough "non-randomness" to compare compression performance.

Ran the tests again with some more files, this time decompressing the PDFs in advance. I picked some widely available PDFs to make the experiment reproducible.

  file            | raw         | zstd       (%)      | brotli     (%)     |
  gawk.pdf        | 8.068.092   | 1.437.529  (17.8%)  | 1.376.106  (17.1%) |
  shannon.pdf     | 335.009     | 68.739     (20.5%)  | 65.978     (19.6%) |
  attention.pdf   | 24.742.418  | 367.367    (1.4%)   | 362.578    (1.4%)  |
  learnopengl.pdf | 253.041.425 | 37.756.229 (14.9%)  | 35.223.532 (13.9%) |
For learnopengl.pdf I also tested the decompression performance, since it is such a large file, and got the following (less surprising) results using 'perf stat -r 5':

  zstd:   0.4532 +- 0.0216 seconds time elapsed  ( +-  4.77% )
  brotli: 0.7641 +- 0.0242 seconds time elapsed  ( +-  3.17% )
The conclusion seems to be consistent with what brotli's authors have said: brotli achieves slightly better compression, at the cost of decompressing at a little over half the speed.

What's the assumption we can potentially target as the reason for the counter-intuitive result?

That the data in PDF files is noisy, and zstd should perform better on noisy files?


What's counter-intuitive about this outcome?

Maybe that was too strongly worded, but there was an expectation for zstd to outperform. So the fact it didn't means the result was unexpected. I generally find it helpful to understand why something performs better than expected.

Isn't zstd primarily designed to provide decent compression ratios at amazing speeds? The reason it's exciting is mainly that you can add compression to places where it didn't necessarily make sense before because it's almost free in terms of CPU and memory consumption. I don't think it has ever had a stated goal of beating compression ratio focused algorithms like brotli on compression ratio.

I actually thought zstd was supposed to be better than Brotli in most cases, but a bit of searching reveals you're right... Brotli, especially at its highest compression levels (10/11), often exceeds zstd at its highest compression levels (20-22). Both are very slow at those levels, although perfectly suitable for "compress once, decompress many" applications, of which PDF is obviously one.

Are you sure? Admittedly I only have 1 PDF in my homedir, but no combination of flags to zstd gets it to match the size of brotli's output on that particular file. Even zstd --long --ultra -22.

On max compression (11 vs zstd's 22) of text, brotli will be around 3-4% denser... and a lot slower. Decompression-wise, zstd is over 2x faster.

The PDFs you have are already compressed with deflate (zip).


I love zstd but this isn't necessarily true.

Not with small files.

If that's about using predefined dictionaries, zstd can use them too.

If brotli has a different advantage on small source files, you have my curiosity.

If you're talking about max compression, zstd likely loses out there; the answer seems to vary based on the tests I look at, but it seems to be better across a very wide range.


No, it's literally just compressing small files without training a zstd dict or plugging in external dictionaries (not counting the built-in one that brotli has). Especially for English text, brotli at the same speed as zstd gives better results for small data (in the kilobyte to a few megabytes range).
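
For reference, a sketch of the trained-dictionary route for small files that's being skipped here (file names are placeholders; the flags are the standard zstd CLI ones):

    # train a shared dictionary on a corpus of small, similar files
    zstd --train samples/*.json -o small.dict
    # compress/decompress individual small files against that dictionary
    zstd -D small.dict one.json -o one.json.zst
    zstd -D small.dict -d one.json.zst -o one.decoded.json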

> Pareto

I don’t think you’re using that correctly.


It's a correct use of Pareto, short for Pareto frontier, if the claim being made is "for every needed compression ratio, zstd is faster; and for every needed time budget, zstd compresses better". (Whether this claim is true is another matter.)

Brotli is ubiquitous because Google recommends it. While Deflate definitely sucks and is old, Google ships brotli in Chrome, and since Chrome is the de facto default platform nowadays, I'd imagine it was chosen because it was the lowest-effort lift.

Nevertheless, I expect this to be JBIG2 all over again: almost nobody will use this because we've got decades of devices and software in the wild that can't, and 20% filesize savings is pointless if your destination can't read the damn thing.


Brotli compresses my files way better, but it's doing it way slower. Anyway, the universal statement "zstd is better" is not valid.

On max compression ("--ultra -22"), zstd is likely to be 2-4% less dense (larger) on text-like input, while brotli takes over 2x the time to compress. zstd decompression is also much faster, usually over 2x.

I have not tried using a dictionary for zstd.


This bizarre move has all the hallmarks of embrace-extend-extinguish rather than technical excellence.

>The US has gotten tremendous value from AI agents

Any source for that part...?


I read this with a large /s on the end...


Fair point - yet the very official US stance is to reduce regulation and whatnot. If it were sarcasm, it'd be the "US population". I could concede 'tremendous' for obvious reasons, but still.

That would likely be the 1st time I've missed sarcasm... I need a few more words, not just the '/s' (I never use /s).


Hah, also fair point - there are plenty of people who would say this completely earnestly! :)

