instead, sub-12cm disc-shaped ones are rather well understood and perform well. They suck at opening doors though - but the 40cm one would have a similar issue.
Besides that: I, personally, am totally fine with the current state of the technology.
32MB RAM <-- no way. 4 and 8MB were the standard (8MB being grand); you could find 16MB on some Pentiums. So a 40MB drive with 32MB of RAM is an exceptionally unlikely combo.
Nah, as the other poster said, 4 or 8 MB was what was common on 486 machines. Even less on a 386. Most 386 motherboards didn't even support more than 16MB.
So this depends on whether it was a 72-pin SIMM board. I don't think you could get there (easily?) on a 30-pin board, but 72-pin may have had native support for 64MB out of the box.
Yeah, IIRC my first computer, or at least the first one I really maintained, was a Pentium II with 32MB of RAM and a 2GB hard drive. Good ole Gateway PCs.
Interestingly, I can't remember any specs from the last 22 years or so.
For my first modern PC (DOS/Win3.1) I had a 12MHz 286, 1MB of RAM, AT keyboard, 40MB hard drive. This progressed via a 486SX/33 with 4MB/170MB, and at one point a Pentium II 600 with (eventually) 96MB of RAM and a 2GB hard drive, then a P3 of some sort, but after that it's just "whatever".
...or be very anxious and resent air travel. I don't feel any safer going through body searches, coupled with belt/coat removal, not wearing glasses, and whatnot.
Personally, I don't know a single person who feels more secure due to the checks.
Whether they are allowed or not, probably depends on the place.
In Germany, at Frankfurt, I had to dump a small Swiss army knife in a garbage bin to be allowed to pass.
I had it on me because my Deutsche Bahn high-speed train had arrived more than an hour late, so there was no time to check in my luggage.
After losing the knife I ran through the airport towards my gate, but I arrived a few seconds after the gate had closed. So I had to spend the night at a hotel and fly the next day, despite having lost my knife in the failed attempt to catch the plane. Thanks, Deutsche Bahn!
>Whether they are allowed or not, probably depends on the place.
It's an EU thing, even though the Swiss are outside it... and I was sure it was a directive until:
> The recommendation allows for light knives and scissors with blades up to 6 cm (2.4 in) but some countries do not accept these either (e.g. nail care items)[citation needed]
I thought it was universal mostly since I had no issues at the airports.
Prior to the 6 cm rule, I once had to run to a post office at the airport and mail myself a parcel containing the pocket knife (which is also a memento).
if you see an order-of-magnitude difference and a language named in the title, it's something I refuse to read (unless it's an obvious case - an interpreted vs. compiled/JIT comparison)
I have found bugs in the native JVM; usually it takes some effort, though. Printing the assembly is the easiest way to pin one down. (I don't consider bugs in the java.lang/util/io/etc. code an interesting case.)
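For reference, the kind of invocation I mean - just a sketch, and the full -XX:+PrintAssembly route assumes you've installed the hsdis disassembler plugin into the JDK (the class and method names here are made up):

    # dump JIT-compiled assembly for everything (needs hsdis)
    java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly MyBenchmark

    # or narrow it to one hot method
    java -XX:+UnlockDiagnosticVMOptions \
         -XX:CompileCommand=print,com.example.Hot::loop MyBenchmark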
Memory leaks and issues with the memory allocator are a months-long process to pin on the JVM...
In the early days (Bug Parade times) bugs were a lot more common; nowadays I'd say it'd be extreme naivete to consider the JVM the culprit from the get-go.
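When I do have to chase the allocator side, the usual starting point (sketch only; app.jar and the pid are placeholders) is HotSpot's Native Memory Tracking plus jcmd diffs over time:

    # start with native memory tracking enabled
    java -XX:NativeMemoryTracking=summary -jar app.jar

    # snapshot, wait, then diff to see which JVM subsystem grows
    jcmd <pid> VM.native_memory baseline
    jcmd <pid> VM.native_memory summary.diff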
yup, zstd is better. Overall, use zstd for pretty much anything that can benefit from general-purpose compression. It's a beyond-excellent library, tool, and algorithm (set of algorithms, really).
Brotli w/o a custom dictionary is a weird choice to begin with.
Brotli makes a bit of sense considering this is a static asset; it compresses somewhat better than zstd. This is why brotli is pretty ubiquitous for precompressed static assets on the Web.
That said, I personally prefer zstd as well; it's been a great general-purpose lib.
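To make the "precompressed static assets" bit concrete, the usual pattern (just a sketch, file name made up) is to emit .br/.zst siblings at build time and let the web server pick one based on the client's Accept-Encoding:

    # generate app.js.br and app.js.zst next to the original
    brotli -q 11 -k app.js
    zstd -19 -k app.js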
I made a table because I wanted to test more files, but almost all PDFs I downloaded/had stored locally were already compressed and I couldn't quickly find a way to decompress them.
Brotli seemed to have a very slight edge over zstd, even on the larger pdf, which I did not expect.
EDIT: Something weird is going on here. When compressing with zstd in parallel it produces the garbage results seen here, but when compressing on a single core it produces results competitive with Brotli (37M). See: https://news.ycombinator.com/item?id=46723158
Turns out these numbers are caused by APFS weirdness. I used 'du' to get them, which reports the size on disk, and that is weirdly bloated for some reason when compressing in parallel. I should've used 'du -A', which reports the apparent size.
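For anyone reproducing this on macOS, the difference is visible directly (file name is a placeholder):

    du -h  out.pdf.zst    # size on disk - the bloated number I reported
    du -Ah out.pdf.zst    # apparent size - what the file actually holds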
Here's a table with the correct sizes, reported by 'du -A' (which shows the apparent size):
I wasn't sure. I just went in with the (probably faulty) assumption that if it compresses to less than 90% of the original size that it had enough "non-randomness" to compare compression performance.
Ran the tests again with some more files, this time decompressing the PDF in advance. I picked some widely available PDFs to make the experiment reproducible.
For learnopengl.pdf I also tested the decompression performance, since it is such a large file, and got the following (less surprising) results using 'perf stat -r 5':
The conclusion seems to be consistent with what brotli's authors have said: brotli achieves slightly better compression, at the cost of a little over half the decompression speed.
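Roughly, the decompression measurements look like this (exact flags may differ; this assumes you've already produced the .zst/.br files from learnopengl.pdf, and -f is only there so the tools agree to write to /dev/null):

    perf stat -r 5 zstd   -d -f learnopengl.pdf.zst -o /dev/null
    perf stat -r 5 brotli -d -f learnopengl.pdf.br  -o /dev/null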
Maybe that was too strongly worded, but there was an expectation for zstd to outperform, so the fact that it didn't makes the result unexpected. I generally find it helpful to understand why something performs better than expected.
Isn't zstd primarily designed to provide decent compression ratios at amazing speeds? The reason it's exciting is mainly that you can add compression to places where it didn't necessarily make sense before because it's almost free in terms of CPU and memory consumption. I don't think it has ever had a stated goal of beating compression ratio focused algorithms like brotli on compression ratio.
I actually thought zstd was supposed to be better than Brotli in most cases, but a bit of searching reveals you're right... Brotli, especially at its highest compression levels (10/11), often exceeds zstd at its highest compression levels (20-22). Both are very slow at those levels, although they're perfectly suitable for "compress once, decompress many" applications, of which the PDF spec is obviously one.
Are you sure? Admittedly I only have 1 PDF in my homedir, but no combination of flags to zstd gets it to match the size of brotli's output on that particular file. Even zstd --long --ultra -22.
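Concretely, the two ends I'm comparing look roughly like this (the PDF name is a placeholder):

    zstd --ultra -22 --long -k some.pdf     # densest practical zstd setting
    brotli -q 11 -k some.pdf                # brotli at max quality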
If that's about using predefined dictionaries, zstd can use them too.
If brotli has a different advantage on small source files, you have my curiosity.
If you're talking about max compression, zstd likely loses out there - the answer seems to vary based on the tests I look at - but zstd seems to be better across a very wide range otherwise.
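On the dictionary point, with zstd that's roughly (paths made up):

    # train a dictionary on a pile of small, similar files...
    zstd --train samples/*.json -o my.dict

    # ...then use it for small inputs of the same kind
    zstd -D my.dict new_record.json
    zstd -D my.dict -d new_record.json.zst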
No, it's literally just compressing small files, without training a zstd dict or plugging in external dictionaries (not counting the built-in one that brotli has). Especially for English text, brotli at the same speed as zstd gives better results for small data (in the kilobyte to a-few-megabytes range).
It's a correct use of Pareto, short for Pareto frontier, if the claim being made is "for every needed compression ratio, zstd is faster; and for every needed time budget, zstd compresses at least as well". (Whether that claim is true is another matter.)
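Spelled out, with speed s and ratio r per codec setting (my notation, just to make the claim precise):

    % a setting x dominates y iff s_x >= s_y and r_x >= r_y,
    % with at least one inequality strict.
    % "zstd is Pareto-better" then reads:
    \forall y \in \text{brotli}\;\exists x \in \text{zstd}:\; s_x \ge s_y \;\wedge\; r_x \ge r_y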
brotli is ubiquitous because Google recommends it. While Deflate definitely sucks and is old, Google ships brotli in Chrome, and since Chrome is the de facto default platform nowadays, I'd imagine it was chosen because it was the lowest-effort lift.
Nevertheless, I expect this to be JBIG2 all over again: almost nobody will use this because we've got decades of devices and software in the wild that can't, and 20% filesize savings is pointless if your destination can't read the damn thing.
On max compression ("--ultra -22"), zstd is likely to be 2-4% less dense (larger) than brotli on text-like input, while brotli takes over 2x the time to compress. zstd's decompression is also much faster, usually over 2x.
Fair point - yet the very official US stance is to reduce regulation and whatnot. If it was sarcasm, it'd be aimed at the "US population". I could contribute 'tremendous' for obvious reasons, but still.
That would likely be the first time I've missed sarcasm... I need a few more words, not just a '/s' (I never use /s).