Hacker News | Mr_Minderbinder's comments

> Some people are calling it the "American century of humiliation"

They should wait until some or all of the following things have happened:

1. Camp David is sacked, looted and burned to the ground by foreign troops. [1]

2. Foreign naval vessels patrol American rivers to protect foreign corporate interests in America. [2]

3. Foreign nations have unrestricted access to American ports and trade. [3]

4. America pays a large indemnity for attempting to resist. [4]

5. Foreign nationals become immune to US law. [5]

6. America suffers multiple military defeats and territorial losses. [6]

7. This goes on unfettered for 100 years.

All in all, perhaps it is a bit early to call it that.

[1] https://en.wikipedia.org/wiki/Old_Summer_Palace#Destruction

[2] https://en.wikipedia.org/wiki/Yangtze_Patrol

[3] https://en.wikipedia.org/wiki/Unequal_Treaties

[4] https://en.wikipedia.org/wiki/Boxer_Indemnity#The_clauses

[5] https://en.wikipedia.org/wiki/Extraterritoriality#China

[6] https://en.wikipedia.org/wiki/Century_of_humiliation#History


> In terms of strength he's the weakest player to win in half a century even in absolute terms.

Gukesh is arguably stronger than any of Khalifman, Kasimdzhanov, or Ponomariov, who won the FIDE title before it was re-unified. Also, his current rating is higher than either Karpov’s or Kasparov’s was when they first won the title. His rating when he first won was about the same as Fischer’s when Fischer first won. Neither Kramnik nor Anand was clearly the best player throughout the entirety of his reign, and both of their rankings fluctuated within the top ten.


> Also, his current rating is higher than either Karpov’s or Kasparov’s was when they first won the title. His rating when he first won was about the same as Fischer’s when Fischer first won.

This doesn't really mean anything. Rating is a purely relative system: the only thing that matters when performing Elo calculations is the difference in rating between the two players. The absolute value of an Elo rating carries no real meaning and drifts over time based on the volume, skill level, and initial ratings of lower-level players. Since these change frequently, it's pretty much useless to compare ratings separated in time by more than a decade or so, maybe less. 50+ years is certainly far too long.
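To make the relativity point concrete, here is a minimal sketch of the standard Elo arithmetic (the K-factor of 20 below is an arbitrary illustrative choice; rating bodies use various values). The expected score depends only on the rating difference, and every update is zero-sum:

    # Expected score depends only on the rating *difference*.
    def expected_score(rating_a, rating_b):
        return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

    # Zero-sum update: whatever one player gains, the other loses.
    # score_a is 1 for a win, 0.5 for a draw, 0 for a loss.
    def update(rating_a, rating_b, score_a, k=20.0):
        delta = k * (score_a - expected_score(rating_a, rating_b))
        return rating_a + delta, rating_b - delta

    # A 2800 vs 2700 pairing behaves exactly like 1800 vs 1700:
    assert expected_score(2800, 2700) == expected_score(1800, 1700)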


My views on this, which are mature and have been held for many years now, are mostly informed by the results obtained by Kenneth Regan and Guy Haworth in their paper “Intrinsic Chess Ratings”, which, unless you have intelligence to the contrary, is the only rigorous treatment of this issue performed to date and remains the only argument that has any persuasive hold over me.

You say that ratings drift over time to such an extent that using them in comparisons across long time spans is meaningless, yet their analysis determined that chess ratings, as a measure of the intrinsic quality of move choice (which must be highly correlated with playing strength), are stable over several decades, with only some indication that a small amount of deflation has occurred.

Your argument, in comparison, amounts to informal speculation. If I were to share my own, I would say that those potentially error-inducing considerations are statistically insignificant compared to the sheer number of games (that is to say, corrective and informative exchanges of points) that occur. Further, I would add that the absolute values of ratings were defined by the playing strengths of the original pool of players and that this definition has been well preserved even as the player pool has evolved.

I have heard many such arguments in my time, yet not a single proponent cares to demonstrate them. What I find amusing is that those same proponents will often readily accept, without controversy, a comparison of a single player (often themselves) across a similar time span as evidence of that player's progress: for instance, comparing Carlsen’s rating today with one from early in his career, say from 2003 or 2004, which at this point is more than 20 years ago.


I have been into computer chess for many years and I was fully expecting those concessionary statements. I have seen enough programs in this lucrative genre, where a lot of attention can be gained by fraudulently claiming to have implemented chess in a seemingly impossibly small size. When confronted, the charlatans will often senselessly claim that the omissions were in fact superfluous. This is a behaviour I have unfortunately also observed in other areas of computing.

If anyone reading this is interested in small and efficient chess programs that are still reasonably strong, there was an x86 assembly port of Stockfish called asmFish from a couple of years ago (the Win64 release binary was about 130 KiB). Also see OliThink (~1000 LOC) and Xiphos, which has some of the simplest C code I have seen for an engine of its strength. I have not investigated too closely the supposedly 4K-sized engines that participated in TCEC, but from what I have seen so far it would seem that there are a few asterisks to be attached to those claims.
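To give a sense of how small the core of a working (if weak) engine can be, here is a toy sketch of my own: material-only evaluation plus fixed-depth negamax, leaning on the python-chess library for board state and move generation. To be clear, this is not any of the engines named above, and it omits essentials such as quiescence search and proper mate scoring:

    import chess  # the python-chess library: board state and move generation

    # Classical material values in centipawns.
    VALUES = {chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
              chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0}

    def evaluate(board):
        # Material balance from the side-to-move's perspective.
        score = 0
        for piece, value in VALUES.items():
            score += value * len(board.pieces(piece, board.turn))
            score -= value * len(board.pieces(piece, not board.turn))
        return score

    def negamax(board, depth):
        # Fixed-depth search; no quiescence, no proper mate scores.
        if depth == 0 or board.is_game_over():
            return evaluate(board)
        best = -10**9
        for move in board.legal_moves:
            board.push(move)
            best = max(best, -negamax(board, depth - 1))
            board.pop()
        return best

    def best_move(board, depth=3):
        best_score, best = -10**9, None
        for move in board.legal_moves:
            board.push(move)
            score = -negamax(board, depth - 1)
            board.pop()
            if score > best_score:
                best_score, best = score, move
        return best

    print(best_move(chess.Board(), depth=3))  # prints a legal opening move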


> I understood that machine completely.

This claim is frequently made about that era, yet it ignores the fact that the machine was almost certainly running proprietary software.


A $5B market cap would imply a P/E ratio of 1.3 and a P/FCF ratio of 0.8, which would essentially be saying “this business is only worth approximately what it made last year”. The corresponding multiples for other automakers are typically in the high single digits. Even if you believed Tesla’s whole business would collapse tomorrow (i.e. revenue goes to zero), book value is ~$83B and net cash is ~$29B.
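A quick back-of-the-envelope check of those multiples (the trailing figures below are inferred from the stated ratios, not taken from any filing):

    # Implied trailing figures at a hypothetical $5B market cap,
    # derived from the ratios quoted above (P/E = 1.3, P/FCF = 0.8).
    market_cap = 5e9
    earnings = market_cap / 1.3  # ~$3.8B implied trailing earnings
    fcf = market_cap / 0.8       # ~$6.3B implied trailing free cash flow
    print(f"earnings ~${earnings / 1e9:.1f}B, FCF ~${fcf / 1e9:.1f}B")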


Yes, I think that sounds about right.


You may think that sounds right, but I can assure you that convincing others that ~$29B of accessible pure cash, or ~$83B of equity, is really only worth $5B will be a more difficult venture. You can dispute the carrying value of Tesla's assets and liabilities, but cash is cash, which is why I included that metric as a baseline. At the end of the day $29B is worth $29B and nothing else.


Cool, so yeah I'd value them at $29B max (you have to account for all future lawsuits...). Far cry from $1.4T! Lots of downside yet to come.


> YouTube is an absolute clown show. It's so bad that I'm certain Google devs are actively making it terrible on purpose.

Exactly, which is why I thought this was a terrible and meaningless benchmark. It completely obfuscates the actual video playback performance of these machines and is more a measure of how awful and inefficient YouTube is. I am surprised that the author did not remark on this, or even seem to be aware of it.


The latest version of Crafty has a significantly higher rating on CCRL than Fritz 10, the version that defeated Kramnik in 2006, when Kramnik was World Champion and rated about 2750. I do not know what source you used for Crafty’s rating, but ratings from different lists are not comparable. It is highly probable that Crafty running on a Ryzen could defeat any human.

I am also of the opinion that with an optimised program the CRAY-1 would have been on par with Karpov and Fischer. I also think that Stockfish, or some other strong program, running on an original Pentium could be on par with Carlsen. I am not sure whether Crafty’s licence would count as FOSS.


Why does the current design paradigm in image coding formats emphasise supporting as many features as possible in order to have “one image format to rule them all”? You do not see this in audio: does anybody think that Opus and FLAC should be combined into one format? Does the fact that Opus does not support lossless encoding make it worse?


From a user perspective it is nice to know that the person on the decoding end will likely support a given format, both now and in the future.

More use cases for a single popular format make this more likely.


> You can pretty much draw a parallel line with hardware advancement and the bloating of software.

I do not think it is surprising that there is a Jevons paradox-like phenomenon with computer memory, and, as with other instances of it, it does not necessarily follow that this must be the result of a corresponding decline in resource-usage efficiency.


> Over-focus on the highest possible quality

This is not an issue in my view. I like the fact that I can download 100 MiB ultra-high resolution TIFF files of scans of photographs from the original negative from the Library of Congress and 24-bit/96kHz FLAC files of captures of 78 RPM records from the Internet Archive. In addition to maintaining completeness and quality of information, one of the main goals of preservation is to guard against further degradation and information loss. You should try to preserve the highest quality copies available (because they contain more information) and re-encoding (deliberate degradation) should only be used to create convenient access copies.

Inferior copies, in addition to being less informative, have the potential to misinform. Only the archivist will enjoy space savings. All the readers who might consult your library in the infinite future will bear the cost.

> ...(e.g. lossless FLAC). This inflates the file size...

This is entirely the wrong view. The file size of a raw capture compressed to FLAC should be thought of as the “true” or “correct” size: it is roughly the most efficient representation of sampled audio data that we can presently achieve, balancing various trade-offs. In preservation we seek to preserve the item or signal itself, not simply what we might perceive of it. This human-centric, perceptual view is simply wrong. There is data in film photographs that cannot be perceived visually yet can be of interest to researchers and can be revealed with digital image analysis tools.
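Some rough arithmetic behind that “true size” claim (the ~60% compression ratio below is an assumption for illustration; real FLAC ratios vary with the material):

    # Raw PCM data rate for a 24-bit/96 kHz stereo capture, plus a FLAC
    # size estimate assuming a ~60% ratio (assumed, not measured).
    sample_rate, bit_depth, channels = 96_000, 24, 2
    raw_bytes_per_sec = sample_rate * bit_depth // 8 * channels  # 576,000 B/s
    raw_gb_per_hour = raw_bytes_per_sec * 3600 / 1e9             # ~2.07 GB/hour
    flac_gb_per_hour = raw_gb_per_hour * 0.6                     # ~1.24 GB/hour
    print(f"raw {raw_gb_per_hour:.2f} GB/h, FLAC ~{flac_gb_per_hour:.2f} GB/h")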

As an example of how much information celluloid can contain, see: https://vimeo.com/89784677 (context: he is comparing a Blu-ray and a scan of a 35mm print)

