Hacker News | iNic's comments

The number of cattle required to maintain pasture is far smaller than what we have right now. From a CO2 perspective, factory-farmed cattle tend to look a little better than "free-range" cattle, mostly due to reduced land-use change (though it is obviously worse from a cruelty perspective). Finally, we can still have farm animals without eating them!


Can't we explain the low amount of antimatter as a type of anthropic principle? The early universe was super dense, meaning that areas with an imbalance would quickly annihilate and leave only one type of matter. Then, due to rapid expansion, our observable universe is dominated by only one type of matter. If we imagine a universe with a more even mix, it would be less welcoming to life, so we would be less likely to observe it. Has someone modeled something like this?


The anthropic principle doesn't imply that our entire observable universe has to contain only matter.

Why shouldn't we observe clouds of antimatter and matter annihilating millions or billions of light years away? Why does the annihilation have to have happened so early on that we can't see any evidence anywhere?

I think there does need to be an explanation and it can't be an anthropic principle cop out.


Not only is there no evidence for the existence of antimatter in quantities comparable to matter, but there is also no logical necessity for it.

People who entertain the idea of an initial state with equal amounts of matter and antimatter do so because, in that case, all the conserved properties of matter except energy would sum to zero in the initial state.

However, such people forget that particle-antiparticle pairs, which can be generated or annihilated through electromagnetic interactions, are not the only groups of particles whose conserved quantities (other than energy) sum to zero.

The particle-antiparticle symmetry matters only for the electromagnetic interactions; other interactions have more complex symmetries.

All the so-called weak interactions are equivalent to the generation or annihilation of groups of 4 particles for which all the conserved properties except energy sum to zero. Such a group of 4 particles typically consists of a quark, an antiquark, a charged lepton or antilepton, and a neutrino or antineutrino.

For instance, the beta decay of a neutron into a proton is equivalent to the generation of 4 particles: a u quark, an anti-d quark, an electron, and an antineutrino. The electron and the antineutrino fly away, while the anti-d quark annihilates a d quark, so the net effect for the nucleus is the change of a d quark into a u quark, which transforms a neutron into a proton.
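As a quick sanity check of the claim that the conserved quantities (other than energy) sum to zero for this group of 4 particles, here is a small sketch using the standard charge assignments (the dictionary layout and variable names are mine):

```python
from fractions import Fraction as F

# (electric charge, baryon number, lepton number) for the four particles
# generated in beta decay, using standard values
particles = {
    "u quark":      (F(2, 3),  F(1, 3),  0),
    "anti-d quark": (F(1, 3),  F(-1, 3), 0),
    "electron":     (F(-1),    F(0),     1),
    "antineutrino": (F(0),     F(0),    -1),
}

charge_sum = sum(q for q, b, l in particles.values())
baryon_sum = sum(b for q, b, l in particles.values())
lepton_sum = sum(l for q, b, l in particles.values())
print(charge_sum, baryon_sum, lepton_sum)  # 0 0 0
```

Each column sums to zero, which is what allows the whole group to be generated "from nothing" (energy aside).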

The generation and annihilation of groups of 4 particles in the weak interactions are mediated by the W bosons, but this is a detail of the mechanism of the interactions: it is necessary for computing numeric values, but not for explaining the global effect of the weak interactions, for which the transient existence of the intermediate W bosons can be ignored.

So besides the symmetry between a particle and an anti-particle, we have a symmetry that binds certain groups of 4 quarks and leptons.

There is a third symmetry, which binds groups of 8 particles. For instance, there are 3 kinds of u quarks, 3 kinds of d quarks, electrons, and neutrinos: a total of 8 particle kinds that belong to the so-called first generation of matter particles (i.e. the lightest such particles).

All the conserved quantities except energy sum to zero for this group of 8 particles. The neutrino is necessary in this group so that the spin also sums to zero, not only the electric charge and the hadronic charge.
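A small check of this claim for the quantities that can be verified directly: the electric charges of the 8 first-generation particles, and the combination B − L (baryon number minus lepton number), which is the conserved "matter charge" usually considered in such scenarios. (The grouping into a list is my own illustration, not notation from the comment.)

```python
from fractions import Fraction as F

# Electric charges of the 8 first-generation particles: three color
# states of the u quark, three of the d quark, the electron, and the
# neutrino.
charges = [F(2, 3)] * 3 + [F(-1, 3)] * 3 + [F(-1), F(0)]
print(sum(charges))  # 0

# B - L for the same group: +1/3 per quark, -1 per lepton.
b_minus_l = [F(1, 3)] * 6 + [F(-1)] * 2
print(sum(b_minus_l))  # 0
```

Both sums vanish, consistent with the idea that such a group of 8 ordinary-matter particles could appear in the initial state with no antimatter needed to balance the books.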

These 8 kinds of particles are exactly those that are supposed to have composed, in equal quantities, the matter of the Universe at the Big Bang.

So all the conserved quantities except energy sum to zero for the Universe at the Big Bang, when it was composed entirely of ordinary matter, without any antimatter.

Therefore there is no need for antimatter in the initial state.

There is no known reason for this symmetry between the 8 particles of a generation of quarks and leptons, except that this allows for the initial state at the Big Bang to have a zero sum for the conserved properties.

It can be speculated that this symmetry might be associated with a supplementary hyper-weak interaction, in the same way as the symmetry between certain groups of 4 quarks and leptons is associated with the weak interaction. Such an interaction would allow the generation and annihilation of ordinary matter, without antimatter, but with an extraordinarily low probability.


Follow up question. How do we know that some distant galaxy we are observing isn't made up entirely of anti-particles? Wouldn't it behave identically?


This is very helpful thanks!


I have the same thing for the red-blue illusion [1]. With glasses I see this effect extremely strongly, and without them it is barely perceptible.

[1] https://en.wikipedia.org/wiki/Chromostereopsis


This might be pedantic, but I think "happy" is not the right metric. I would love to run a survey that instead asks "are you unhappy?" or "are you content?"


I didn't know the sofa problem had been resolved. Link for anyone else: https://arxiv.org/abs/2411.19826


Discussion at the time of publication: https://news.ycombinator.com/item?id=42300382


Still not peer-reviewed


Yes, in numpy we also have that `np.float64(np.nan) != np.float64(np.nan)` evaluates to True.
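For anyone who wants to see this in action, a quick sketch (variable names are mine) showing that NaN compares unequal to itself per IEEE 754, both for numpy scalars and plain Python floats:

```python
import math

import numpy as np

x = np.float64(np.nan)
print(x != x)       # True: IEEE 754 NaN compares unequal to itself
print(x == x)       # False
print(np.isnan(x))  # True: the reliable way to detect NaN

# Plain Python floats behave the same way
print(math.nan != math.nan)  # True
```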


On average. But it wasn't that uncommon for people to reach 100 years of age even 500 years ago. The biggest impact on lifespans was hygiene, not medicine (except maybe antibiotics).


Vaccines dramatically reduce childhood mortality, which significantly skews mortality figures.


For this reason, "life expectancy" doesn't tell you much. "Life expectancy at age 5" tells much, much more about how adults fare.


At the time, getting complete sentences was extremely difficult! N-gram models were essentially the best we had.


No, it was not difficult at all. I really wonder why they have such a bad example here for GPT1.

See for example this popular blog post: https://karpathy.github.io/2015/05/21/rnn-effectiveness/

That was in 2015, with RNN LMs, which in that blog post are all much, much weaker than GPT1.

And already looking at those examples in 2015, you could maybe see the future potential. But no one was thinking that scaling up would work as effectively as it does.

2015 is also by far not the first time we had such LMs. Mikolov was doing RNN LMs in 2010, Sutskever in 2011. You might find even earlier examples of NN LMs.

(Before that, state-of-the-art was mostly N-grams.)


Thanks for posting some of the history... "You might find even earlier examples" is pretty tongue-in-cheek though. [1], expanded in 2003 into [2], has 12466 citations, 299 by 2011 (according to Google Scholar which seems to conflate the two versions). The abstract [2] mentions that their "large models (with millions of parameters)" "significantly improves on state-of-the-art n-gram models, and... allows to take advantage of longer contexts." Progress between 2000 and 2017 (transformers) was slow and models barely got bigger.

And what people forget about Mikolov's word2vec (2013) was that it actually took a huge step backwards from the NNs like [1] that inspired it, removing all the hidden layers in order to be able to train fast on lots of data.

[1] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, 2000, NIPS, A Neural Probabilistic Language Model

[2] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, Christian Jauvin, 2003, JMLR, A Neural Probabilistic Language Model, https://www.jmlr.org/papers/volume3/bengio03a/bengio03a.pdf


N-gram models had been superseded by RNNs by that time. RNNs struggled with long-range dependencies, but useful n-grams were essentially capped at n=5 because of sparsity, and RNNs could do better than that.
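The sparsity problem is easy to demonstrate on a toy corpus (the sentence below is invented for illustration): as n grows, almost every n-gram occurs exactly once, so the counts stop carrying useful statistics.

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Count all n-grams (as tuples) in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

tokens = "the cat sat on the mat and the dog sat on the rug".split()
for n in (2, 5):
    counts = ngram_counts(tokens, n)
    singletons = sum(1 for c in counts.values() if c == 1)
    print(f"n={n}: {len(counts)} distinct n-grams, {singletons} seen only once")
```

Even in this tiny corpus, some bigrams repeat while every 5-gram is unique; on real corpora the effect is what makes backoff and smoothing necessary, and it is why n-gram models could not exploit longer contexts the way RNNs could.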


Very nice, I've been wanting to build something like this myself but haven't gotten to it. The coffee shop mode is great! My biggest feature request would be changing the font and cursor. The blinking cursor is both distracting and unnecessary as you should assume that you are at the end anyway (since you shouldn't edit)!


noted, thanks!

I'm VERY conservative with adding new UI elements, especially those introducing new possible sources of distractions, so I might hide it behind a bunch of menus. That said, I've spent ages yak shaving / working on those problems already :)


I really want a fixed-width font. I know most people dislike writing prose with monospace fonts. But I'm a developer, and proportional fonts always feel wrong.


I'm in the same camp as you; however, when you're writing technical documents, you need both, and Inter [0] is a really nice proportional font.

[0]: https://rsms.me/inter/


Well, talk to a script writer; they only write in the Courier typeface.


Very impressive, but it still has the same problem as seemingly every voice mode I have tried: the Cantonese voice has a Mandarin accent, and sometimes it just straight up uses Mandarin pronunciations.


I have this complaint too! I was impressed that they included Cantonese, but it's frustrating that I don't know when its pronunciation/accent is off. Have you found any other tools that work well for learning Cantonese as an English speaker?


Sadly there isn't one perfect resource. I find Hambaanglaang kind of useful. The Complete Cantonese books are good. And I have just started making flash cards. But I am still just a beginner, so take it with a huge grain of salt!


Thanks. I'll check that out. Glossika has some good sentences you can use as flashcards.

