
local laws forbidding facial recognition tech have never been wiser

I was somewhat surprised to learn that phi is _merely_ (1 + √5)/2. I didn't have a good conception of what it was at all, but I didn't think it was algebraic.


Phi is conceptually defined like so:

    Suppose you have a rectangle whose side length ratio is ϕ. You draw a line across the rectangle which divides it into a square and another rectangle.

    Then the side length ratio of the new, smaller rectangle is also ϕ.
The diagram is straightforward to set up:

       a        b
    +-----+--------+
    |     |        |
    |  ϕa-|        |
    |     |        |-b
    |     |        |
    +-----+--------+
     \            /
      -----  -----
           \/
           ϕb
This gives us a system of two equations:

    ϕa = b
    ϕb = a + b
If you substitute b = ϕa into the other equation, you get

    ϕ(ϕa) = a + ϕa
And since a is just an arbitrary scaling factor, we have no problem dividing it out:

    ϕ² = 1 + ϕ
Since we defined ϕ as a ratio of side lengths, it must be positive, so it is the positive solution to this equation and not the negative one.
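The derivation above can be checked numerically in a couple of lines of plain Python (nothing assumed beyond the quadratic ϕ² = 1 + ϕ and its positive root):

```python
import math

# Positive root of x^2 = 1 + x, i.e. x^2 - x - 1 = 0, via the quadratic formula
phi = (1 + math.sqrt(5)) / 2

# The defining property from the derivation: phi^2 = 1 + phi
assert math.isclose(phi**2, 1 + phi)

# The rectangle property: a rectangle with sides 1 and phi, minus a unit
# square, leaves a rectangle with sides (phi - 1) and 1 -- same ratio phi.
assert math.isclose(1 / (phi - 1), phi)
```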

(Side note: there are two styles of lowercase phi, fancy φ and plain ϕ. They have their own Unicode points.

HN's text input panel displays ϕ as fancy and φ as plain. This is reversed in ordinary text display (a published comment, as opposed to a comment you are currently composing). And it's reversed again in the monospace formatting. (Which matches the input display.)

The ordinary text display appears to be incorrect, going by the third usage note at https://en.wiktionary.org/wiki/%CF%95 )
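For reference, the two codepoints can be pulled straight out of the Unicode database; which *shape* each one renders as is entirely up to the font, which is the whole problem here:

```python
import unicodedata

# The two lowercase phis, named by the Unicode character database:
for ch in ("φ", "ϕ"):
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# U+03C6 is GREEK SMALL LETTER PHI, U+03D5 is GREEK PHI SYMBOL.
# The standard intends U+03D5 as the "straight" mathematical form,
# but a noncompliant font can (and here apparently does) swap the glyphs.
```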


> HN's text input panel displays ϕ as fancy and φ as plain. This is reversed in ordinary text display (a published comment, as opposed to a comment you are currently composing). And it's reversed again in the monospace formatting. (Which matches the input display.)

I'm glad you posted this. I'm not a Unicode expert and have always assumed these weird dichotomies were some sort of user/configuration error on my part. Realizing the glitches are actually at the website end instead of between my ears is quite a relief.


To be more specific, that usage note strongly suggests that the problem is in the font used by HN. The font is what complies or doesn't comply with the Unicode standard. We can also say that HN has a problem, but HN's problem is "they're using a noncompliant font for monospaced text".

(On further investigation, I got the characters backwards, and HN's ordinary display is correct while the monospaced display isn't.)


What's stopping you at pasting only a single file? I use the workflow Elon suggests (though I've never used it with Grok) predominantly; it's well over 30% of my LLM use. I have a small piece of python called "crawlxml" that filters + dumps files into <file> tags. And of course the LLM doesn't need your actual code in its context to do its job.
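The comment doesn't show crawlxml itself, so here's a minimal sketch of that kind of tool: walk a directory, filter by extension, and wrap each file in <file> tags for pasting into a prompt. The name, the extension filter, and the exact tag shape are all guesses, not the real script:

```python
#!/usr/bin/env python3
"""Sketch of a crawlxml-style dump (hypothetical; filters and tag
attributes are assumptions, not the actual tool)."""
import pathlib
import sys

def dump(root: str, exts=(".py", ".md")) -> str:
    """Walk `root`, keep files whose suffix is in `exts`, and wrap each
    one's contents in a <file path="..."> tag."""
    parts = []
    for p in sorted(pathlib.Path(root).rglob("*")):
        if p.is_file() and p.suffix in exts:
            parts.append(f'<file path="{p}">\n{p.read_text()}\n</file>')
    return "\n".join(parts)

if __name__ == "__main__":
    sys.stdout.write(dump(sys.argv[1] if len(sys.argv) > 1 else "."))
```

Piping the output to the clipboard (e.g. `crawlxml src/ | pbcopy`) gets a whole dependency tree into a chat box in one paste.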


There's no way I'm going to go through my repo dependency tree and paste twenty files into grok one by one.


well, your loss then. clearly your work steps aren’t big enough to benefit from a SotA LLM


My work steps are too big to sit around pasting my repo into a text box every time I have a task. This is why LLM-integrated IDEs are taking off.


everyone in this thread needs to read this paper: https://dl.acm.org/doi/abs/10.1145/3411497.3420225

Where’s Waldo as presented isn’t even a proof of knowledge


I think the Where's Waldo example, while not technically zero knowledge, gives a pretty good intuition of the idea behind it.

It certainly gives a "layperson" example of being able to prove you know something without revealing it, which isn't the whole definition of ZK but is the idea driving it.


Whoa there boss, it’s extremely tough for you to casually assume that there is a consistent or complete metascience / metaphysics / metamathematics happening in the human realm, but then model it with these impoverished machines that have no metatheoretic access.

This is really sloppy work, I'd encourage you to look deeper into how (eg) HOL models "theories" (roughly corresponding to your idea of "frame") and how they can evolve. There is a HOL-in-HOL autoformalization. This provides a sound basis for considering models of science.

Noncomputability is available in the form of Hilbert's choice, or you can add axioms yourself to capture what notion you think is incomputable.

Basically I don't accept that humans _do_ in fact do a frame jump as loosely gestured at, and I think a more careful modeling of what the hell you mean by that will dissolve the confusion.

Of course I accept that humans are subject to the Goedelian curse, and we are often incoherent, and we're never quite sure when we can stop collecting evidence or updating models based on observation. We are computational.


The claim isn’t that humans maintain a consistent metascience. In fact, quite the opposite. Frame jumps happen precisely because human cognition is not locked into a consistent formal system. That’s the point. It breaks, drifts, mutates. Not elegantly, but generatively.

You’re pointing to HOL-in-HOL or other meta-theoretical modeling approaches. But these aren’t equivalent. You can model a frame-jump after it has occurred, yes. You can define it retroactively. But that doesn’t make the generative act itself derivable from within the original system. You’re doing what every algorithmic model does: reverse-engineering emergence into a schema that assumes it.

This is not sloppiness. It’s making a structural point: a TM with alphabet Σ can’t generate Σ′ where Σ′ \ Σ ≠ ∅. That is a hard constraint. Humans, somehow, do. If you don’t like the label “frame jump,” pick another. But that phenomenon is real, and you can’t dissolve it by saying “well, in HOL I can model this afterward.” If computation is always required to have an external frame to extend itself, then what you’re actually conceding is that self-contained systems can’t self-jump, which is my point exactly...


> It’s making a structural point: a TM with alphabet Σ can’t generate Σ′ where Σ′ \ Σ ≠ ∅

This is trivially false. A TM over any fixed alphabet can run a program that simulates a TM whose alphabet includes Σ′, by encoding each symbol of Σ′ as a fixed-length string over its own alphabet.
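The standard encoding trick is easy to demonstrate concretely (a sketch of the idea, not a full TM simulator): each symbol of a larger alphabet Σ′ becomes a fixed-width block over {0, 1}, so a binary machine can manipulate Σ′-strings losslessly.

```python
# Sketch: a machine over a binary alphabet handles a larger alphabet Σ'
# by encoding each Σ'-symbol as a fixed-width block of bits.
SIGMA_PRIME = ["a", "b", "c", "#"]   # the "bigger" alphabet Σ'
WIDTH = 2                            # ceil(log2(len(SIGMA_PRIME))) bits/symbol

def encode(word: str) -> str:
    """Σ'* -> {0,1}*: each symbol becomes its index as a WIDTH-bit block."""
    return "".join(format(SIGMA_PRIME.index(s), f"0{WIDTH}b") for s in word)

def decode(bits: str) -> str:
    """Inverse: read WIDTH-bit blocks back into Σ'-symbols."""
    return "".join(SIGMA_PRIME[int(bits[i:i+WIDTH], 2)]
                   for i in range(0, len(bits), WIDTH))

assert decode(encode("cab#")) == "cab#"   # round-trips exactly
```

A universal TM applies the same idea to simulate any machine over any finite alphabet, which is why "can't generate symbols outside Σ" isn't a real expressiveness limit.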


from my experience washing some mediocre amphetamine cooks,

the fishy smell isn’t characteristic of pure amphetamine; it’s leftover methylamines from synthesis.

vyvanse adds a lysine, but none of these amines are free. it’s odorless as well, but any leftover lysine esters will stink.

it was shit product


semanticscholar does this!


it’s definitely a blog post or article and not a paper; it isn’t structured as a paper and is missing a lot of what you’d expect from one.

and it is so wonderful for it :)


You’re right — I thought it was one of the papers with better UX that has been coming through recently — it’s just a blog post but wow I wish all the papers read like this.


Well, if the exposure isn’t chronic, there might be a hormetic effect? Radiation dosage is the “classical” example of hormesis


i’ve vaped about 3g of DMT in my time, cumulatively. spotify (and other computer UI) looks very nice under the effects. it can supercharge spatial reasoning/visualization powers for a short time. i mostly haven’t met “entities” but it has happened a notable few times. once, an elf told me to “knock it off” after an afternoon of several trips in a row.

did it cure my depression? nah

