The canonical Q/A pair "Why does the Sun shine?" / "Fusion in its core" perhaps contributes confusion here, with the question silently swapped out for "Why is the Sun still shining after 4+ Gyr?". You're primed for a close connection between core and surface photons. Asking "Why is there fog over the uncovered corner of the pool?", one seems unlikely to appreciate "the fog comes from a small aquarium heater somewhere on the bottom!" (IIRC the magnitudes). "The Sun is hot, and hot things glow" creates less of that association between core and light.
> You could calculate how long it would take to notice anything if the core suddenly stopped fusing.
FW(little)IW (very not my field, just AI, quick&sloppy), for a Sun magically switched to contraction-dominated heating, I'm ballparking order 10^6-10^7 yr for a 1% increase in surface temp, with core contraction dynamics being just one uncertainty.
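For scale, a quick sketch of the classic Kelvin-Helmholtz timescale, the usual yardstick for contraction-powered luminosity; the constants are standard solar values, and the order-unity prefactor is glossed over. It lands just above the range quoted, consistent with the sloppiness:

```python
# Kelvin-Helmholtz timescale: how long gravitational contraction
# could power the Sun's current luminosity. t_KH ~ G M^2 / (R L).
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30    # solar mass, kg
R = 6.957e8     # solar radius, m
L = 3.828e26    # solar luminosity, W

t_kh_s = G * M**2 / (R * L)        # ~1e15 seconds
t_kh_yr = t_kh_s / 3.156e7
print(f"t_KH ~ {t_kh_yr:.1e} yr")  # ~3e7 yr
```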
> nowadays Google AI Overview's accuracy is so good
I felt similarly yesterday. This morning AIMode fabricated for me a diverse science education publishing and research effort around using generative AI to teach rough-quantitative reasoning. My face-nibbling canary was a linked cite to a book "Orders of Magnitude"... a sci-fi horror novella about space marines. Would be nice if the outlined work actually existed. There were some nice ideas. I look forward to it. The education stuff, not the space marines.
>> I really wish they'd cite the original Japanese.
Given the Japanese above, translate.google can do text-to-speech[1], and goog AIMode[3] and bing/chat[2][4] can give multiple translations with notes.
But finding that Japanese, given only TFA's description? I only saw AIMode manage that, not vanilla search. Perhaps using the author's Japanese Wikipedia page[5], or perhaps here, or?
> On the wiki, I have decided to expand Section 2(d), "AI tools used to rewrite an existing argument" to also include cases such as the one here, in which the AI tool indirectly caused the argument to be rewritten in a substantive way (beyond mere typos etc.) by identifying a non-trivial mathematical issue in the previous version of the text, which was then fixed by a human author.
Yesterday IMG tag history came up, prompting a memory-lane wander. It reminded me that in 1992-ish, before the `www.foo` convention, I'd create DNS pairs, foo-www and foo-http: one for humans, and one to sling sexps.
I remember seeing the CGI (serve a URL from a script) proposal posted, and thinking it was so bad (e.g. the 256-ish character URL limit) that no one would use it, so I didn't need to worry about it. Oops. "Oh, here's a spec. Don't see another one. We'll implement the spec," says everyone. And "no one is serving long URLs, so our browser needn't support them". So no big query URLs during that flexible early period where practices were gelling. Regret.
Not the person you're responding to, but I think they mean sexps as in S-expressions [1]. These are used in all kinds of programming, and they have been used inside protocols for markup, as in the email protocol IMAP.
Yes. Not quite a decade before JSON and YAML, what was at hand for a human-readable interchange format for nested data? SGML (no XML yet), something FORTH-ish, make up your own thing, and...? Contemporary WAIS (search as a distinct non-HTTP protocol) shrugged off human-readable and tried nightmarish binary ASN.1.
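For the unfamiliar, part of the appeal: a reader for sexp-encoded nested data is a few lines in anything. A minimal Python sketch (the data is made up for illustration):

```python
# Minimal S-expression reader: parses nested lists of atoms.
def tokenize(text):
    return text.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        lst = []
        while tokens[0] != ")":
            lst.append(parse(tokens))
        tokens.pop(0)  # consume the ")"
        return lst
    try:
        return int(tok)  # crude: numbers become ints, all else symbols
    except ValueError:
        return tok

reply = "(person (name Alice) (langs (en ja)))"
print(parse(tokenize(reply)))
# ['person', ['name', 'Alice'], ['langs', ['en', 'ja']]]
```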
There's an old idea of adaptive media. Imagine a video drama that's composed of a graph of clips, like an old "choose your own adventure" book ("Do you X? If yes, goto page 45"). With gaze tracking, one can go "hmm, the viewer is more focused on character A than B... so we'll serve clips and subplots with more A".
Now, when reading, the eye moves in little jumps - saccades. They last tens of ms, the eye is blind during them, and with high-quality tracking you know quite early just where that foveal peephole is going to land. So handwave a budget of a few ms for trajectory analysis, a few more for 200 Hz rendering latency, and you still have 10-ish ms to play with. At 20k tok/s, that's 200 tok.
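Spelling out that budget (every number here is a handwaved assumption, per the above):

```python
# Handwaved latency budget for JIT-generating text during a saccade.
saccade_ms  = 20.0    # assumed saccade duration (they run ~20-80 ms)
analysis_ms = 3.0     # assumed time to predict the landing point
render_ms   = 5.0     # one frame at 200 Hz
budget_ms   = saccade_ms - analysis_ms - render_ms
tok_per_s   = 20_000  # assumed decode speed
print(f"{budget_ms:.0f} ms -> {tok_per_s * budget_ms / 1e3:.0f} tokens")
# 12 ms -> 240 tokens, the same order as the 200 above
```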
So perhaps one might JIT the next sentence, or the topic of the next paragraph, or the entire nature of the document, based on the user's attention. Imagine a universal document: you start reading, and you find the document is about... whatever you wanted it to be about?
Hmm... TikTok has apparently long had "text enhanced with background" genres, and, TIL, text posts since 2023. So text is ok. But non-independent items? For generative storytelling, "here is a next paragraph for the story", swipe left/right might work? Want to avoid "I don't much like this new paragraph, but I'm afraid to lose it and be stuck with something worse". Swipe left/right, and up for continue? Swipe down to revisit old choices? Maybe present new text bolded, appended to old text, for context. Or a "next page of a picture book" idiom. A text field for direct creative or editorial intervention - speech-to-text. Maybe a side-channel input for "story and background should now be soporific". Generative bedtime stories, but incrementally and collaboratively created... Thanks for the brainstorming prompt.
> The right approach would have been to select a color appearance model (CIECAM02 is the standard), convert all our colors to this coordinate system, do the mixing in this coordinate system and then convert back to RGB. That being said, I did not want to deal with all the extra complexity that would have come along with this. Instead, I opted for a much simpler approach.
Python's nice `colour` package supports several color appearance models.[1]
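For instance, a rough sketch of the forward CIECAM02 conversion, using the sample values from colour-science's own docs (the viewing-condition numbers below are from those docs, not from TFA); mixing would then happen on the (J, M, h) correlates before converting back with `colour.CIECAM02_to_XYZ`:

```python
# pip install colour-science
import numpy as np
import colour

XYZ   = np.array([19.01, 20.00, 21.78])    # sample color, CIE XYZ (0-100)
XYZ_w = np.array([95.05, 100.00, 108.88])  # reference white
L_A, Y_b = 318.31, 20.0                    # adapting luminance, background

cam = colour.XYZ_to_CIECAM02(XYZ, XYZ_w, L_A, Y_b)
print(cam.J, cam.M, cam.h)  # lightness, colourfulness, hue correlates
```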
But I'm glad for the ground-truthy approach taken. I'd suggest a pattern: interesting data being unavailable because it doesn't align with incentives around science or commerce. Often it exists, just sitting on someone's disk, because they think no one is likely to care.
> In his later years, Tinney grew philosophical about the future of illustration as a profession, noting that stock image databases had changed the economics of the field. But he remained upbeat about the value of artistic talent, comparing it in that 2006 interview to the skill of public speaking: “It’s a nice talent to have, but it isn’t easy to find someone who’ll pay you just to do it. You need to combine that basic talent with another skill to really have a marketable service.”
Perhaps something to ponder as AI stirs up what constitutes a marketable service.
"To whom have I given blue from my sunbeam?!?" might be another fun question. I explored it as a potential interactive, to see, geographically, where your direct sunlight is donating sky blue-ification. Especially around sunset - IIRC, think a 100 km neon tube, at 7ish km altitude, near-end 150 km up range, with a 15-ish km wide ground footprint with 3/4-ish of the ground-impinging light, and the rest of a 100 km wide path with the 1/4-ish.
Does that confuse sales staff when shopping for clothes? ... :) The general observation being that educational descriptions of things sometimes get hedges which wouldn't usually be applied in everyday life. Yes, the nice red shirt will look black under some lighting, like some meters underwater... but that usually isn't mentioned. Yet, for example, colors of unfamiliar objects in educational content can get an "appears" hedge - "it appears white" - rather than the more usual, simpler concept of "if its light looks white, it's 'white'".