

I believe their target price is $20k. No idea if it costs that right now, I doubt it.


I did see that.

It'd be interesting if one of the Tesla car teardown YouTubers (e.g., Munro, Engineering Explained) did a first-principles teardown of the Optimus humanoid robot.



Matt Levine’s writing on this is right along these lines; it draws the distinction between liquidity problems and solvency problems. Worth a read. https://www.bloomberg.com/opinion/articles/2022-11-10/ftx-is...


What are your thoughts on the Fluent Forever method, specifically the claim that going through the effort of creating your own study materials is an integral part of recall and internalizing new words/knowledge? It's definitely attractive to have a lot of the "manual work" automated, but maybe that effort is necessary to cross the chasm of intermediate language learning. Perhaps most people stay at intermediate levels not for lack of a tool, but because the effort required to go from intermediate to fluent acts as a natural filter.

I've been "stuck" in intermediate Japanese for years now after being on and off multiple times. Got to 1.2k kanji, "ok" grammar and ~3k vocab words. Perhaps something like this is what's needed. I've been wanting an "instructor" that can do this sort of indexing for all sorts of content like TV shows, movies, books, articles, etc.


I think that conflates doing anything that might help with doing the most valuable things. As a concrete example, if you looked up 25k words once each in a dictionary at 10 seconds per lookup (and that's speedy for a digital or paper dictionary), it would cost you nearly 70 hours of looking things up. You'd be hard-pressed to convince me that getting very good at finding stuff in a reference directly improves my language skills.

The intermediate plateau is because of Zipf's law. In a 300-page book, there are ~5500 unique words and ~3000 of them occur once or twice. This isn't a big deal for native speakers because a 300-page book is about 100k words (1 day's worth of content), but for a language learner, that might take weeks or months to cover. To go further, that native speaker will probably encounter those words again in ~40 days, but it might be years before that learner re-encounters all of them (having long since forgotten them).

Your time is best spent focusing on the sentences (30% of the book) that contain those 3000 words because they use almost all of the rest of the words.
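
As a rough illustration of the mechanics (naive whitespace/regex tokenization for clarity; a real tool needs proper lemmatization, especially for Japanese):

    // Sketch: keep only the sentences containing the book's rare words
    // (those occurring once or twice). Naive tokenization for clarity;
    // a real tool needs lemmatization, especially for Japanese.
    function rareWordSentences(text: string, maxCount = 2): string[] {
      const sentences = text.split(/(?<=[.!?])\s+/);
      const tokenize = (s: string) => s.toLowerCase().match(/[a-z']+/g) ?? [];

      const freq = new Map<string, number>();
      for (const s of sentences) {
        for (const w of tokenize(s)) freq.set(w, (freq.get(w) ?? 0) + 1);
      }

      // A sentence is worth studying if it contains at least one rare word.
      return sentences.filter((s) =>
        tokenize(s).some((w) => (freq.get(w) ?? 0) <= maxCount)
      );
    }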


> Your time is best spent focusing on the sentences (30% of the book) that contain those 3000 words because they use almost all of the rest of the words.

This seems to assume: (i) that readers of a 300-page book in a foreign language (not typically a beginner task!) are choosing to do so primarily as a means to the end of learning/remembering unfamiliar words, and not because they want to understand the content of the book itself, develop their appreciation of literary phrasing, challenge themselves, etc., and (ii) that focusing on a [probably disjointed] subset of the sentences in the book won't deprive the reader of the necessary context to grok sentences even when the words are familiar. I'm not sure either is generally true.

Ultimately, the alternative to using machine-selected sentences isolated from long-form text for learning new words or fill-in-the-blanks exercises is using definitions and exercises specifically constructed to be accessible and relevant to language learners. The only obvious case where I can see the ML process generating more useful examples is if the language learner's needs are skewed heavily towards absorbing the sort of specialist technical/professional vocabulary conventional learning courses don't cover.

I also think that picking up common and uncommon idiomatic phrases would be at least as important as individual words (though this is definitely something an ML tool could aid with).


This scales up and down freely. I chose 300 pages because that is ~100k words, or about the amount a native speaker processes daily.

My process is just the opposite of (i). I want to read and understand a book, so I want something to show me where my deficiencies are. Then, when I read the page, chapter, etc., it will go much more smoothly. This is also an iterative process where I'm constantly going back and forth between studying new words in a section and trying to read the section.

For (ii), every exercise or review sentence has a link directly back to the source material. I am playing with the idea of extending the context to +/- N sentences when showing an exercise as well.

> using definitions and exercises specifically constructed to be accessible and relevant to language learners

This is the prescriptivist view of language learning and how all classes and textbooks are created. It can be useful, especially at the earliest stages of learning a language. Still, I mostly reject it because when using a language, I have very little control over the content I have to consume. I don't get to choose how an article is written or how someone speaks to me. So the sooner I address that as a language learner, the faster I will become comfortable with arbitrary content.

Phrases are great, I agree. I don't have a vision of how that could work technically so it's just in the pile of ideas I'd love to do eventually.


> For (ii), every exercise or review sentence has a link directly back to the source material. I am playing with the idea of extending the context to +/- N sentences when showing an exercise as well.

This would be a good idea, but my point is more that meaning in general writing (as opposed to writing specifically constructed to be self-explanatory) is inferred from structure and callbacks to the words or tone of much earlier sections. Language learners do have to handle passages of text which aren't written with ease of comprehension in mind, but they don't have to try to fill in the blanks for "As seen in the previous chapter, x is an example of ______" without reading the references to x in the previous chapter first. That's often an impossible task even for native speakers. Similarly, people are much more likely to correctly guess the meaning of a word describing a character's emotional state (or internalise the meaning after looking it up) if they followed the narrative six pages earlier which provided the context for that emotional state. Not stripping that context, or algorithmically isolating the sentences in a piece which don't require context to fully understand, is a tough challenge.


Ah, okay. I understand now.

Yes, that can happen, but in my experience it is rare. Also, because this is focused on learning the language, I can give hints/affordances like the word or definition in the user's native language, so all the user needs to do is produce the word in the target language with the right conjugation.


If you're looking for something like this for Japanese, Kou has been doing great work on https://jpdb.io/, which implements many of the concepts OP's app is trying to accomplish.

In particular, it has the SRS, parallel text for words, assisted study, and the % of content you already know.


Yeah, I'm familiar with this and it's a great resource.

The key difference is that it is still flashcard-based. And you have to choose between word cards and sentence cards, which conflate the context with the word being studied.

By actually parsing the text, when you are studying the word "eat" you can draw on sentences like: "Where are we going to eat?", "I wanna eat a burrito the size of my head!", "No way. You just ate that ice cream cone." That unlocks a few things (a rough sketch follows this list):

* You separate the target of studying from the context/presentation. You can create an exercise from any of the many sentences using eat, ate, eating, etc. And change examples for every study repetition.

* You can focus on individual conjugations or overall knowledge of the word.

* You have an objective right/wrong signal instead of a subjective 0-4 scale.

* You can attach definitions to not only the target word but all of the words in the sentence on demand.

* You can track learning across all 25 of the words from my examples simultaneously. And those sentences can be used to generate exercises for any one of those 25 words.
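
A rough sketch of what I mean, with a hand-written form list standing in for a real morphological parser:

    // Toy sketch: generate cloze exercises for one lemma ("eat") from
    // parsed sentences. Any inflected form can be blanked out, and the
    // same sentence pool serves every other word it contains.
    interface Exercise {
      prompt: string; // sentence with the target form blanked out
      answer: string; // exact surface form the learner must produce
      lemma: string;  // dictionary form being studied
    }

    function clozeFor(lemma: string, forms: string[], sentences: string[]): Exercise[] {
      const exercises: Exercise[] = [];
      for (const sentence of sentences) {
        for (const form of forms) {
          const re = new RegExp(`\\b${form}\\b`, "i");
          const match = sentence.match(re);
          if (match) {
            exercises.push({ prompt: sentence.replace(re, "_____"), answer: match[0], lemma });
          }
        }
      }
      return exercises;
    }

    clozeFor("eat", ["eat", "eats", "ate", "eaten", "eating"], [
      "Where are we going to eat?",
      "I wanna eat a burrito the size of my head!",
      "No way. You just ate that ice cream cone.",
    ]);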


Hi. I'm the dev behind jpdb.

> The key difference is that it is still flashcard based.

Oh, I have a lot more ambitious plans than just flashcards. (: It's just that you have to start somewhere, and flashcards are basically the current gold standard. No need to immediately throw the baby out with the bathwater, especially since huge chunks of code can be reused for other forms of learning.

> You have an objective right/wrong signal instead of a subjective 0-4 scale.

I also have this as an option (pass/fail mode). The con is that it lowers the quality of SRS predictions, so you end up having to do more reviews. But depending on how much time you gain by not having to think about how to grade, it can become a net improvement, so it's a tradeoff. I haven't had the time to measure it precisely yet.
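
To make the tradeoff concrete, here's a generic SM-2-flavored sketch (not jpdb's actual scheduler): the 0-4 grade lets the scheduler adjust a per-card ease factor on every review, while pass/fail throws that signal away:

    // Generic SM-2-flavored sketch (not jpdb's actual scheduler).
    interface CardState {
      interval: number; // days until the next review
      ease: number;     // per-card growth multiplier
    }

    // Graded: each review nudges the ease factor, so intervals adapt
    // to how well the user actually knows this particular card.
    function reviewGraded(s: CardState, grade: 0 | 1 | 2 | 3 | 4): CardState {
      if (grade < 2) return { interval: 1, ease: Math.max(1.3, s.ease - 0.2) };
      const ease = Math.max(1.3, s.ease + 0.1 * (grade - 3));
      return { interval: Math.round(s.interval * ease), ease };
    }

    // Pass/fail: every pass is treated identically, so the scheduler
    // learns less per review and predicts intervals more coarsely.
    function reviewBinary(s: CardState, pass: boolean): CardState {
      return pass
        ? { ...s, interval: Math.round(s.interval * s.ease) }
        : { ...s, interval: 1 };
    }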

> And change examples for every study repetition.

For what it's worth, I have that too. (:

> You can focus on individual conjugations or overall knowledge of the word.

In Japanese I feel this would be counterproductive, since essentially all conjugations are regular. This means that once you're past the beginner stage, you just want to learn a word without wasting time on conjugations. But for a beginner, sure. And if you want to make any money, targeting beginners is the way to go.


Sounds great. If you're up for it, I'd love to chat with you about jpdb. If nothing else, it sounds like we'd have a great conversation about the challenges and ideas behind making a language learning tool.


I started reading Crafting Interpreters [1]. It's great so far!

[1] https://craftinginterpreters.com/


Oh super interesting! This goes onto the reading list.


The old monotonically growing reading list. I must figure out a way to organize mine better.


Easy! Make another list containing only the _excellent_ reads.


Hope you are having a great journey with it!


Since segments are determined by folders, you should be able to do `/layout/page.js` for a `/layout` route.
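
Roughly this structure, if I'm reading the Layouts RFC convention right (a sketch, not verified against a released version):

    app/
      layout/        <- folder name becomes the URL segment
        page.js      <- rendered at /layout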


I agree 100%. It can’t just be a UI on top of CSS. At that point, just write the HTML and CSS...


To me that's the best part: they haven't abstracted away HTML/CSS, so you have control over everything, just like you would if you built a site from scratch.

Creating HTML/CSS through a UI that updates in real time is just so much faster and a better experience, IMO. It's a great example of "Inventing on Principle"; see this great Bret Victor talk: https://vimeo.com/36579366

That said, coding from scratch, while slower than using Webflow, isn't the real pain point for me. But I can see how someone less technical might see that as the real value.

The real hassle in building static sites from scratch is the nightmare of NPM, git, bad static site generator defaults, complicated headless CMS configurations, and fragile dependencies.


I know the HN comments section is notorious for criticizing solutions they don't like because they seem too complex or unnecessary for them [1] but I'm honestly surprised that's happening with Next.js/Vercel here.

So many are saying that this could've been a static site distributed by a CDN. That's what a Next.js app on Vercel does by default! Unless you use `getServerSideProps`, Next.js will create a static site at build time and when deployed to Vercel it will be hosted on their CDN. Spending just a bit of time to understand why Next.js is so popular would've made that clear to most people here. But I guess that's not as easy as just criticizing it for bloat without knowing how it works and moving on?

[1] https://news.ycombinator.com/item?id=9224
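
To make that concrete, a minimal sketch (a hypothetical page, not from the site in question):

    // pages/about.tsx -- hypothetical page. With getStaticProps (or no
    // data fetching at all), Next.js renders this to static HTML once at
    // build time and Vercel serves it from the CDN edge. Only
    // getServerSideProps forces per-request server rendering.
    export async function getStaticProps() {
      return { props: { builtAt: new Date().toISOString() } };
    }

    export default function About({ builtAt }: { builtAt: string }) {
      return <p>This HTML was generated once, at build time: {builtAt}</p>;
    }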


JavaScript aside, more things can and should be static sites.

(More things should render their basic, static-site content without having to execute JS, too.)

The web would be a lot better if this were the case.


I believe you can still do this in Next.js with some extra work to exclude the JS bundle. For example, you can use a custom `_document` page and exclude the `NextScript` component [1]. It would be nice if this was supported as a first-class concept, though.

[1] https://nextjs.org/docs/advanced-features/custom-document
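
An untested sketch of what that might look like (note that dropping the script also disables hydration, so any client-side interactivity goes with it):

    // pages/_document.tsx -- custom document that omits <NextScript />,
    // so the prerendered HTML ships without the client JS bundle.
    import Document, { Html, Head, Main } from "next/document";

    class NoScriptDocument extends Document {
      render() {
        return (
          <Html>
            <Head />
            <body>
              <Main />
              {/* <NextScript /> deliberately omitted */}
            </body>
          </Html>
        );
      }
    }

    export default NoScriptDocument;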


It was a static site. That's what Vercel and Next.js do by default: if you don't use `getServerSideProps`, Next.js generates static pages at build time and Vercel serves them from its CDN.

