Looks incredible! All this needs is a little conversational-AI magic in the background to filter and modulate the content according to plain-English student questions, and it's go time.
Note that this was finished in 2019, so now would be the perfect time for someone to polish it up and expand it to the rest of math! Assuming this is three.js, you could get an open-source file format going for simulations, and even host crowdsourced applications of it to existing popular math textbooks by figure/page number. I mean, linear algebra is cool, but the market for good free geometry education is limitless.
Does anyone know if the big names in math education offer simulations yet, or is it all animations/images/videos still?
EDIT: it's definitely three.js; love the vector chapter. What this needs is true spatial-computing support: not pages with nested simulations, but site-wide (SPA-wide) simulated objects. What if every student in geometry class could have their own simulation on their Chromebook as they read and follow along? I can’t wait.
It really pains me to see someone suggesting adding AI to a book like this. Current AIs are infamously bad at math. The last thing we need is ChatGPT misplacing a minus sign and confusing readers or setting back their understanding by weeks.
I remember a friend reviewing some math before starting grad school was stymied by a typo in her textbook for an inordinate amount of time. It’s really vital that instructional materials avoid errors as much as humanly possible. AI right now ain’t it.
The textbook I studied from came with a huge number of misprints in its formulas. I spent a lot of time hunting for those misprints, and I think it really helped me understand and remember the material.
On my last homework the professor omitted a required assumption, and I nevertheless "proved" the false assertion. Extremely embarrassing. When the same thing happened earlier in the semester, I correctly failed to finish the problem. I'm getting tired, I guess.
Yeah, that was my secret superpower. She was fresh out of engineering school and I was a nerd-school dropout who hadn’t taken a math class in four years (and had never studied the material she was reviewing), but I was able to look at the text with a critical eye, at least in part because I was spending a lot of time on math typesetting and had learned that the math almost never gets properly proofread.
The AI isn't doing math; the AI is curating the textbook material. In the same way that you have a host of different faculties enabling you to excel at everything you excel at, there is more to math (and math pedagogy) than arithmetical consistency.
(Not the person you replied to, but) I just re-read it, and the "canned retort" still looks completely accurate and relevant. Can you elaborate on why you think that AI's (known, admitted, and inherent) propensity for hallucination _wouldn't_ be disastrous in the context of pedagogy?
If the original comment had _just_ proposed to direct students to locations _within_ the original content ("filter"), it would have been less-impactful - being directed to the wrong part of a (non-hallucinated) textbook would still be confusing, but in the "this doesn't look right...?" sense, rather than the "this looks plausible (but is actually incorrect)" sense. But given that the comment referred to "Conversational AI", and to "modulat[ing]" the content (i.e. _giving_ answers, not just providing pointers to the original content), hallucination is still a problem.
Hey, it’s the original commenter! I appreciate you taking my comment seriously enough to analyze, but I think I missed the mark with it: I totally agree that LLMs shouldn’t be giving answers to literal arithmetic problems, or be anywhere near designing the materials (the digital textbooks) themselves.
I was indeed referring mostly to something like filtering, but I think there’s plenty of room for an LLM to help out there. With something as relatively complex as simulation parameters, there’s lots of room for an LLM to support the user’s choices by making changes to machine-readable formats.
Thus the LLM would be “tweaking”, “framing”, or “instantiating” the content without getting near the fundamental signal, which here is the specific pedagogical intent of that diagram in the context of the current lesson. I used “modulate” to express this idea somewhat clumsily; I’d love suggestions from lurkers for a better word!
IMO simulations are hard to justify as embedded content on a pedagogical site precisely because they’re so engaging, which makes them dangerous when close attention to the teacher, problem set, or text is the more important goal. They’d have to be low-cognitive-load enough to use individually during class time, ideally so low they’re practically ambient, and I think LLMs are the only practical path in that direction.
TL;DR: I didn’t mean writing LaTeX pedagogical content; I meant writing JSON objects that do things like highlighting, variants, scaling, and feeding specific equations into a general sim.
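To make the idea concrete, here's a minimal sketch of what such a machine-readable control object might look like. Every field name and action here is an invented assumption for illustration; the real schema would be defined by the simulation's author, not by the LLM.

```typescript
// Hypothetical schema for an LLM-emitted simulation control object.
// The LLM fills in values; the pedagogical content itself is untouched.
interface SimControl {
  action: "highlight" | "scale" | "setEquation"; // whitelisted verbs
  target: string;                                // id of an object in the scene
  params: Record<string, number | string>;       // action-specific arguments
}

// Example: "highlight the second basis vector and double the overall scale"
const controls: SimControl[] = [
  { action: "highlight", target: "basisVector2", params: { color: "#ff0000" } },
  { action: "scale", target: "scene", params: { factor: 2 } },
];
```

Because the verbs are whitelisted by a type like this, the LLM can only ever select among moves the sim already supports, rather than authoring content.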
Oh, fascinating! OK, yeah, I fully misunderstood your intent, then - I thought you were suggesting the LLMs should be summarizing the content in response to queries from students ("How do I find the determinant of a matrix?" // "Well, first you..."), which I think we both agree that they're not ready for (and, while hallucination remains a problem, never will be).
So if I'm understanding it right, your proposal is for the LLM instead to be a "control layer" over the simulation object, so that a student could say something like "what happens if I increase the scale factor by 2?" and the LLM interprets that natural-language request and outputs the simulation-control-variables that correspond with the student's request (and then either feeds them into the simulation directly, or outputs them for the student to read, understand, and input)? Makes sense to me!
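One way to keep that control layer safe is to validate the LLM's output against a fixed parameter whitelist before it ever touches the simulation. This is a sketch under assumed names and ranges (nothing here comes from the original site):

```typescript
// Whitelist of simulation parameters the LLM is allowed to set,
// with hard bounds. Names and ranges are invented for illustration.
const PARAM_LIMITS: Record<string, { min: number; max: number }> = {
  scaleFactor: { min: 0.1, max: 10 },
  rotationDeg: { min: -360, max: 360 },
};

// Parse an LLM-emitted JSON update, dropping unknown parameters and
// clamping values, so a hallucinated field can't reach the sim.
function validateUpdate(raw: string): Record<string, number> {
  const parsed = JSON.parse(raw) as Record<string, unknown>;
  const safe: Record<string, number> = {};
  for (const [key, value] of Object.entries(parsed)) {
    const limits = PARAM_LIMITS[key];
    if (!limits || typeof value !== "number") continue; // reject unknowns
    safe[key] = Math.min(limits.max, Math.max(limits.min, value));
  }
  return safe;
}

// "what happens if I increase the scale factor by 2?" might yield:
const update = validateUpdate('{"scaleFactor": 2, "deleteScene": 1}');
// the hallucinated "deleteScene" is silently dropped; scaleFactor passes
```

The design point is that hallucination becomes a bounded failure: the worst case is an ignored or clamped parameter, never a wrong mathematical statement shown to the student.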
GP's comment has been edited since my post. The original said something like "regenerate diagrams according to student questions". It's obviously a bad idea if you're trying to learn vectors and the entire diagram is flipped over the X axis, for example.
Nonetheless, today's AIs still regularly contradict themselves from one sentence to the next. Even if they're only generating text and "modulating" (which I take to mean rephrasing/summarizing), mistakes can and will happen. I stand by my comment even as it applies to the edited GP.
"Filtering" and "modulating" here mean selecting relevant excerpts. Think "AI for search", not conversational chat generation.
This is something LLMs have been exceedingly good at, as in DeepMind's Alpha series of projects.
You clearly have no idea how effective an interactive conversation with a text can be. An AI doesn't have to be "good at math" to be useful. People (and programs) who are "good at math" are a dime a dozen. To be useful to a student, a language model just has to be good at answering questions about math.
That part works, right now. Try it. Go to ChatGPT (with GPT-4) and pretend you're a student who is having trouble grasping, say, what a determinant is. See how the conversation unfolds, then come back and tell us all how "infamously bad" the experience was. Better still, ask it about something you've had trouble understanding yourself.
Many people on HN formed their opinions on the basis of GPT3.x-generation models, though. They asked it a question, they got the nonsensical or hallucinated answer they expected, they drew the conclusion they wanted to draw all along, and by golly, that settles it, once and for all.
Just taking something like the three.js GLTFExporter and combining it with modelviewer.dev on the fly could enable a 'view in AR' button compatible with both Scene Viewer and Quick Look (i.e. most mobile devices available today).
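A rough sketch of that wiring, assuming a three.js `scene` already exists (the element markup and styling choices are my own; `GLTFExporter` and `<model-viewer>`'s `ar` attribute are real APIs, but treat the details as unverified):

```typescript
// Pure helper: build the <model-viewer> markup for a given blob URL.
// The `ar` attribute is what surfaces Scene Viewer / Quick Look on mobile.
function modelViewerTag(src: string): string {
  return `<model-viewer src="${src}" ar camera-controls></model-viewer>`;
}

// Browser-only wiring, shown as a sketch (requires three.js):
// import { GLTFExporter } from "three/addons/exporters/GLTFExporter.js";
//
// new GLTFExporter().parse(
//   scene,
//   (glb) => {
//     const url = URL.createObjectURL(new Blob([glb as ArrayBuffer]));
//     document.body.insertAdjacentHTML("beforeend", modelViewerTag(url));
//   },
//   (err) => console.error(err),
//   { binary: true } // GLB keeps geometry + textures in one file
// );
```

The appeal is that the export happens on demand from the live sim state, so whatever configuration the student has on screen is exactly what lands in AR.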
+1 for spatial computing here. I see "immersive" and just think that 2D animations of 3D concepts, good start though they may be, leave possibilities on the table. 3D consumed inside a fully 6DOF animated 3D space is a better environment for transferring meaning. These collections of simulations could be piped into WebXR with just a little tweaking and become truly immersive.