Hey, it’s the original commenter himself! I appreciate you taking my comment seriously enough to analyze, but I think my comment missed the mark; I totally agree that LLMs shouldn’t be giving the answers to literal arithmetic problems, or be anywhere near designing the materials (digital textbooks) themselves.
I was indeed referring mostly to something like filtering, but I think there’s plenty of room for an LLM to help out there. With something as relatively complex as simulation parameters, there’s lots of room for them to support the user’s choices by making changes to machine-readable formats.
Thus the LLM would be “tweaking” or “framing” or “instantiating” the content without getting near the fundamental signal, which here is the specific pedagogical intent of that diagram in the context of the current lesson. I used “modulate” to express this idea somewhat clumsily; I’d love suggestions from lurkers for a better word, though!
IMO simulations are hard to justify as embedded content on a pedagogical site precisely because they’re so engaging, which makes them dangerous in a setting where close attention to the teacher, problem set, or text is the more important goal. They’d have to be low cognitive load to use individually during class time, ideally so low they’re practically ambient, and I think LLMs are the only practical path in that direction.
TL;DR I didn’t mean writing LaTeX pedagogical content, I meant writing JSON objects that do stuff like highlighting, variants, scaling, inputting specific equations to a general sim, etc.
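To make that concrete, here’s a hypothetical shape such an object might take; every field name here is invented purely for illustration, and a real sim would define its own schema:

```json
{
  "highlight": ["term_3"],
  "variant": "damped",
  "scale_factor": 2.0,
  "equation": "y = A * exp(-k*t) * cos(w*t)"
}
```

The key point is that the simulation, not the LLM, decides what each field means; the LLM only fills in values.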
Oh, fascinating! OK, yeah, I fully misunderstood your intent, then - I thought you were suggesting the LLMs should be summarizing the content in response to queries from students ("How do I find the determinant of a matrix?" // "Well, first you..."), which I think we both agree that they're not ready for (and, while hallucination remains a problem, never will be).
So if I'm understanding it right, your proposal is for the LLM instead to be a "control layer" over the simulation object, so that a student could say something like "what happens if I increase the scale factor by 2?" and the LLM interprets that natural-language request and outputs the simulation-control-variables that correspond with the student's request (and then either feeds them into the simulation directly, or outputs them for the student to read, understand, and input)? Makes sense to me!
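If it helps pin down the "control layer" framing, here's a minimal sketch of how the glue might work; all names and the parameter schema are hypothetical, and the LLM call itself is stubbed out as a string:

```python
import json

# The simulation's current state, exposed as a flat parameter object.
# (Invented example parameters, not from any real sim.)
sim_params = {"scale_factor": 1.0, "highlight": None, "equation": "y = x**2"}

# What an LLM might emit for the student's natural-language request
# "what happens if I increase the scale factor by 2?"
llm_output = '{"scale_factor": 2.0}'

def apply_llm_patch(params, patch_json):
    """Merge the LLM's JSON patch into the parameter object, rejecting
    any keys the simulation doesn't already expose. This keeps the LLM
    confined to the control layer: it can only turn existing knobs."""
    patch = json.loads(patch_json)
    unknown = set(patch) - set(params)
    if unknown:
        raise ValueError(f"unsupported parameters: {unknown}")
    return {**params, **patch}

updated = apply_llm_patch(sim_params, llm_output)
print(updated["scale_factor"])  # 2.0
```

The rejection step is doing the real pedagogical work here: the LLM interprets the request, but it can never introduce parameters or content the sim's authors didn't deliberately expose.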