I’ve enjoyed making interactive explainers for complex concepts as a teaching tool. They used to be extremely time-consuming to make, since each concept requires its own complex set of primitives to explain well, until AI dramatically reduced the cost of programming.
I recently made one about the Giant Component in social networks, and it made me realize that building these is great practice for AI-assisted coding.
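For context on the concept itself: in a random network, once the average number of connections per node passes one, a single “giant” component abruptly grows to span most of the network. Here is a minimal sketch of that threshold, assuming an Erdős–Rényi-style random graph and a union-find over nodes; it illustrates the phenomenon and is not code from the explainer:

```ts
// Sketch: fraction of nodes in the largest connected component of a
// random graph, as a function of average degree. Below average degree 1,
// components stay tiny; above it, a giant component emerges.
function largestComponentFraction(n: number, avgDegree: number): number {
  const parent = Array.from({ length: n }, (_, i) => i);
  const find = (x: number): number => {
    while (parent[x] !== x) {
      parent[x] = parent[parent[x]]; // path halving
      x = parent[x];
    }
    return x;
  };
  // n * avgDegree / 2 random edges yields the desired average degree.
  const edges = Math.floor((n * avgDegree) / 2);
  for (let e = 0; e < edges; e++) {
    const a = find(Math.floor(Math.random() * n));
    const b = find(Math.floor(Math.random() * n));
    if (a !== b) parent[a] = b; // merge the two components
  }
  // Measure the largest component.
  const sizes = new Map<number, number>();
  let largest = 0;
  for (let i = 0; i < n; i++) {
    const root = find(i);
    const size = (sizes.get(root) ?? 0) + 1;
    sizes.set(root, size);
    largest = Math.max(largest, size);
  }
  return largest / n;
}

for (const deg of [0.5, 1, 1.5, 2, 3]) {
  const pct = (100 * largestComponentFraction(100_000, deg)).toFixed(1);
  console.log(`avg degree ${deg}: largest component ~${pct}% of nodes`);
}
```

Running it makes the jump visible: at average degree 0.5 the largest component is a sliver of the graph, while by degree 2 it already covers roughly 80% of the nodes.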
Sometimes I start with very general prompts and am pleasantly surprised by the results (see “Ask dumb questions, be pleasantly surprised” in AI Dev Tips). Sometimes I have to refine endlessly to get something intuitive. In general, it’s great practice for pushing the limits of how I communicate something complex to a model.
- I often use the LLM to test out the text content. I’ve found Claude to be very good at intuitive, analogous explanations, but also unnecessarily wordy. It’s a great test of how a model communicates something complex to a human.
- I like that they’re small, isolated projects, so I get to use them to try very different things:
  - Giant component tested out graph layouts.
  - Entropy tested out the LLM’s ability to give an intuitive text explanation.
  - Normal distribution emergence tested out building a very specific interactive UI from simple primitives like tables (a rough sketch of the underlying idea follows this list).
  - ChatGPT stats tested out LLM humor. It was very mid and interpolative.
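On the normal-distribution piece: the idea it demonstrates is the central limit theorem, where sums of many independent uniform values pile up into a bell curve. Here is a minimal sketch of that emergence, assuming dice sums rendered as a plain text histogram; the names and numbers are illustrative, not the explainer’s actual code:

```ts
// Sketch: sums of independent dice rolls pile up into a bell curve,
// even though each individual roll is uniform.
function rollSum(dice: number): number {
  let sum = 0;
  for (let i = 0; i < dice; i++) sum += 1 + Math.floor(Math.random() * 6);
  return sum;
}

const dice = 10;
const trials = 100_000;
const counts = new Map<number, number>();
for (let t = 0; t < trials; t++) {
  const s = rollSum(dice);
  counts.set(s, (counts.get(s) ?? 0) + 1);
}

// A crude table-as-histogram, the kind of simple primitive the UI builds on.
for (let s = dice; s <= 6 * dice; s++) {
  const bar = "#".repeat(Math.round((counts.get(s) ?? 0) / 200));
  console.log(`${String(s).padStart(2)} ${bar}`);
}
```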
I’m sure there are more patterns I could use, but I’m already spending a lot of time on them.