AI-to-Human Knowledge Distillation

LLMs learned from us. Call that Human-to-AI knowledge distillation: books and the internet, human labels (SFT), preferences (RLHF), tacit knowledge in video, and human-designed RL environments.

As models surpass median human performance across more domains, the reverse becomes the next frontier: AI-to-Human knowledge distillation.

How can we systematically compress model knowledge into human skill through coaching, simulation, and deliberate practice loops?

Using an LLM is not just cheaper than a personal tutor; it is near-infinite, on-demand expert time at near-zero marginal cost.

It can also be much better:

  • Interactive: multi‑turn reasoning enables coaching, not just content.
  • Roleplay and simulators: safe sandboxes for sales, medicine, law, ops, and management.
  • Personalized: target the zone of proximal development with adaptive curricula (see the sketch after this list).

Textbooks still teach a lot of rote recall and lookup: long arithmetic and encyclopedic facts that are better left to external memory.

LLMs, by contrast, can teach problem decomposition, social reasoning, and emotional mastery.

Open questions

  1. When models are superhuman, what remains worth teaching vs renting on demand?
  2. Which skills benefit most from simulators and roleplay that weren’t feasible before?
  3. How do we measure transfer and ensure gains persist without the coach in the loop?
  4. Can adaptive curricula make learning meaningfully more engaging and fun at scale?