AI-to-Human Knowledge Distillation

1 min read

LLMs learned from us. Call that Human-to-AI knowledge distillation: books and the internet, human labels (SFT), preferences (RLHF), tacit knowledge in video, and human-designed RL environments.

As models surpass median human performance across more domains, the reverse becomes the next frontier: AI-to-Human knowledge distillation.

How can we systematically compress model knowledge into human skill through coaching, simulation, and deliberate practice loops?

Using LLMs is cheaper than a personal tutor: near‑infinite, on‑demand expert time at near‑zero marginal cost. And it can be much better too:

  • Interactive: multi‑turn reasoning enables step‑by‑step coaching, not just Q&A.
  • Roleplay and simulators: safe sandboxes for sales, medicine, law, ops, and management.
  • Personalized: target the zone of proximal development with adaptive curricula.
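The adaptive-curriculum idea above can be sketched as a simple practice loop that keeps task difficulty hovering at the edge of the learner's current ability. This is a toy model, not a real tutoring system: the function names, the logistic success model, and all step sizes are illustrative assumptions.

```python
import math
import random

def adaptive_practice_loop(skill, rounds=50, step=0.05, seed=0):
    """Toy adaptive curriculum: keep difficulty near the learner's
    skill (the 'zone of proximal development'). All parameters and
    the success model are illustrative assumptions."""
    rng = random.Random(seed)
    difficulty = skill  # start the curriculum at the estimated skill level
    history = []
    for _ in range(rounds):
        # Success is likelier when difficulty sits at or below skill
        # (toy logistic model of task outcomes).
        p_success = 1.0 / (1.0 + math.exp(8.0 * (difficulty - skill)))
        success = rng.random() < p_success
        history.append((difficulty, success))
        if success:
            skill += step / 2          # practice at the edge builds skill
            difficulty += step         # raise the bar
        else:
            difficulty = max(0.0, difficulty - step)  # ease off after failure
    return skill, history
```

A real coach would replace the toy success model with observed learner performance, but the control loop is the same: raise difficulty after success, lower it after failure, so practice stays challenging without becoming discouraging.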

Open questions

  1. When models are superhuman, what remains worth teaching?
  2. Which skills benefit most from simulators and roleplay that weren’t feasible before?
  3. How do we measure transfer and ensure gains persist without the coach in the loop?
  4. How can we make learning meaningfully more engaging and fun?