This page is under construction. Content coming soon.
Key Concepts
- World Models: Neural networks trained to predict the environment's next state (and often the reward) from the current state and action, letting an agent plan or train "in imagination"
- Model Predictive Control (MPC): At each step, optimizing a short sequence of actions against the learned dynamics model, executing only the first action, then replanning
- Dyna Architecture: Interleaving updates from real environment transitions with extra updates from transitions simulated by a learned model
- Latent Space Models: Encoding high-dimensional observations into a compact latent state and learning the dynamics in that latent space
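The MPC idea above can be sketched with random-shooting: sample candidate action sequences, roll each through the dynamics model, and execute the first action of the cheapest rollout. This is a minimal illustrative sketch, not a production planner; the point-mass `model`, the quadratic `cost`, and all constants are hypothetical stand-ins for a learned dynamics model and task cost.

```python
import random

# Toy point-mass: state = (position, velocity), action = acceleration.
# A hand-written model stands in here for a *learned* dynamics model.
DT = 0.1

def model(state, action):
    pos, vel = state
    vel = vel + action * DT
    pos = pos + vel * DT
    return (pos, vel)

def cost(state):
    pos, vel = state
    return pos ** 2 + 0.1 * vel ** 2  # drive the mass to rest at the origin

def mpc_action(state, horizon=10, n_candidates=200, rng=None):
    """Random-shooting MPC: sample action sequences, score each by rolling
    it through the model, return the first action of the best sequence."""
    rng = rng or random.Random(0)
    best_cost, best_first = float("inf"), 0.0
    for _ in range(n_candidates):
        seq = [rng.uniform(-1.0, 1.0) for _ in range(horizon)]
        s, total = state, 0.0
        for a in seq:
            s = model(s, a)
            total += cost(s)
        if total < best_cost:
            best_cost, best_first = total, seq[0]
    return best_first

# Closed loop: replan at every step, execute only the first planned action.
s = (1.0, 0.0)
for _ in range(60):
    s = model(s, mpc_action(s))
print(s)  # the mass should end up near the origin
```

Executing only the first action and replanning is what makes this MPC rather than open-loop trajectory optimization: each replan corrects for model error using the latest state.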
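The Dyna idea can likewise be sketched with tabular Q-learning on a toy problem: every real transition both updates Q directly and is stored in a learned model, which is then replayed for extra "imagined" updates. The corridor environment and all hyperparameters below are illustrative choices, not from any particular paper's setup.

```python
import random
from collections import defaultdict

# Tiny deterministic corridor MDP (illustrative): states 0..4,
# action 0 moves left, action 1 moves right; reward 1.0 on
# reaching state 4, which is terminal.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def env_step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def greedy_action(Q, s, rng):
    qs = [Q[(s, a)] for a in range(N_ACTIONS)]
    best = max(qs)
    return rng.choice([a for a, q in enumerate(qs) if q == best])

def dyna_q(episodes=30, planning_steps=10, alpha=0.5, gamma=0.95,
           eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)   # Q[(state, action)]
    model = {}               # learned model: (s, a) -> (r, s2, done)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = (rng.randrange(N_ACTIONS) if rng.random() < eps
                 else greedy_action(Q, s, rng))
            s2, r, done = env_step(s, a)
            # Direct RL update from the real transition
            target = r + (0.0 if done
                          else gamma * max(Q[(s2, b)] for b in range(N_ACTIONS)))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            model[(s, a)] = (r, s2, done)
            # Planning: extra updates from model-simulated transitions
            for _ in range(planning_steps):
                (ps, pa), (pr, ps2, pdone) = rng.choice(list(model.items()))
                pt = pr + (0.0 if pdone
                           else gamma * max(Q[(ps2, b)] for b in range(N_ACTIONS)))
                Q[(ps, pa)] += alpha * (pt - Q[(ps, pa)])
            s = s2
    return Q

Q = dyna_q()
policy = [greedy_action(Q, s, random.Random(1)) for s in range(GOAL)]
print(policy)  # every non-terminal state should prefer moving right (action 1)
```

The `planning_steps` loop is the model-based part: with it set to 0 this degenerates to plain Q-learning, and raising it trades real-environment samples for cheap simulated ones.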
References
- Lillicrap, T., Hunt, J., Pritzel, A., Heess, N., Erez, T., et al. (2015). Continuous control with deep reinforcement learning. arXiv:1509.02971.
- Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., et al. (2013). Playing Atari with Deep Reinforcement Learning. arXiv:1312.5602.
- Schmidhuber, J. (2015). On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models. arXiv:1511.09249.

