DeepSeek-R1 is based on DeepSeek-V3, a mixture-of-experts (MoE) model recently open-sourced by DeepSeek. This base model is fine-tuned using Group Relative Policy Optimization (GRPO), a reasoning-oriented variant of reinforcement learning (RL). The research team also performed knowledge distillation from DeepSeek-R1 to open-source Qwen and Llama models and released several variants of each.
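
To make the GRPO step more concrete, here is a minimal sketch of the group-relative advantage that gives the method its name: instead of training a separate value model as a baseline, each sampled completion is scored against the other completions drawn for the same prompt. The tensor shapes, the example rewards, and the epsilon term are illustrative assumptions, not DeepSeek's implementation.

```python
import torch

def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    # rewards: (num_prompts, group_size), one scalar reward per sampled completion.
    # GRPO normalizes each reward against the mean and std of its own group,
    # so no learned value/critic network is needed as a baseline.
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + 1e-8)  # epsilon guards against zero variance

# Example: four completions sampled for one prompt, scored by a rule-based reward.
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0]])
print(grpo_advantages(rewards))
```

These advantages would then be plugged into a PPO-style clipped policy-gradient loss; the group-relative baseline is what distinguishes GRPO from standard PPO.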