<br>DeepSeek open-sourced DeepSeek-R1, an LLM fine-tuned with reinforcement learning (RL) to improve reasoning ability. DeepSeek-R1 achieves results on par with OpenAI's o1 model on several benchmarks, including MATH-500 and SWE-bench.<br>
<br>DeepSeek-R1 is based on DeepSeek-V3, a mixture-of-experts (MoE) model recently open-sourced by DeepSeek. This base model is fine-tuned using Group Relative Policy Optimization (GRPO), a reasoning-oriented variant of RL. The research team also performed knowledge distillation from DeepSeek-R1 to open-source Qwen and Llama models, releasing several versions of each.
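<br>The core idea behind GRPO is to drop the learned value critic used in PPO-style RL and instead normalize each sampled completion's reward against the statistics of its sampling group. A minimal sketch of that group-relative advantage computation (the function name is illustrative, not from the DeepSeek codebase):

```python
def group_relative_advantages(rewards):
    """Compute GRPO-style advantages for one group of sampled completions.

    Each completion's advantage is its reward normalized by the mean and
    standard deviation of rewards within the group, replacing a critic.
    """
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5 or 1.0  # guard against a zero std when all rewards tie
    return [(r - mean) / std for r in rewards]


# Example: four sampled answers, two judged correct (reward 1) and two not.
print(group_relative_advantages([1.0, 0.0, 1.0, 0.0]))
# correct answers get positive advantage, incorrect ones negative
```

Because the baseline comes from the group itself, correct completions are pushed up exactly as much as incorrect ones are pushed down, with no separate value network to train.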