
QwQ-32B is a 32-billion-parameter language model that achieves performance comparable to the 671-billion-parameter DeepSeek-R1, using reinforcement learning to scale reasoning capability

qwenlm.github.io

QwQ-32B: Embracing the Power of Reinforcement Learning
