Qwen2.5-Math is a series of mathematics-specialized large language models in the Qwen2.5 family, released by Alibaba's QwenLM. It includes base models (1.5B, 7B, and 72B parameters), instruction-tuned versions, and a mathematical reward model (RM) used to improve alignment. Unlike its predecessor Qwen2-Math, which supported only Chain-of-Thought (CoT) reasoning in English, Qwen2.5-Math supports both CoT and Tool-Integrated Reasoning (TIR) for solving math problems in both Chinese and English. It is tailored to mathematical benchmarks and exams; the 72B-Instruct model achieves state-of-the-art results among open-source models on many English and Chinese math tasks.
Features
- Supports both Chain-of-Thought (CoT) reasoning and Tool-Integrated Reasoning (TIR)
- Available in multiple sizes: 1.5B, 7B, and 72B parameters for both base and instruction-tuned models, plus a mathematical reward model (RM)
- Bilingual (Chinese & English) math problem solving capabilities
- Significant performance improvements over the previous Qwen2-Math series on many benchmarks (e.g., GaoKao, AIME, and AMC)
- Compatible with Hugging Face Transformers and standard LLM inference pipelines; the repository includes usage examples and evaluation scripts (see the sketch after this list)
- Best suited for math problems; less recommended for tasks outside mathematical reasoning
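As a quick illustration of the Transformers compatibility noted above, below is a minimal inference sketch. It assumes the Qwen/Qwen2.5-Math-7B-Instruct checkpoint on the Hugging Face Hub and a chain-of-thought style system prompt; the checkpoint size, prompt wording, and generation settings may need adjusting for your setup.

```python
# Minimal CoT inference sketch for Qwen2.5-Math with Hugging Face Transformers.
# Checkpoint name and system prompt are assumptions; swap in the 1.5B or 72B
# variant (or your own prompt) as needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-Math-7B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # pick a suitable dtype automatically
    device_map="auto",    # place layers on available GPUs/CPU
)

problem = "Find the value of x that satisfies 2x + 3 = 11."
messages = [
    # CoT-style system prompt; TIR would instead ask the model to write programs.
    {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
    {"role": "user", "content": problem},
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated = model.generate(**inputs, max_new_tokens=512)
# Drop the prompt tokens so only the generated solution is decoded.
output_ids = generated[0][inputs.input_ids.shape[1]:]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```

For Tool-Integrated Reasoning (TIR), the same pipeline applies, but the system prompt asks the model to interleave reasoning with program code, and an external code interpreter is needed to execute the generated programs.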
License
Apache License 2.0