# The Path to Open-Sourcing the DeepSeek Inference Engine
A few weeks ago,
during [Open Source Week](https://github.com/deepseek-ai/open-infra-index?tab=readme-ov-file#202502-open-source-week),
we open-sourced several libraries.
The community's response has been incredibly positive, sparking inspiring collaborations, productive
discussions, and valuable bug fixes.
Encouraged by this, we’ve decided to take another step forward: contributing our internal inference engine back to the
open-source community.
We are deeply grateful for the open-source ecosystem, without which our progress toward AGI would not be possible.
Our training framework relies on [PyTorch](https://github.com/pytorch/pytorch), and our inference engine is built
upon [vLLM](https://github.com/vllm-project/vllm),
both of which have been instrumental in accelerating the training and deployment of DeepSeek models.
Given the growing demand for deploying models like [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3)
and [DeepSeek-R1](https://github.com/deepseek-ai/DeepSeek-R1), we want to give back to the community as much as we can.
While we initially considered open-sourcing our full internal inference engine, we identified several challenges:
- **Codebase Divergence**: Our engine is based on an early fork of vLLM from over a year ago. Although structurally