A surprisingly simple method said to streamline reasoning and save tokens

Paper: Chain of Draft: Thinking Faster by Writing Less

To try it, you can use the following system prompt:

Think step by step, but only keep a minimum draft for each thinking step, with 5 words at most. Return the answer at the end of the response after a separator ####.

Chinese-communication version:

Think step by step, but only keep a minimum draft for each thinking step, with 5 words at most. Return the answer at the end of the response after a separator ####, and mainly communicate with me in Chinese.

Personally tested: works with DeepSeek.
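For API usage, the prompt above goes in the `system` message of each request. A minimal sketch (assuming an OpenAI-compatible chat endpoint such as DeepSeek's; the helper names `build_messages` and `extract_answer` are illustrative, not from the paper):

```python
# Chain-of-Draft system prompt from the post above.
COD_PROMPT = (
    "Think step by step, but only keep a minimum draft for each thinking "
    "step, with 5 words at most. Return the answer at the end of the "
    "response after a separator ####."
)

def build_messages(question: str) -> list[dict]:
    """Attach the CoD system prompt to a request's message list."""
    return [
        {"role": "system", "content": COD_PROMPT},
        {"role": "user", "content": question},
    ]

def extract_answer(response_text: str) -> str:
    """Return the final answer after the last #### separator;
    fall back to the full text if no separator is present."""
    _, sep, answer = response_text.rpartition("####")
    return answer.strip() if sep else response_text.strip()

# Example with a mocked model reply in the CoD style:
reply = "20 - x = 12; x = 8. #### 8"
print(extract_answer(reply))  # → 8
```

The payload from `build_messages` can be sent to any chat-completions endpoint; only the separator parsing is specific to this prompt.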


Nice work!


Thanks for sharing!


Thanks for sharing.


Works. This is quite useful if you're on a paid API.


Thanks for sharing.

Thanks for sharing.

It used fewer tokens, but the answer was also wrong :upside_down_face:


Thanks for sharing.


Thanks for sharing~


Do you have to enter the system prompt every time?


Got it.

My only question: does compressing the reasoning process reduce the model's intelligence and degrade answer quality?


Does the official web UI not support system prompts? Is this API-only?


I only tested the system prompt via the API.


The paper seems to include evaluations of this.


mark


Should be decent. Note, though, that the paper's experiments all use non-reasoning models; the intent seems to be to push toward RL-training LLMs with the CoD method.
The method itself isn't fully developed, but as a side effect, concise answers cut out a lot of unnecessary distractors, hence the pleasing results. (A further step in the future might be to study the logical structure of language itself.)

In addition, the principles behind the compact reasoning of CoD could inspire new strategies to improve reasoning models by training with compact reasoning data, while maintaining interpretability and efficiency in LLMs, helping bridge the gap between research-driven improvements in reasoning and the practical demands of real world systems.


LLM papers these days really are blooming in every direction.