
Chain-of-Thought Reasoning Without Prompting

source link: https://arxiv.org/abs/2402.10200

Computer Science > Computation and Language

[Submitted on 15 Feb 2024]


In enhancing the reasoning capabilities of large language models (LLMs), prior research primarily focuses on specific prompting techniques such as few-shot or zero-shot chain-of-thought (CoT) prompting. These methods, while effective, often involve manually intensive prompt engineering. Our study takes a novel approach by asking: Can LLMs reason effectively without prompting? Our findings reveal that, intriguingly, CoT reasoning paths can be elicited from pre-trained LLMs by simply altering the decoding process. Rather than conventional greedy decoding, we investigate the top-k alternative tokens, uncovering that CoT paths are frequently inherent in these sequences. This approach not only bypasses the confounders of prompting but also allows us to assess the LLMs' intrinsic reasoning abilities. Moreover, we observe that the presence of a CoT in the decoding path correlates with a higher confidence in the model's decoded answer. This confidence metric effectively differentiates between CoT and non-CoT paths. Extensive empirical studies on various reasoning benchmarks show that the proposed CoT-decoding substantially outperforms the standard greedy decoding.
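The decoding procedure sketched in the abstract can be illustrated with standard tooling. The Python snippet below is a minimal sketch, not the authors' implementation: it branches on the top-k alternative first tokens, continues each branch greedily, and ranks the finished paths by the average probability margin between the top-1 and top-2 tokens along the continuation. The model name (gpt2), the value of k, the example prompt, and the use of the whole continuation for the confidence score (the paper restricts the margin to the answer tokens) are illustrative assumptions.

# Minimal sketch (not the authors' code) of CoT-decoding as described in the
# abstract: branch on the top-k first-step tokens instead of committing to the
# greedy one, continue each branch greedily, and rank the finished paths by a
# confidence score. Model name, k, prompt, and the confidence proxy over the
# whole continuation are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM with a compatible tokenizer works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def cot_decode(prompt: str, k: int = 10, max_new_tokens: int = 64):
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        first_logits = model(**inputs).logits[0, -1]   # logits for the first decoding step
    top_k_ids = torch.topk(first_logits, k).indices    # k alternative first tokens

    candidates = []
    for first_id in top_k_ids:
        # Force this alternative first token, then continue greedily.
        branch_ids = torch.cat([inputs["input_ids"], first_id.view(1, 1)], dim=-1)
        with torch.no_grad():
            out = model.generate(
                branch_ids,
                attention_mask=torch.ones_like(branch_ids),
                max_new_tokens=max_new_tokens,
                do_sample=False,                 # greedy continuation of the branch
                output_scores=True,
                return_dict_in_generate=True,
                pad_token_id=tok.eos_token_id,
            )
        # Confidence proxy: average margin between the top-1 and top-2 token
        # probabilities at each generated step (the paper computes this margin
        # over the answer tokens specifically).
        margins = []
        for step_scores in out.scores:
            top2 = torch.topk(torch.softmax(step_scores[0], dim=-1), 2).values
            margins.append((top2[0] - top2[1]).item())
        confidence = sum(margins) / max(len(margins), 1)
        text = tok.decode(out.sequences[0][inputs["input_ids"].shape[1]:],
                          skip_special_tokens=True)
        candidates.append((confidence, text))

    # The highest-confidence branch is the one most likely to contain a CoT path.
    return max(candidates, key=lambda c: c[0])

confidence, answer = cot_decode(
    "Q: I have 3 apples and my dad has 2 more apples than me. "
    "How many apples do we have in total?\nA:"
)
print(f"confidence={confidence:.3f}\n{answer}")

Selecting (or weighting) branches by this confidence score, rather than keeping only the single greedy path, is what distinguishes CoT-decoding from standard greedy decoding in the abstract's description.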
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2402.10200 [cs.CL]
  (or arXiv:2402.10200v1 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2402.10200

Submission history

From: Xuezhi Wang
[v1] Thu, 15 Feb 2024 18:55:41 UTC (752 KB)
