
Details of Think-and-Execute That You Don't Want to Miss
21 Mar 2025
We highlight some components of the code prompt that help describe the underlying reasoning logic.
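
As a concrete illustration, here is a minimal sketch of what task-level pseudocode with such components might look like; the task (a navigation check) and all names are our own assumptions for illustration, not taken from the paper's prompts.

```python
# A minimal sketch, assuming the helpful components include descriptive
# variable names, explanatory comments, and intermediate-state printing.

def returns_to_start(instructions):
    """Simulate movement instructions and check if we end at the origin."""
    x, y = 0, 0      # current position
    dx, dy = 0, 1    # current heading; initially facing forward
    for action, amount in instructions:
        if action == "turn_around":
            dx, dy = -dx, -dy                      # reverse the heading
        elif action == "step":
            x, y = x + dx * amount, y + dy * amount
        print(f"after {action}: position = ({x}, {y})")  # expose intermediate state
    return (x, y) == (0, 0)

print(returns_to_start([("step", 3), ("turn_around", 0), ("step", 3)]))  # True
```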

Think-and-Execute: The Experimental Details
21 Mar 2025
We use several LLMs, including GPT-3.5-Turbo and GPT-4, which are available via the OpenAI API[4], and an open-source LLM, CodeLlama.
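
For reference, a minimal sketch of how such models can be queried through the OpenAI Python SDK; the prompt and decoding settings here are assumptions, not the authors' exact configuration.

```python
# A minimal sketch (assumed setup, not the authors' code) of querying a
# chat model via the OpenAI Python SDK; expects OPENAI_API_KEY to be set.
from openai import OpenAI

client = OpenAI()

def query(prompt, model="gpt-3.5-turbo"):
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,  # greedy-like decoding, common for evaluation
    )
    return response.choices[0].message.content

print(query("Complete the bracket sequence so it is balanced: ( [ ]"))
```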

Think-and-Execute: The Takeaway
21 Mar 2025
In this paper, we present THINK-AND-EXECUTE, an algorithmic reasoning framework that discovers the task-level logic for solving a given task and expresses it as pseudocode.
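
Based on the post's description, the framework can be read as a two-phase pipeline: derive task-level pseudocode once (THINK), then simulate it on each instance (EXECUTE). A hedged sketch, where `llm` is a hypothetical text-completion callable rather than the authors' API:

```python
# A sketch of the two-phase pipeline as described in the post; `llm` is a
# hypothetical callable (prompt -> completion), not the authors' interface.

def think(task_description, examples, llm):
    """THINK: derive pseudocode capturing the task-level logic."""
    prompt = (f"Task: {task_description}\nExamples: {examples}\n"
              "Write pseudocode that solves any instance of this task.")
    return llm(prompt)

def execute(pseudocode, instance, llm):
    """EXECUTE: simulate the pseudocode on a single instance."""
    prompt = (f"Pseudocode:\n{pseudocode}\nInput: {instance}\n"
              "Simulate the execution step by step and state the output.")
    return llm(prompt)

def think_and_execute(task_description, examples, instances, llm):
    pseudocode = think(task_description, examples, llm)      # once per task
    return [execute(pseudocode, x, llm) for x in instances]  # per instance
```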

Think-and-Execute: The Limitations That We Face
21 Mar 2025
A possible limitation of our approach is that we focus on algorithmic reasoning, as we believe it is the best setting to assess LLMs’ capabilities.

What Is Chain-of-Thought Prompting?
21 Mar 2025
Chain-of-thought (CoT) prompting elicits intermediate reasoning steps from LMs that guide and explain the solution.
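
A toy illustration of the idea (the question and completion below are invented, not from the post): zero-shot CoT simply appends a reasoning trigger to the question.

```python
# Toy example of zero-shot CoT prompting; the question and the sample
# completion in the comment are invented for illustration.
question = "If 3 cars each have 4 wheels, how many wheels are there in total?"

direct_prompt = f"Q: {question}\nA:"                          # direct prompting
cot_prompt = f"Q: {question}\nA: Let's think step by step."   # zero-shot CoT trigger

# A CoT-style completion would include intermediate steps, e.g.:
# "There are 3 cars. Each has 4 wheels. 3 * 4 = 12. The answer is 12."
print(cot_prompt)
```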

Our Analysis on Think-and-Execute and Pseudocode
20 Mar 2025
We conduct experiments to address the following research questions: Is task-level pseudocode more helpful than instance-specific pseudocode?
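
To make the contrast concrete, a hedged example for a Dyck-language task (our own illustration, not the paper's prompts): task-level pseudocode encodes logic that transfers to any instance, whereas instance-specific pseudocode is regenerated for a single input.

```python
# Task-level pseudocode: written once, solves any Dyck-language instance.
# This example is our own illustration, not taken from the paper.
def complete_dyck(prefix):
    """Return the closing brackets that balance a well-formed prefix."""
    pairs = {"(": ")", "[": "]", "{": "}", "<": ">"}
    stack = []
    for ch in prefix:
        if ch in pairs:
            stack.append(ch)       # opening bracket: remember it
        elif ch in pairs.values():
            stack.pop()            # closing bracket matches the last opener
    return "".join(pairs[ch] for ch in reversed(stack))

print(complete_dyck("< ( [ ] )".replace(" ", "")))  # -> ">"

# Instance-specific pseudocode would instead hard-code this one input,
# so the derived logic could not be reused across instances.
```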

Think-and-Execute Improves Algorithmic Reasoning: Here's How
20 Mar 2025
We start by comparing our framework with direct prompting and zero-shot CoT (Kojima et al., 2022) in Table 1.

How We Curated Seven Algorithmic Reasoning Tasks From Big-Bench Hard
20 Mar 2025
We curate seven algorithmic reasoning tasks from Big-Bench Hard, including Dyck Languages, Geometric Shapes, Navigate, and Reasoning about Colored Objects.

What Is Think-and-Execute?
20 Mar 2025
In this section, we introduce THINK-AND-EXECUTE and provide a detailed explanation of how LLMs perform reasoning with it.