Prompt Engineering: Efficiency in the Age of AI

Tree of Thought Prompting

Tree of Thoughts (ToT) reasoning is a framework developed to enhance the problem-solving capabilities of large language models (LLMs) such as the GPT family. It builds on Chain of Thought (CoT) prompting and improves on it by enabling the LLM to explore multiple reasoning paths and make informed decisions based on previous evaluations.

Traditionally, LLMs follow a token-level, left-to-right decision-making process during inference. This approach can be limiting, particularly in tasks that require exploration or strategic lookahead, or where initial decisions significantly impact outcomes. To address these limitations, ToT guides the LLM through a tree-like structure of coherent units of text, referred to as "thoughts," each representing an intermediate step toward solving the problem. At each step, the LLM can explore multiple reasoning paths, self-assess its choices, make decisions informed by previous evaluations, and take the global context into account by looking ahead or backtracking when necessary.
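
To make the framework concrete, here is a minimal sketch of a ToT-style breadth-first search. This is not the reference implementation from the ToT paper: the llm function is a hypothetical placeholder for whatever model call you use, and the propose/score prompts are simplified assumptions.

# Minimal Tree-of-Thoughts-style breadth-first search (illustrative sketch).
# llm(prompt) is a hypothetical stand-in for a real model call (e.g. an API client).

def llm(prompt: str) -> str:
    raise NotImplementedError("Replace with a real model call.")

def propose_thoughts(problem: str, partial: str, k: int = 3) -> list[str]:
    """Ask the model for k candidate next steps ("thoughts") given the steps so far."""
    out = llm(
        f"Problem: {problem}\nSteps so far:\n{partial or '(none)'}\n"
        f"Propose {k} possible next steps, one per line."
    )
    return [line.strip() for line in out.splitlines() if line.strip()][:k]

def score_thought(problem: str, partial: str) -> float:
    """Ask the model to rate how promising a partial solution is, from 0 to 10."""
    out = llm(
        f"Problem: {problem}\nPartial solution:\n{partial}\n"
        "Rate how promising this is on a scale of 0 to 10. Answer with a number only."
    )
    try:
        return float(out.strip().split()[0])
    except (ValueError, IndexError):
        return 0.0

def tree_of_thoughts(problem: str, depth: int = 3, breadth: int = 2) -> str:
    """Expand the most promising partial solutions depth times, keeping breadth of them."""
    frontier = [""]  # each entry is the text of the thoughts accepted so far
    for _ in range(depth):
        candidates = []
        for partial in frontier:
            for thought in propose_thoughts(problem, partial):
                candidates.append(partial + thought + "\n")
        # Evaluate every candidate and prune to the most promising ones.
        candidates.sort(key=lambda c: score_thought(problem, c), reverse=True)
        frontier = candidates[:breadth]
    return frontier[0] if frontier else ""

The original paper also describes depth-first variants and different evaluation strategies (such as voting between candidates); the breadth-first, score-and-prune loop above is just one simple instance.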

Experimental results demonstrate ToT's effectiveness at enhancing LLM problem-solving across tasks that require non-trivial planning or search. For instance, in the Game of 24, where CoT prompting yielded low success rates, ToT achieved significantly higher ones. Fully implementing the technique requires some programming work; however, to approximate it in AI assistants such as ChatGPT, you can use the following prompt devised by Dave Hulbert (a short sketch of how to wrap it in code follows the template):

Imagine three different experts are answering this question.
All experts will write down 1 step of their thinking,
then share it with the group.
Then all experts will go on to the next step, etc.
If any expert realises they're wrong at any point then they leave.
The question is: [insert question here]
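
If you want to reuse Hulbert's prompt programmatically rather than pasting it into a chat window, a minimal sketch could look like the following. The prompt text comes from the template above; ask_llm is a hypothetical placeholder for whatever client or API you actually call.

# Hulbert's Tree-of-Thought prompt as a reusable template (illustrative sketch).
# ask_llm(prompt) is a hypothetical placeholder for a real model call.

TOT_TEMPLATE = """Imagine three different experts are answering this question.
All experts will write down 1 step of their thinking,
then share it with the group.
Then all experts will go on to the next step, etc.
If any expert realises they're wrong at any point then they leave.
The question is: {question}"""

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with a real model call.")

def tot_answer(question: str) -> str:
    """Fill the template with a concrete question and return the model's answer."""
    return ask_llm(TOT_TEMPLATE.format(question=question))

# Example usage:
# print(tot_answer("Bob is in the living room. ... Where is the ball?"))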

The author of the prompt used the following example:

Imagine three different experts are answering this question.
All experts will write down 1 step of their thinking,
then share it with the group.
Then all experts will go on to the next step, etc.
If any expert realises they're wrong at any point then they leave.
The question is:
Bob is in the living room.
He walks to the kitchen, carrying a cup.
He puts a ball in the cup and carries the cup to the bedroom.
He turns the cup upside down, then walks to the garden.
He puts the cup down in the garden, then walks to the garage.
Where is the ball?

And here's the response from ChatGPT 3.5:
