In this video, I will not only show you how to get smarter results from GPT-4 yourself, I will also showcase SmartGPT, a system which I believe, with evidence, might help beat state-of-the-art results on the MMLU benchmark. This should serve as your ultimate guide for boosting the automatic technical performance of GPT-4, without even needing few-shot exemplars. The video covers papers published in the last 72 hours, like Automatically Discovered Chain of Thought, which beats even "Let's think step by step", and the approach that combines it all. Yes, the video also touches on the OpenAI/DeepLearning.AI Prompt Engineering course, but the highlights come more from my own experiments using the MMLU benchmark, drawing on insights from the recent Boosting Theory of Mind and Let's Work This Out Step by Step papers, and combining them with Reflexion and Dialogue-Enabled Resolving Agents.

Prompt Frameworks:

Answer: Let's work this out in a step by step way to be sure we have the right answer

You are a researcher tasked with investigating the X response options provided. List the flaws and faulty logic of each answer option. Let's work this out in a step by step way to be sure we have all the errors:

You are a resolver tasked with 1) finding which of the X answer options the researcher thought was best, 2) improving that answer, and 3) printing the improved answer in full. Let's work this out in a step by step way to be sure we have the right answer:

Links:
Automatically Discovered Chain of Thought:
Karpathy Tweet:
Best prompt:
Theory of Mind:
Few Shot Improvements:
Dera Dialogue Paper:
MMLU:
GPT 4 Technical report:
Reflexion paper:
Why AI is Smart and Stupid:
Lennart Heim Video:
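For anyone who wants to try this, the three-stage flow behind the prompt frameworks above (sample several step-by-step answers, have a "researcher" critique them, then have a "resolver" pick and improve the best one) can be sketched in a few lines of Python. This is a minimal, hypothetical sketch: the `ask` callable is a stand-in for whatever LLM API call you use (it is not a real library function), and the prompt wording follows the frameworks listed above.

```python
# Minimal sketch of the three-stage SmartGPT-style flow.
# `ask` is a hypothetical stand-in for an LLM call: it takes a prompt
# string and returns the model's reply as a string.

STEP_BY_STEP = ("Let's work this out in a step by step way "
                "to be sure we have the right answer")

def generate_answers(ask, question, n=3):
    """Stage 1: sample n candidate answers with the step-by-step suffix."""
    prompt = f"{question}\nAnswer: {STEP_BY_STEP}"
    return [ask(prompt) for _ in range(n)]

def research(ask, question, answers):
    """Stage 2: a 'researcher' lists the flaws in each candidate answer."""
    options = "\n\n".join(
        f"Answer option {i + 1}: {a}" for i, a in enumerate(answers)
    )
    prompt = (
        f"{question}\n\n{options}\n\n"
        f"You are a researcher tasked with investigating the {len(answers)} "
        "response options provided. List the flaws and faulty logic of each "
        "answer option. Let's work this out in a step by step way to be sure "
        "we have all the errors:"
    )
    return ask(prompt)

def resolve(ask, question, critique, n):
    """Stage 3: a 'resolver' picks the best option and improves it."""
    prompt = (
        f"{question}\n\n{critique}\n\n"
        f"You are a resolver tasked with 1) finding which of the {n} answer "
        "options the researcher thought was best, 2) improving that answer, "
        f"and 3) printing the improved answer in full. {STEP_BY_STEP}:"
    )
    return ask(prompt)

def smart_gpt(ask, question, n=3):
    """Run all three stages and return the resolver's final answer."""
    answers = generate_answers(ask, question, n)
    critique = research(ask, question, answers)
    return resolve(ask, question, critique, n)
```

Because the model call is injected, you can plug in any chat API by wrapping it in a one-argument function and passing it as `ask`.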