
How I Won Singapore’s GPT-4 Prompt Engineering Competition

source link: https://towardsdatascience.com/how-i-won-singapores-gpt-4-prompt-engineering-competition-34c195a93d41


A deep dive into the strategies I learned for harnessing the power of Large Language Models


Celebrating a milestone — The real win was the priceless learning experience!

Last month, I had the incredible honor of winning Singapore’s first ever GPT-4 Prompt Engineering competition, organised by the Government Technology Agency of Singapore (GovTech), which brought together over 400 prompt-ly brilliant participants.

Prompt engineering is a discipline that blends both art and science — it demands as much technical understanding as it does creativity and strategic thinking. This is a compilation of the prompt engineering strategies I learned along the way that push any LLM to do exactly what you need and more!

Author’s Note:
In writing this, I sought to steer away from the traditional prompt engineering techniques that have already been extensively discussed and documented online. Instead, my aim is to bring fresh insights that I learned through experimentation, and a different, personal take on understanding and approaching certain techniques. I hope you’ll enjoy reading this piece!

This article covers the following, with 🔵 marking beginner-friendly prompting techniques and 🔴 marking advanced strategies:

1. [🔵] Structuring prompts using the CO-STAR framework

2. [🔵] Sectioning prompts using delimiters

3. [🔴] Creating system prompts with LLM guardrails

4. [🔴] Analyzing datasets using only LLMs, without plugins or code
With a hands-on example of analyzing a real-world Kaggle dataset using GPT-4

1. [🔵] Structuring Prompts using the CO-STAR framework

Effective prompt structuring is crucial for eliciting optimal responses from an LLM. The CO-STAR framework, a brainchild of GovTech Singapore’s Data Science & AI team, is a handy template for structuring prompts. It accounts for all the key aspects that influence the effectiveness and relevance of an LLM’s response.
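Before diving into each component, here is a minimal sketch of how a prompt might be assembled from the six CO-STAR components (Context, Objective, Style, Tone, Audience, Response). The `build_costar_prompt` helper and the example contents are my own illustrative placeholders, not GovTech’s official template.

```python
# A minimal sketch of assembling a CO-STAR-structured prompt.
# The section contents passed in below are illustrative placeholders.

def build_costar_prompt(
    context: str,
    objective: str,
    style: str,
    tone: str,
    audience: str,
    response_format: str,
) -> str:
    """Assemble a prompt with one labelled block per CO-STAR component."""
    return (
        f"# CONTEXT #\n{context}\n\n"
        f"# OBJECTIVE #\n{objective}\n\n"
        f"# STYLE #\n{style}\n\n"
        f"# TONE #\n{tone}\n\n"
        f"# AUDIENCE #\n{audience}\n\n"
        f"# RESPONSE #\n{response_format}"
    )

prompt = build_costar_prompt(
    context="I run a social media account promoting a new productivity app.",
    objective="Write a short post announcing the app's launch.",
    style="Punchy and benefit-led, like a successful product launch post.",
    tone="Enthusiastic and persuasive.",
    audience="Busy professionals who are new to productivity tools.",
    response_format="A single paragraph of no more than 80 words.",
)
print(prompt)
```

Spelling out each of these aspects explicitly, rather than packing them into a single sentence, gives the LLM far less room to guess what you actually want.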
