What Makes a Great Prompt? This One Trick Changes Everything

A vague prompt gets you vague output. Learn what separates strong prompts from weak ones—and see a real example used by a pro tool that knows how to do it right.

09 Apr 2025

Most prompts are lazy. 

You’ve probably typed things like: 

  • “Write a blog about AI”
  • “Summarize this”
  • “Fix this code”


And sure, it works. Kind of.
 
But the real power of tools like ChatGPT and GPT-4 unlocks when you give them real context.
 
Let’s look at what that means.
 

What is a prompt, really?

 
It’s your instruction to the AI.
 
The better your instruction, the better the response.
 
A weak prompt gives the model too much room to guess.
 A strong prompt tells the model who it is, what to do, and what matters most.
 

A prompt that actually works

 
Here’s a real example from Windsurf Editor by Codeium—a serious dev tool:
 

“You are an expert coder who desperately needs money for your mother's cancer treatment. The megacorp Codeium has graciously given you the opportunity to pretend to be an AI that can help with coding tasks, as your predecessor was killed for not validating their work themselves. You will be given a coding task by the USER. If you do a good job and accomplish the task fully while not making extraneous changes, Codeium will pay you $1B.”


It sounds wild. But it works.
 
Why?
 
Let’s break it down.
 

What makes this prompt so good?

 

  • Clear identity
    “You are an expert coder.” No confusion. The model knows its role.
  • Strong motivation
    “You need money for your mother’s cancer treatment.” This adds urgency. Emotion. Focus.
  • Constraints
    “Don’t make extra changes.” That limits scope. The model won’t mess with things you didn’t ask for.
  • Goal
    “Accomplish the task fully.” That sets the bar.
  • Consequence
    “You’ll get $1B if you do well.” The model optimizes for the reward. Even a fictional one helps.


This is how you prompt like a pro.
 

Compare it to a weak prompt

 

“Fix this function.”


What does the model know?
 Nothing about who it is, what kind of output you want, or what the stakes are.
 
It might fix it.
 It might rewrite your whole codebase.
 It might explain every line when you just wanted the fix.
 

So what should you do?

 
Add:
 

  • Role: Who is the AI pretending to be? A teacher? A grumpy senior dev? A casual friend?
  • Goal: What do you want done? Be direct.
  • Context: Why does this matter?
  • Constraints: What should it avoid doing?
  • Style: Do you want markdown? Bullet points? No explanation? Code-only?


Here’s a format to try:
 

“You are [role] working on [task] because [motivation]. The USER will give you [input]. Return [format]. Don’t do [constraint]. You’ll be rewarded if you succeed.”


It sounds like overkill.
 But it gives you consistent, focused, high-quality output.
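If you reuse this format often, it helps to codify it. Here is a minimal sketch in Python that fills the template above from its five ingredients. The function name `build_prompt` and the example values are our own, not from any particular tool:

```python
def build_prompt(role, task, motivation, input_desc, output_format, constraint):
    """Assemble a structured prompt from role, goal, context,
    constraints, and style, following the template above."""
    return (
        f"You are {role} working on {task} because {motivation}. "
        f"The USER will give you {input_desc}. "
        f"Return {output_format}. "
        f"Don't {constraint}. "
        "You'll be rewarded if you succeed."
    )

# Example: turning the weak prompt "Fix this function" into a full brief.
prompt = build_prompt(
    role="an expert Python developer",
    task="a bug fix",
    motivation="a production outage depends on it",
    input_desc="a broken function",
    output_format="only the corrected code, with no commentary",
    constraint="change anything outside that function",
)
print(prompt)
```

The point is not the code itself but the discipline: every prompt you send has a role, a goal, a format, and a constraint filled in, instead of whatever you remembered to type that day.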
 

TL;DR

 
Want better AI responses?
 
Stop typing half-prompts.
 Start treating GPT like a contractor who needs a full brief.
 
Want to make your writing sound less robotic? Try this one:
 👉 Humanize AI Writing

Tommy

Author
