
The Equation That Predicts Whether Your AI Output Will Be Useful

Thierry Bleau · March 23, 2026

Sam's company is debating whether to switch from ChatGPT to Claude. A colleague swears the new model is "way better." Sam tried both on the same task and the outputs were roughly the same.

This confused Sam until last week. Now it makes sense. If both models are predicting from the same raw material, they're going to land in roughly the same place. The engine wasn't the problem. The fuel was.

There's a simple equation behind every AI output. Most people have never seen it written down. And the ones who have know which variable to optimize.

Three Variables, One Equation

Three variables determine the quality of what an LLM produces. The AI output is the result of:

  • Model is the engine. GPT, Claude, Gemini, Llama. The thing doing the prediction.
  • Direction is the prompt. What you're asking it to do. The instruction, the format, the constraints you write out.
  • Context is the raw material. The documents, examples, data, and specifics you load into the model's working memory before it starts generating.
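
If it helps to see the equation as something more concrete than prose, here's a minimal sketch in Python. None of this is a real API; fake_model just stands in for whichever engine you happen to be using, and the point is only how the three variables combine.

```python
# A rough sketch of the equation, not a real API. fake_model stands in
# for whichever engine you use (GPT, Claude, Gemini, Llama).

def fake_model(prompt: str) -> str:
    # A real model would predict a continuation; this stand-in just
    # reports how much raw material it was given to predict from.
    return f"[prediction grounded in {len(prompt)} characters of input]"

def ai_output(model, direction: str, context: str = "") -> str:
    # The model predicts from everything it can see: your instruction
    # plus whatever raw material you load in front of it.
    prompt = f"{context}\n\n{direction}".strip()
    return model(prompt)

direction = "Write a customer update email about our Q1 results."

print(ai_output(fake_model, direction))  # low context
print(ai_output(fake_model, direction, "last quarter's email, Q1 numbers, audience notes"))  # high context
```

Same model, same direction; the only term with real room to move is context.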

Most of the public conversation about AI quality focuses on two of these: which model is best, and how to write better prompts. Forums are full of model comparisons. Courses teach prompt frameworks. People switch providers chasing marginal improvements.

Almost nobody talks about context. And context is the variable that actually moves the needle.

Why Models and Prompts Have a Ceiling

Model improvements are real, but today's leading models are already more than good enough for most tasks. The gap between the top five models shrinks every quarter. Switching from one to another might improve your output in a measurable way, but not a transformative one.

Prompt engineering is useful but has a ceiling. There are only so many ways to say "be concise and use bullet points." Once you've learned the basics, you already know most of it.

Context has the highest ceiling of the three, and it's the one you control. Your raw material is unique to you. Your company's data, your style guide, your past decisions, your domain constraints. No one else has this material, and no prompt trick can substitute for it.

The model is shared infrastructure. Everyone has access to the same engines. The prompt is a commodity skill. Anyone can learn the frameworks in an afternoon. The context is yours. It's the only variable where more investment always produces more return.

Low Context vs. High Context

Here's what happened when Sam ran an experiment comparing a low-context and a high-context version of the same request.

Low Context

"Write a customer update email about our Q1 results."

The model guessed at the numbers, invented a tone, picked a generic structure. Technically an email. Matched nothing about Sam's company.

High Context

"Write a customer update email about our Q1 results. Here's last quarter's email for tone [pasted]. Here are the actual Q1 numbers [pasted]. The CEO wants these three points emphasized [listed]. Our audience is enterprise customers who care about uptime metrics, not revenue."

The model and the instruction were the same, but this time the prediction engine was constrained by Sam's actual data. The output landed on the first try because the model had less room to guess wrong.

The difference wasn't the prompt. The prompt was almost identical. The difference was the environment the model was operating in.

The 5-Minute Investment You're Skipping

Gathering context feels slow. You have to find the right files, paste the relevant sections, maybe pull up last week's version. It takes five minutes of prep before you even start prompting.

So people skip it. Sam used to skip it. Fire off a quick prompt, get a mediocre result, spend 25 minutes editing the output by hand, and call it a win because the "AI part" only took 10 seconds.

This is backwards. The five minutes of context gathering isn't overhead. It's the actual work. Skipping it doesn't save time. It moves the time from input preparation to output repair.

Sam tracked this for a week. Prompts with context: average 5 minutes prep, minimal editing after. Prompts without context: average 30 seconds prep, 25-30 minutes of rewriting. The "fast" approach cost roughly six times more in total: call it 28 minutes end to end versus 5.

The "inefficiency" of providing context is load-bearing. It's carrying the weight that would otherwise land on you after the output arrives.

Three Things to Paste Before Your Next Prompt

Before your next prompt, paste three specific things:

  1. An example of what good output looks like. A past email that nailed the tone. A report format you liked. Show, don't describe.
  2. The actual data the output should reference. Real numbers, real docs, real constraints. Not a summary. The thing itself.
  3. One constraint the AI wouldn't know without being told. A legal requirement. A banned phrase. An audience detail. Something specific to your situation that no training data could cover.

Compare the result to what you normally get with a prompt alone. The difference will make the equation obvious.
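
If you want a starting point, here's a rough sketch of how those three pieces might be stitched into one prompt. The build_prompt name and the bracketed placeholders are illustrative, not a prescription; swap in whatever you actually have.

```python
# A sketch of a reusable prompt built from the three items above.
# Everything in angle brackets is a placeholder for your own material.

def build_prompt(direction: str, example: str, data: str, constraint: str) -> str:
    return "\n\n".join([
        f"Here is an example of the output I want; match its tone and structure:\n{example}",
        f"Here is the data the output must reference:\n{data}",
        f"One constraint you would not otherwise know: {constraint}",
        direction,
    ])

prompt = build_prompt(
    direction="Write a customer update email about our Q1 results.",
    example="<paste a past email that nailed the tone>",
    data="<paste the actual Q1 numbers>",
    constraint="<the audience cares about uptime metrics, not revenue>",
)
print(prompt)
```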

When Context Alone Stops Being Enough

Sam now has a framework. Bad output? Check the equation. Model is probably fine. Prompt is probably fine. Context is probably the gap.

But "paste more stuff" only gets you so far. How do you think about context systematically? Not as a pile of documents you shove into the window, but as an environment you engineer deliberately?

That's the shift from prompt engineering to context engineering. Part 3, "You graduated from prompt engineering. Now start context engineering," covers the mental model that makes it stick.