Two people can use the same AI to perform the same task and get wildly different results. The difference lies in how they use the AI.
Here, I outline five techniques to consistently get better outcomes from Claude, ChatGPT, or other common models.
Before we begin, a brief disclaimer. Nothing in this article constitutes investment advice, or advice of any other kind. Always back up your files, double-check the AI’s outputs, be selective about which tools you trust with your data, and, when in doubt, consult a qualified expert.
1) The Success Criteria Loop
Have you ever asked ChatGPT to do something, felt unsatisfied with the result, and then needed several follow-up instructions before it matched what you had in mind?
When this happens, it is usually because you described a task, not what success looks like.
Consider the following example: “Please draft a one-page investment memo on Company X using the latest annual report.” This prompt specifies a task, not an outcome, so the AI executes the task without caring much about the outcome. The output is therefore likely to be mediocre.
But there is a better way. What if, after delegating a task, we also specified an outcome? For example: “Please draft a one-page investment memo on Company X using the latest annual report. Ensure it is clear, decision-oriented, factual, and verified, and suitable for an investment committee.” This already improves the result and makes it more likely to match what you want.
But we can push this further. Consider: “Please draft a one-page investment memo on Company X using the latest annual report. Keep revising it until it is clear, decision-oriented, factual, verified, and suitable for an investment committee.” This is what I call a Success Criteria Loop. It instructs the AI to perform a task, then evaluate the outcome against explicit success criteria, and keep revising it until they are met.
We can make it even more specific: “Please draft a one-page investment memo on Company X using the latest annual report. Then, score it on clarity, decision-usefulness for an investment committee, and factual accuracy and verification, from 0 to 10. Keep revising it until it scores at least a 9.5/10, or you have completed at least five revisions.” This is likely to produce the best results.
In fact, when I tried it, ChatGPT produced a mediocre first draft, then, without any further input from me, wrote the following: “Score: 8.3. What I will improve next: tighten to true one-page length, make the thesis explicit (base/bull/bear), separate facts from management commentary, and add a minimal verification checklist tied to the filing.” It then produced four more versions, ending with an output that is much better and more trustworthy. Of course, nothing here is financial advice; always check the outputs.
I repeat: all of the iteration and improvement happened autonomously. All I had to do was write a good initial prompt.
So, technique #1 is the Success Criteria Loop: specify the task, define what success looks like, and instruct the AI to iterate until the result meets those criteria.
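If you ever script this pattern against a model API, the control flow is just a bounded revise-and-score loop. Here is a minimal sketch in Python; `draft_memo` and `score_memo` are hypothetical stubs standing in for the actual model calls, not a real SDK:

```python
def draft_memo(revision: int) -> str:
    # Stand-in for "ask the model to (re)write the memo".
    return f"Investment memo, revision {revision}"

def score_memo(revision: int) -> float:
    # Stand-in for "ask the model to score its own draft from 0 to 10".
    return min(10.0, 8.0 + 0.5 * revision)

def success_criteria_loop(threshold: float = 9.5, max_revisions: int = 5):
    """Revise until the score meets the threshold or the revision budget runs out."""
    memo, score = "", 0.0
    for revision in range(1, max_revisions + 1):
        memo = draft_memo(revision)
        score = score_memo(revision)
        if score >= threshold:
            break  # success criteria met; stop revising
    return memo, score

memo, score = success_criteria_loop()
```

The two stopping conditions mirror the prompt above: a quality threshold ("at least 9.5/10") and a revision cap ("at least five revisions") so the loop always terminates.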
2) Use Reflections
Have you ever asked an AI to do something that should have taken a few seconds, only to spend twenty minutes going back and forth because it did not understand what you wanted, or because you only figured out what you wanted along the way?
When this happens, the temptation is to close the chat as soon as you finally get a usable result. After all, you have already wasted enough time.
Do not.
Instead, before closing the chat, do one last thing. Ask the AI to “reflect” (use that exact word). It will look back at the conversation, identify what went wrong, learn from your feedback and its own mistakes, and suggest how to reach the same result faster next time. It will also surface reusable patterns and propose turning them into skills it can use in the future.
Reflection is the feedback loop that turns a generic AI producing generic outputs into a tailored assistant producing high-quality results. If you start today as a complete beginner using generic AI tools, this single habit can, within a few weeks, turn you into a meaningfully more advanced user working with a tailored AI tool.
For best results, either create a dedicated Reflect skill, or add the following to your AI’s Custom Instructions: “When told to reflect, review the current conversation, analyze tasks, errors, and user feedback, and extract opportunities for learning, skill creation, and skill improvement.” Note: after you do that, make sure you read the AI’s output after you ask it to reflect, as it may ask you for confirmation to save the learnings.
So, technique #2 is to use reflections: when a task takes longer than expected, ask the AI to reflect before closing the chat, so that future interactions will be better.
Tip: use flagship models
For the best results, use flagship models. Whether you prefer ChatGPT, Claude, or Gemini is up to you, but once you choose a model family, try its flagship model. The performance gap compared to its default model can be significant.
You do not need the most expensive model for every task: trivial work is often fine on cheaper options. But if you find yourself disappointed with the output, your first move should be to switch to the flagship model and, if available, enable extended thinking.
Many thanks to Marcelo P. Lima for the tip.
3) Use Skills
If you find yourself repeatedly asking the AI to perform the same task, consider turning it into a reusable procedure. In the AI world, these are called skills.
Creating a skill is simple. Just ask the AI to “create a skill to do X.” It will produce a reusable procedure written in plain language, not code, so you can review it, correct it, and improve it.
How skills are stored and reused depends on the interface you are using:
If you use Claude Code, OpenAI Codex, or most other terminal-based AI tools, skills are typically saved and reused automatically once you ask the tool to create them.
If you use Claude Desktop, you additionally need to ask the AI to present the skill to you, then click the Copy to your skills button.
If you use Claude Cowork, skill handling may follow either of the above patterns, depending on the exact setup.
If you use web-based interfaces, such as Claude, ChatGPT, or the ChatGPT desktop app, skills are not saved automatically. In that case, ask the AI to suggest the best way to manually reuse the skill.
Automatic skill reuse is the main reason I personally favor Claude Code and Claude Cowork over other apps. You can find a deeper, practical guide to using skills effectively with Claude in this setup guide.
The advantage of using skills is not only that they make it faster to tell the AI what to do and how to do it, but also that they make those procedures easy for the AI to improve, refine, and compound over time.
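To make this concrete: in Claude Code, for instance, a skill typically lives as a small SKILL.md file, a short name-and-description header followed by plain-language steps. The sketch below is purely illustrative (the skill name, steps, and exact header format are assumptions; check your tool's documentation for the format it expects):

```markdown
---
name: weekly-report
description: Draft the weekly status report from my notes
---

# Weekly report

1. Read notes.md in the working folder.
2. Draft a one-page status report: wins, risks, next steps.
3. Score the draft on clarity and completeness from 0 to 10, and
   revise it until it scores at least 9.
```

Because the procedure is plain language, you can read it, correct it, and let the AI refine it over time, which is exactly what makes skills compound.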
So, technique #3 is to use skills: when a task repeats, turn it into a reusable procedure, so you stop re-explaining yourself and start compounding improvements.
4) Provide Context
Imagine asking a stranger to “suggest a good birthday gift for my spouse.” The suggestion will likely be poor, not because the stranger is dumb, but because they know nothing about you or your spouse.
The same is true for AI. The less context you provide, the more generic its answers will be, and the dumber it will appear. Give it enough relevant context, and that same AI can produce much smarter, more tailored outputs.
Here are a few ways to provide context, from simplest to most effective:
When prompting the AI, describe the desired output before giving instructions, then add any relevant context about you and the task. Depending on the situation, this may include your business, customers, products, values, preferences, history, or long-term goals.
Keep a few small files on your computer with context about you. For example, one about your history, your goals, and your values, one about your job and company, and one about other recurring domains such as your health or finances. When chatting with the AI, upload the relevant files as needed.
If you use Claude Cowork or Claude Code, create a dedicated context folder on your computer and place those files inside it. Then, in that folder, start a chat session and tell the AI: “Create a CLAUDE.md file with a mandatory instruction to, at the start of each session, read the file names in the working folder and load them as context when relevant to the task at hand.”
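As an illustration, the CLAUDE.md file the AI generates might look something like the following. The exact wording is up to you and the AI; this is only a sketch of the instruction described above:

```markdown
# CLAUDE.md

## Mandatory: load context at session start

At the start of each session:

1. Read the file names in the working folder.
2. Load any file as context when it is relevant to the task at hand.
```

Because the file sits in the working folder, every new session starts with your context available, without you having to upload anything.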
So, technique #4 is to provide context: the more relevant information you give the AI about you and your situation, the less generic and more useful its outputs become.
5) Think Higher Level
Once you start using the previous techniques, two thoughts tend to arise. The first is: “Wow, this really improves what I get out of AI.” The second is: “Now I have too many skills and too much context, and it is starting to feel overwhelming.”
The solution is to think one level higher. You do not need to be the one creating and maintaining skills and context. You can ask the AI to manage them for you.
How well this works depends on the interface you use:
If you are using a chat-only interface, once a month, upload your skills and context files and ask the AI to consolidate and simplify them without losing meaning.
If you are using Claude Cowork, Claude Code, OpenAI Codex, or a similar app, you can go further. Ask the AI to “Create a Tidy skill that builds an INDEX.md at the top level of my context folder, listing the folder structure (up to three levels deep), all files, and all available skills with their length. Then, identify overlaps, redundancies, or overly long files, and suggest consolidation, reorganization, or refactoring without losing meaning.” From then on, periodically ask it to tidy.
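For intuition, the indexing half of such a Tidy skill is mechanically simple. A rough Python equivalent is sketched below; it is illustrative only, since the real skill would be a plain-language procedure the AI follows, and the function name and output format are my own assumptions:

```python
import os

def build_index(root: str, max_depth: int = 3) -> str:
    """List every file (with its size) up to max_depth levels below root,
    in the spirit of the INDEX.md a Tidy skill might maintain."""
    lines = ["# INDEX", ""]
    root = os.path.abspath(root)
    for dirpath, dirnames, filenames in os.walk(root):
        rel = os.path.relpath(dirpath, root)
        depth = 0 if rel == "." else rel.count(os.sep) + 1
        if depth >= max_depth:
            dirnames[:] = []  # prune: do not descend past max_depth
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            relpath = os.path.relpath(path, root)
            lines.append(f"- {relpath} ({os.path.getsize(path)} bytes)")
    return "\n".join(lines)
```

The file lengths surfaced here are what lets the second half of the skill flag overly long files as candidates for consolidation.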
What I just described is a minimal setup. In my own workflow, I rely on four cornerstone meta-skills:
Tidy. I run it regularly to index my skills and context files and to surface opportunities for consolidation, reorganization, and simplification.
Reflect. I run it after any session that lasts more than a few minutes. It automatically extracts learnings and identifies opportunities to create or improve skills and context.
Manage-skills. It runs automatically whenever a skill is created or updated. It enforces standards, such as requiring success criteria loops, and applies a success criteria loop itself to keep skills concise and effective.
Learn. I run it on public documents or third-party setups. Its job is to extract useful ideas and integrate them into my own skills and context when appropriate.
So, technique #5 is to think higher level: instead of managing skills and context yourself, ask the AI to manage, organize, and improve them for you.
Pro tip: use voice dictation
A recurring theme of this article is that spending a bit more time writing a clearer, more detailed prompt pays off. The problem is that typing is slow and effortful.
The workaround is voice dictation. Use a dictation tool, such as the one built into macOS, or an app like Wispr Flow. You press the fn key, speak naturally, release it, and see your words transcribed directly into the AI interface. This dramatically reduces the friction of adding context and writing good prompts, making it much easier to invest those extra few seconds that improve results.
Conclusions
While there are real limits to what AI can do, from 2026 onward, as long as you are using flagship models, the main bottleneck is rarely the AI itself. It is how well you set it up for success.
At the level of a single task, the two lowest-hanging fruits are using success criteria loops and providing sufficient context.
At the level of building a capable personal assistant, the next two are creating skills and asking the AI to reflect at the end of each meaningful conversation.
And at the level of maintaining that assistant over time, the final low-hanging fruit is to let the AI manage itself, by creating meta-skills that handle organization and improvement: tidy, reflect, manage-skills, and learn.
If you are considering hiring vendors or consultancies to set up AI workflows in your organization, but worry about being oversold or ending up with systems your people will not use, know that independent third-party advisory on change initiatives, acting as a sounding board, is one of my professional services. Learn more on my advisory page.