17 Hard-Won Lessons from My AI Journey

Written by Chris Sisco | Oct 21, 2025 3:25:16 PM

If you're just starting your AI journey, learn from my trial and error: I've spent countless hours testing, breaking, and rebuilding my workflows with LLMs. Here's what actually works and what doesn't.

The Foundation: Tool Selection and Mindset

Choose your primary tool wisely. After extensive testing, I've found Claude gives the best overall experience. GPT and Gemini are valuable for different perspectives, but if I had to choose just one tool, it's Claude every time.

Adjust your expectations. Treat your LLM like an intern who needs very specific tasks, not like the PhD-level expert companies claim it is. Break everything into mini steps. Even as LLMs get dramatically smarter over the next 5-10 years, you won't regret this approach. It's about building good habits around problem decomposition.
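
Here's a rough sketch of what intern-sized steps can look like in code, using the Anthropic Python SDK. The model name, file name, and step prompts are my own placeholders, not a prescription:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(prompt: str) -> str:
    """One small, specific task per call -- the 'intern' contract."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model name; use whatever you have
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

article = open("draft.md").read()  # hypothetical input file

# One-shot "summarize and improve this" invites wandering. Mini steps don't.
key_points = ask(f"List the 5 most important claims in this article:\n\n{article}")
summary = ask(f"Write a 3-sentence summary using only these claims:\n\n{key_points}")
headline = ask(f"Write one plain-language headline for this summary:\n\n{summary}")
print(headline)
```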

Structuring Your Prompts

Context is king. Front-load your prompts. Put your most important instructions and constraints at the beginning, not buried at the end. This simple change alone will dramatically improve your results.
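
For illustration, here's what a front-loaded prompt might look like as a Python template. The constraints and file name are invented:

```python
# Constraints and task up top, bulky reference material last.
draft_text = open("announcement.txt").read()  # hypothetical input

prompt = f"""You are editing a product announcement.

CONSTRAINTS (most important, read first):
- Keep it under 150 words.
- No exclamation points, no buzzwords.
- Preserve every number and date exactly as written.

TASK: Rewrite the draft below to meet the constraints.

DRAFT:
{draft_text}"""
```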

Examples beat descriptions every time. Instead of explaining what you want abstractly, show 1-2 concrete examples. The difference in output quality is night and day.
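
A quick sketch of the idea; both examples are made up, but notice how they carry the spec better than a paragraph of abstract instructions would:

```python
# Two concrete examples do the work of a long description.
prompt = """Rewrite each support ticket title in plain, specific language.

Example 1:
Input: "app broken pls fix"
Output: "Mobile app crashes on launch after the 2.4 update"

Example 2:
Input: "billing???"
Output: "Customer charged twice for the March invoice"

Now rewrite this one:
Input: "login weird"
Output:"""
```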

Force sequential thinking. Ask your AI to think step-by-step and to carry each step's output forward into the next. This creates more consistent output that goes beyond just pattern matching from examples.
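
One way to phrase it, with each numbered step explicitly consuming the previous step's output (the wording and file name are illustrative):

```python
# Each step depends on the one before it, so the model can't skip ahead.
plan = open("plan.txt").read()  # hypothetical input

prompt = f"""Work through this in order, showing each step before starting the next.

Step 1: List every assumption in the plan below.
Step 2: Rate each assumption from Step 1 as low/medium/high risk.
Step 3: For only the high-risk items from Step 2, propose one mitigation each.

PLAN:
{plan}"""
```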

Building Better Workflows

Create "pre-flight checklists" in your prompts. Have the LLM confirm it understands the requirements before executing. This catches misalignment early and saves you from wasted iterations.

Build in checkpoints for longer workflows. Break complex tasks into stages where you can review and course-correct. This prevents error propagation, where one mistake early on cascades into complete failure.
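
A minimal sketch of a checkpointed pipeline with the Anthropic Python SDK. The stage prompts, model name, and file names are assumptions for illustration:

```python
import anthropic

client = anthropic.Anthropic()

def ask(prompt: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model name
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# Each stage builds on the reviewed output of the previous one.
stages = [
    "Outline a migration plan for moving our cron jobs to a task queue:\n\n{prev}",
    "Expand this outline into concrete steps with rollback notes:\n\n{prev}",
    "Turn these steps into a runbook a new hire could follow:\n\n{prev}",
]

output = open("context.md").read()  # hypothetical starting context
for i, template in enumerate(stages, start=1):
    output = ask(template.format(prev=output))
    print(f"--- Stage {i} ---\n{output}\n")
    if input("Looks right? [y/n] ").lower() != "y":
        print(f"Stopping here -- fix stage {i} before continuing.")
        break
```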

Save and version your best prompts like code. When you nail a working prompt, treat it like valuable infrastructure. Keep a library and track improvements over time. Your prompts are training data for your future self.
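
Here's a bare-bones version of the idea: prompts as plain files in a git-tracked folder, with the version in the filename so diffs tell the story. The layout is just my own convention:

```python
from pathlib import Path

LIBRARY = Path("prompts")  # a plain folder, tracked in git

def load_prompt(name: str, version: str | None = None) -> str:
    """Load prompts/<name>/v<N>.txt; latest version if none is given."""
    folder = LIBRARY / name
    if version:
        return (folder / f"{version}.txt").read_text()
    latest = max(folder.glob("v*.txt"), key=lambda p: int(p.stem[1:]))
    return latest.read_text()

# e.g. prompts/summarize/v1.txt, prompts/summarize/v2.txt, ...
template = load_prompt("summarize")
old = load_prompt("summarize", version="v1")  # diff against what used to work
```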

Advanced Techniques

The "explain your reasoning" trick is underrated. Even when you don't need the explanation, asking for it improves answer quality. Something about forcing the model to articulate its logic makes the final answer better.

Ask for input before proceeding. Don't just throw instructions at your AI. Ask it what it thinks about your inputs, ask for advice, riff with it, and ask it to push back on your assumptions. The collaboration produces better results.

Use the "role rotation" technique. Run important problems through different personas (skeptical analyst, optimistic strategist, domain expert). The spread between these perspectives reveals blind spots you didn't know you had.

Managing Freedom and Constraints

Let AI color outside your lines occasionally. Allow it to come up with its own variables and actions. That's where the lightning happens: the unexpected insights you wouldn't have thought of. But too much freedom equals slop. It's a delicate balance.

Use LLMs to generate edge cases and break your own work. Fire up fresh sessions and ask them to find holes and failure modes you missed. They're surprisingly good at adversarial thinking.
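
In code, the "fresh session" part just means a brand-new conversation with no shared history, so the model isn't defending its own earlier output. A sketch, with the file name and model name assumed:

```python
import anthropic

client = anthropic.Anthropic()  # a brand-new conversation, on purpose

work = open("spec.md").read()  # hypothetical artifact to attack

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model name
    max_tokens=2048,
    messages=[{
        "role": "user",
        "content": (
            "You did not write this. Find holes in it: list edge cases, "
            "failure modes, and unstated assumptions, worst first.\n\n" + work
        ),
    }],
)
print(response.content[0].text)
```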

Reality Checks

AI agents are cool in theory but extremely difficult to build with reliable outcomes. Without human intervention, things go wrong fast. Right now, they're only viable for super micro tasks, which aren't too different from traditional automated workflows. The promise is real, but the execution isn't there yet.

Watch for capability cliffs. LLMs can be brilliant at 90% of a task, then face-plant on the last 10%. Map out where these cliffs are for your use cases and optimize around them rather than assuming uniform capability.

MCPs (Model Context Protocol servers) make outputs look authoritative but can be deceptively wrong. Once connected to your actual tools and data, LLMs produce answers that seem more credible because they're pulling "real" information. But they can still misinterpret, cherry-pick, or miss critical context. MCPs make your verification practices MORE important, not less. The polish makes the errors harder to spot.

In Summary

The mental model you develop from working with AI (how to decompose problems, structure information, and verify outputs) is invaluable regardless of how much the models improve. You're not just learning to use a tool; you're learning to think more clearly about complex problems.

Start with these lessons, but don't be afraid to experiment. The field is moving fast, and what works today might be obsolete tomorrow. But the fundamentals of clear communication, structured thinking, and healthy skepticism will always matter.