How to Train an LLM to Think Like Your Product
When we first started building CouchPeel, a mobile app that recommends local activities and events, I knew we needed a recommendation engine. Something smart. Context-aware. Helpful, not spammy. But also… something we could prototype fast.
I didn’t have a machine learning team. I didn’t have time to spin up a backend model. And I didn’t want to drop $10K into a black-box recommendation API I couldn’t explain.
So, I built it without writing a single line of code.
Here’s how.
Start With Context, Not Code
Before anything else, I mapped out all the real-world variables that influence a user’s decision to try something new:
- What’s the weather like?
- Are they solo, a couple, or a family?
- What’s their budget?
- Are they on foot, transit, or driving?
- Is it Saturday night or Tuesday morning?
This became my user context model and the foundation for everything else.
I built a variable tracker in Notion, categorized each one by type, format, and feasibility, and then synced it with a Google Sheet to simulate real users. Just four or five personas gave me a ton to work with.
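The context model above can be sketched as a simple structure. The field names and example values here are my own illustrative assumptions; the real tracker lived in Notion and Sheets, not code:

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    # Hypothetical field names mirroring the variables listed above
    weather: str      # e.g. "Rainy", "Sunny"
    group_type: str   # "Solo", "Couple", "Family"
    budget: str       # "Free", "$", "$$", "$$$"
    transport: str    # "Foot", "Transit", "Driving"
    time_slot: str    # "Saturday Night", "Tuesday Morning"

# One of a handful of simulated personas
persona = UserContext("Rainy", "Family", "$$", "Driving", "Saturday Night")
```

Four or five of these personas are enough to exercise every branch of the matching logic.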
Tag Everything, Then Tag It Again
Next up: the activities.
Each event or experience in CouchPeel needed to be tagged and categorized so it could match real user needs. Think of it like metadata for vibes:
- A go-kart track? Family-Friendly, High-Energy, Indoor Activity, Rainy-Day
- An outdoor art walk? Visual Arts, Scenic Views, Self-Guided, Sunny
I created a Tag Dictionary, Category Intelligence Table, and Sub-Category System not because I’m obsessed with organization (though, maybe a little), but because without a shared language between user needs and activity traits, there is no recommendation engine. There’s just chaos.
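That shared language boils down to a lookup from activities to trait tags. Here's a minimal sketch, using the two examples above (the dictionary shape and helper are illustrative, not the actual Notion tables):

```python
# Hypothetical tag dictionary: each activity maps to its trait tags
TAG_DICTIONARY = {
    "Go-Kart Track": ["Family-Friendly", "High-Energy", "Indoor Activity", "Rainy-Day"],
    "Outdoor Art Walk": ["Visual Arts", "Scenic Views", "Self-Guided", "Sunny"],
}

def activities_with_tag(tag):
    """Return every activity carrying a given tag."""
    return [name for name, tags in TAG_DICTIONARY.items() if tag in tags]
```

Once user needs and activity traits speak the same vocabulary, matching becomes a filter, not a guess.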
Scoring Without Servers
With context and tags in place, I needed a way to evaluate matches.
So I built a scoring model in a spreadsheet.
Each user variable (e.g. “Weather = Rainy”) was compared against an activity’s metadata (e.g. “Weather Suitability = Sunny, Cloudy”). A match scored 1. Partial match? 0.5. No match? 0.
I weighted the variables (20% for weather, 15% for budget, and so on), then calculated a score out of 100.
It looked like a recommendation engine. It acted like a recommendation engine. But it ran entirely on formulas and filters in Sheets.
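The spreadsheet logic translates directly to code. This is a sketch of the same 1 / 0.5 / 0 matching with a weighted sum; the post only names the weather and budget weights, so the rest (and the partial-match heuristic) are placeholders:

```python
# Only weather (20%) and budget (15%) come from the post; the rest are assumed
WEIGHTS = {"weather": 0.20, "budget": 0.15, "group": 0.25,
           "transport": 0.20, "time": 0.20}

def match_score(user_value, activity_values):
    """1 for an exact match, 0.5 for a partial match, 0 otherwise."""
    if user_value in activity_values:
        return 1.0
    # Crude stand-in for the spreadsheet's partial-match rule
    if any(user_value in v or v in user_value for v in activity_values):
        return 0.5
    return 0.0

def total_score(user, activity):
    """Weighted sum scaled to 100, mirroring the spreadsheet formula."""
    return round(100 * sum(
        WEIGHTS[var] * match_score(user[var], activity[var])
        for var in WEIGHTS
    ), 1)

user = {"weather": "Rainy", "budget": "$$", "group": "Family",
        "transport": "Driving", "time": "Saturday Night"}
go_karts = {"weather": ["Rainy", "Cloudy"], "budget": ["$$"],
            "group": ["Family", "Couple"], "transport": ["Driving"],
            "time": ["Saturday Night", "Sunday"]}
total_score(user, go_karts)  # every variable matches -> 100.0
```

In Sheets, each `match_score` call is just an `IF`/`MATCH` formula per column, and `total_score` is a `SUMPRODUCT` against the weights row.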
The AI That Knows Your Product
Then came the fun part: I trained an LLM (ChatGPT, in my case) to do this work for me.
Using structured prompts, I asked it to:
- Evaluate a user persona
- Read activity metadata
- Score each match using my logic
- Explain why
It worked. With a single prompt, I could simulate smart recommendations that felt contextual, personal, and just… right.
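A structured prompt along these lines carries the scoring logic into the LLM. The exact wording below is a hypothetical reconstruction, not my production prompt:

```python
# Illustrative prompt template encoding the same scoring rules
PROMPT_TEMPLATE = """You are the recommendation engine for a local-activities app.

Scoring rules:
- Exact match between a user variable and activity metadata = 1
- Partial match = 0.5, no match = 0
- Weight the variables (weather 20%, budget 15%, ...) and report a score out of 100.

User persona:
{persona}

Activity metadata:
{activity}

Score each activity, then explain the reasoning behind your top pick."""

prompt = PROMPT_TEMPLATE.format(
    persona="Family, rainy Saturday night, mid budget, driving",
    activity="Go-Kart Track: Family-Friendly, High-Energy, Indoor Activity, Rainy-Day",
)
```

Because the rules live in the prompt rather than the model, the same template works across personas, and the "explain why" step gives you a debuggable rationale for every recommendation.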
What I Learned
- Recommendation engines aren’t about AI. They’re about understanding people and matching them with structured data.
- Prompt engineering is product thinking. If an LLM can’t make sense of your inputs, your users won’t either.
- You don’t need code to prototype intelligence. You just need structure, context, and a little creativity.
The best part? I now have a validated framework I can hand off to developers, plug into APIs, or scale with real-time data without starting from scratch.
If you’re building a recommendation engine, try skipping the model and starting with meaning.
Sometimes, the smartest product move isn’t machine learning.
It’s just clarity.
Want the spreadsheet scoring template I used?
Reach out - happy to share.
