One of the more surprising things I’ve learned building software applications is how far you can get with a large language model (LLM) when you give it enough structure to work with.
LLMs aren’t just great for writing marketing copy or answering questions. With the right prompts, they can also simulate product logic — helping you prototype features, test ideas, and even recommend content. But they won’t do it out of the box.
To get an LLM to “think” like your product, you have to teach it the way your product sees the world.
Here’s how I approached that.
Step 1: Turn Your Product into a Set of Rules
I started with a simple question: What do users care about when choosing an activity?
That led to a list of real-world variables like:
- Weather
- Group type and size
- Time of day
- Budget
- Travel mode
Then I looked at the activity data we had and asked: What attributes can we collect or infer to match those needs?
That gave me the structure for a recommendation engine — not in code, but in logic. A scoring model. I assigned weights to each variable (e.g. weather = 20%, budget = 15%), then created rules for matching:
- If the user’s weather is “Rainy” and the activity is suitable for “Sunny” only → score = 0
- If the activity is tagged “Indoor” (which works in rain) → score = 1
- If the activity supports “All Ages” and the user has a family → partial match = 0.5
I did this in a spreadsheet first. No code. Just logic.
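That said, if you'd rather see the same idea expressed in code, here's a minimal Python sketch of the weighted scoring. The weights beyond weather and budget, the field names, and the sample data are illustrative stand-ins, not the exact values from my spreadsheet:

```python
# Minimal sketch of the weighted scoring logic described above.
# Only the weather (20%) and budget (15%) weights come from the article;
# everything else here is an illustrative placeholder.

WEIGHTS = {
    "weather": 0.20,
    "budget": 0.15,
    "group": 0.15,
    "time_of_day": 0.25,
    "travel_mode": 0.25,
}

def match_weather(user_weather, activity):
    """Exact match = 1, indoor fallback in rain = 1, otherwise 0."""
    if user_weather in activity["weather_suitability"]:
        return 1.0
    if user_weather == "Rainy" and "Indoor" in activity["tags"]:
        return 1.0
    return 0.0

def match_group(user_group, activity):
    """Exact group match = 1; 'All Ages' counts as a partial match for families."""
    if user_group in activity["group_types"]:
        return 1.0
    if user_group == "Family" and "All Ages" in activity["group_types"]:
        return 0.5
    return 0.0

def score_activity(user, activity):
    """Combine per-criterion scores (0, 0.5, or 1) into a weighted 0-100 score."""
    criteria = {
        "weather": match_weather(user["weather"], activity),
        "group": match_group(user["group"], activity),
        # Budget, time of day, and travel mode follow the same pattern.
        "budget": 1.0 if activity["cost"] <= user["max_budget"] else 0.0,
        "time_of_day": 1.0 if user["time_of_day"] in activity["times"] else 0.0,
        "travel_mode": 1.0 if user["travel_mode"] in activity["travel_modes"] else 0.0,
    }
    return round(100 * sum(WEIGHTS[k] * v for k, v in criteria.items()), 1)

user = {"weather": "Rainy", "group": "Family", "max_budget": 20,
        "time_of_day": "Afternoon", "travel_mode": "Walk"}
activity = {"name": "Science Museum", "tags": ["Indoor"],
            "weather_suitability": ["Sunny", "Cloudy"], "group_types": ["All Ages"],
            "cost": 15, "times": ["Morning", "Afternoon"], "travel_modes": ["Walk", "Transit"]}

print(score_activity(user, activity))  # 92.5 with these illustrative numbers
```

Once the rules live in a function like this, it's also easy to compare the model's scores against your own later on.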
Step 2: Give the LLM Structure and Constraints
The next step was packaging this into a prompt. I gave the model:
- A user context (weather = rainy, group = family, budget = <$20, etc.)
- A list of activity entries, each with fields like category, tags, weather suitability, etc.
- Clear instructions on how to compare, score, and explain
The prompt looked something like:
“For each activity, assign a score from 0–100 based on the following criteria… Use these weights… If a match is exact, score 1. Partial match = 0.5. No match = 0. Explain mismatches briefly.”
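If it helps to see how a prompt like that gets assembled, here's a rough sketch in Python. The wording, field names, weights, and sample activities below are stand-ins, not my exact prompt:

```python
import json

# Rough sketch of packaging user context + activity data into a scoring prompt.
# Field names, weights, and wording are placeholders, not the exact prompt I used.

def build_prompt(user_context: dict, activities: list[dict], weights: dict) -> str:
    instructions = (
        "For each activity, assign a score from 0-100 based on how well it fits "
        "the user context. Use these weights when combining criteria. "
        "Exact match = 1, partial match = 0.5, no match = 0. "
        "Briefly explain any mismatches."
    )
    return "\n\n".join([
        instructions,
        "Weights:\n" + json.dumps(weights, indent=2),
        "User context:\n" + json.dumps(user_context, indent=2),
        "Activities:\n" + json.dumps(activities, indent=2),
        "Return one line per activity: <name>: <score> - <one-sentence explanation>.",
    ])

prompt = build_prompt(
    user_context={"weather": "Rainy", "group": "Family", "budget": "<$20"},
    activities=[
        {"name": "Science Museum", "category": "Indoor", "tags": ["All Ages"],
         "weather_suitability": ["Rainy", "Sunny"], "cost": 15},
        {"name": "Open-Air Market", "category": "Outdoor", "tags": ["Food"],
         "weather_suitability": ["Sunny"], "cost": 0},
    ],
    weights={"weather": 0.20, "budget": 0.15, "group": 0.15,
             "time_of_day": 0.25, "travel_mode": 0.25},
)
print(prompt)  # paste into your LLM of choice, or send it via whatever API you use
```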
The results were consistent and often shockingly good. The model was able to reason through dozens of activities and highlight the best fits — not randomly, but using the same logic I would have applied manually.
Step 3: Align Tags and Metadata
What made this work wasn’t just the prompt — it was the prep.
We had a tag dictionary, standardized categories and sub-categories, and weather suitability labels that were clean and consistent. That meant the model wasn’t guessing at language. It was matching known values.
Without structured metadata, the model gets vague. With it, it can behave like a product manager.
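A concrete way to picture that prep: a small, controlled tag dictionary plus a normalization step, so every activity uses the same vocabulary before it ever reaches the prompt. The vocabulary below is made up for illustration, not our actual dictionary:

```python
# Illustrative sketch of a tag dictionary and normalization step.
# The point is that every value the model sees comes from a known,
# finite set rather than free-form text.

CANONICAL_TAGS = {
    "weather_suitability": {"Sunny", "Rainy", "Cloudy", "Any"},
    "category": {"Indoor", "Outdoor", "Food & Drink", "Culture"},
    "group_types": {"All Ages", "Adults Only", "Family", "Couples"},
}

ALIASES = {
    "kid friendly": "All Ages",
    "kids": "All Ages",
    "rain ok": "Rainy",
    "inside": "Indoor",
}

def normalize(field: str, raw_value: str) -> str:
    """Map a raw label to the controlled vocabulary, or raise so bad data is caught early."""
    value = ALIASES.get(raw_value.strip().lower(), raw_value.strip())
    if value not in CANONICAL_TAGS[field]:
        raise ValueError(f"Unknown {field} tag: {raw_value!r}")
    return value

print(normalize("group_types", "kid friendly"))     # -> "All Ages"
print(normalize("weather_suitability", "rain ok"))  # -> "Rainy"
```

Keeping the vocabulary small and enforced up front is what lets the prompt in Step 2 match exact values instead of fuzzy text.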
Step 4: Iterate Like You Would With a Real Team
This process worked best when I treated the LLM like a junior product teammate. I’d ask for recommendations, look at how it explained its choices, then improve the structure.
Sometimes I realized the issue was my tags. Other times the scoring logic needed adjustment. Every improvement made the next result better.
Why This Matters
You don’t need to build a custom ML model to get smart results. You just need to:
- Define how your product sees the world
- Structure your data around that worldview
- Teach your LLM to follow the same logic
If you do that well, the LLM can be more than a writing assistant — it can become a thinking assistant. And that’s a pretty good place to start.
Let me know if you want a walkthrough of the exact prompt structure I used — or a look at the scoring logic that powers it. Happy to share.