
Context Engineering

How to structure, version-control, and maintain your product knowledge so Claude Code, Codex, Gemini, and Cursor work like teammates who read the docs. Covers core layers, setup steps, and a maintenance cadence.

Strategic Impact
Software Engineering
Wyndo (AI Maker) × Product Map

Why Context Engineering Matters

Prompt engineering was about crafting the perfect question. Context engineering is about designing the environment that makes every question produce a reliable answer.

This matters because the model is never working from the prompt alone. It is working from an environment: instructions, memory, files, retrieved context, examples, tool outputs, and everything else that shapes the task.

When that environment is messy, stale, or overloaded, even a strong model produces weak output. When that environment is structured well, the same model becomes much more reliable.

Why the payoff is commercial, not just operational

For product teams, the payoff compounds across every artifact, decision, and handoff:

  • Consistent strategy: every PM reasons from the same source of truth, not a private interpretation.

  • Less drift across tasks: repeated prompts return the same shape of answer, not a fresh interpretation each time.

  • Defensible artifacts: PRDs and briefs stay reproducible, easy to review and reuse.

  • Lower re-explanation cost: shared context carries the background, not someone repasting it into every chat.

  • Faster onboarding: new hires inherit working product memory, instead of mining chat threads.

  • Tight cross-functional alignment: every team pulls from one shared context, not their own version.

  • Reusable AI systems: files and workflows become operating leverage, not one-off prompts.

The deeper point is simple: AI is a multiplier. If the underlying context is weak, AI multiplies confusion. If the underlying context is strong, AI multiplies useful work.

There is also a practical adoption problem here. Many teams understand the idea, but still freeze at the "blank folder problem." You install the tool, open a project, and stall because you do not yet have a working structure for project memory.

Start with a small, durable setup that teaches the agent your project, then add layers only when the workflow justifies them.

What Context Engineering Is

Context engineering is the deliberate practice of deciding what an AI agent sees, in what format, and in what order, so it can produce consistent, reliable, defensible outputs.

It is broader than prompt engineering because it covers instruction files, memory layers, retrieval patterns, skills and commands, examples and templates, tool access, and update workflows.

Context engineering vs. Prompt engineering

| Dimension | Prompt Engineering | Context Engineering |
| --- | --- | --- |
| Focus | Crafting the right question | Designing the information environment |
| Scope | One interaction | Many interactions across time |
| Ownership | Individual | Team or organization |
| Durability | Ephemeral | Persistent and versioned |
| Outcome | Better single response | Better system behavior |

Context engineering vs. Context stuffing

This is one of the most important distinctions in the whole topic.

  • Context stuffing means dumping a PRD, Slack threads, strategy docs, old notes, and screenshots into the window and hoping the model sorts it out.

  • Context engineering means choosing the smallest set of high-signal context that makes the task succeed.

An unfocused wall of context does not make the model smarter. It makes attention worse. The working rule is simple: every line of context has to justify its existence against the model's attention budget.

A simple PM framing

Good context engineering answers five questions:

  1. What should the model know?

  2. What should it ignore?

  3. What should persist across sessions?

  4. What should be retrieved only when needed?

  5. Who updates the source of truth when the product changes?

Further reading: Context Engineering Guide (article, promptingguide.ai) — context engineering is the process of designing and optimizing instructions and relevant context so LLMs and advanced AI models can perform their tasks effectively.

Product Knowledge Drift

Product context starts rotting almost immediately. Features, messaging, priorities, and constraints shift constantly, and if none of it gets folded back into shared context, AI assistants keep answering from an older version of reality.

This is how PM teams end up with effectively "private product philosophies." Each person works from their own prompts, chats, and assumptions.

What happens without systematic context

The output looks plausible, but sales tells a different story than marketing, support contradicts the roadmap, and new hires spend months piecing together the real state of things. The model did not forget.

The team never built a reliable memory system in the first place.

Context Engineering for PMs

PM work is contextual synthesis across customers, roadmap, business, and tech. Context engineering is that same discipline made machine-readable. Done well, sales stops mis-qualifying, support stops contradicting the roadmap, AI drafts get sharper, and new hires ramp faster.

How to make it work

  • One source of truth: descriptions, pricing, release notes, FAQs, and prompts in one repo.

  • Start small: a basic structure beats a perfect system that never ships.

  • Anchor to existing rhythms: tie updates to planning and pre-release checkpoints, not new ceremonies.

  • Document the why: capture reasoning and intent, not only capabilities.

  • Prune ruthlessly: archive stale pages so the repo stays trustworthy.

  • AI as a consistency checker: flag pricing mismatches, version drift, and contradictions across docs.

  • Treat context as a growth lever: the payoff is shorter sales cycles, smoother launches, faster onboarding.

Your product cannot scale faster than its context.
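Two of the habits above — pruning ruthlessly and using AI as a consistency checker — are easy to bootstrap with a short script before any model is involved. A minimal sketch in Python; the 90-day threshold and the dollar-amount pattern are assumptions to adapt, not part of the guide:

```python
from pathlib import Path
import re
import time

STALE_DAYS = 90  # assumed threshold; tune to your release cadence

def find_stale_files(root: str) -> list[str]:
    """Flag knowledge files not modified within STALE_DAYS,
    candidates for the archive folder."""
    cutoff = time.time() - STALE_DAYS * 86400
    return sorted(
        str(p) for p in Path(root).rglob("*.md")
        if p.stat().st_mtime < cutoff
    )

def find_price_mentions(root: str) -> dict[str, set[str]]:
    """Collect every '$<amount>' string per file so a human
    (or an LLM pass) can spot pricing that disagrees across docs."""
    prices: dict[str, set[str]] = {}
    for p in Path(root).rglob("*.md"):
        found = set(re.findall(r"\$\d[\d,.]*", p.read_text(encoding="utf-8")))
        if found:
            prices[str(p)] = found
    return prices
```

Run it against `_knowledge/` in CI or a pre-commit hook; anything it flags goes into the weekly review rather than silently rotting.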

Further reading: AI-native TASK framework for agentic product management (article, productmap.io) — TASK turns PMs into orchestrators. Agents, grounded in your product context, operate across the entire product lifecycle, so you focus on strategy and make better decisions.

Setting the PM Product Context

Retrieval, Memory, and Instructions

PM teams often misuse memory for things that should be retrieval, and misuse prompts for things that should be instructions. The clean distinction is:

  • Instructions define behavior

  • Memory stores durable preferences or persistent context

  • Retrieval brings in the right document at the right time

Useful rule of thumb

  • Stable and policy-like → Instruction file

  • Durable but factual → Knowledge file

  • Large and changing → Retrieval source

  • Personal and stylistic → Personal memory

  • Task-specific → Runtime prompt

If a fact changes every week, store the source and retrieve the latest version instead of hard-coding it into memory.
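The rule of thumb above can be written down as a small decision function, which makes the routing explicit instead of tribal. A sketch with illustrative attribute names (not from the guide):

```python
def place_context(stable: bool, policy_like: bool, durable: bool,
                  large_and_changing: bool, personal: bool) -> str:
    """Route a piece of context to the layer where it belongs.
    Check the volatile bucket first, then fall through to the
    runtime prompt for anything purely task-specific."""
    if large_and_changing:
        return "retrieval source"    # store the source, fetch the latest
    if stable and policy_like:
        return "instruction file"    # e.g. AGENTS.md / CLAUDE.md
    if durable:
        return "knowledge file"      # e.g. _knowledge/
    if personal:
        return "personal memory"     # tone, formats, preferences
    return "runtime prompt"          # this session only
```

A weekly-changing metric hits the first branch: it belongs in a retrieval source even though it is also durable and factual.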

Project Memory Layers

The cleanest mental model is the four-layer stack. It is useful because most teams mix these layers together and then wonder why outputs drift.

Layer 1: Canonical project knowledge

Durable facts and reference docs that live in Git or a controlled knowledge base: product one-pager, ICP, roadmap, glossary, decision log, approved templates and playbooks.

Layer 2: Agent instructions

Files that tell a specific tool how to behave and where to look: AGENTS.md, CLAUDE.md, GEMINI.md, .cursorrules, and so on.

Layer 3: Runtime context

Only what the current task needs: the active PRD, today's interview notes, this sprint's dashboard, the branch or ticket being discussed.

Layer 4: Personal memory

User-specific preferences: output format, writing tone, meeting style, favorite templates.

Why the separation matters

Each layer has a different shelf life. Stable and policy-like content belongs in instructions, durable facts in knowledge files, large and changing material in retrieval sources, preferences in personal memory, and task-specific material in the runtime prompt.

Treat vendor memory as a cache, not the database.

Version Control

Every meaningful context change should create a traceable change in Git. That gives you:

  • Audit trail: who changed what and why.

  • Branching: safe space for context experiments.

  • Review: catch bad context before it reaches production use.

  • Rollback: undo updates that introduce confusion.

  • Debuggability: trace bad outputs back to the context that produced them.

Context repositories as a pattern

Treat agent memory like a Git-backed filesystem where files can be pinned, versioned, and reorganized. The principle holds even without the exact implementation: memory should be inspectable, editable, reviewable, and have history.

```text
main           production context
feature/       new knowledge being added
update/        updates to existing context
experiment/    alternative context structures or prompts
```

The flow for shipping a context change mirrors a normal code change:

  1. Create a branch for the context change

  2. Update the relevant files

  3. Test the change with the AI tool

  4. Review the change like any other important system input

  5. Merge when the output quality improves

Repo-First Setup

The strongest shared argument in both guides is repo-first context engineering. Git should sit underneath the whole system: it gives you version history, reviewable changes, rollback, branching for experiments, portability across tools, and a plain-text format LLMs can actually use.

If your project memory lives only inside one vendor's UI, you do not own your context.

The two-bucket rule

Document only what the agent cannot infer from the codebase and artifacts it can already inspect.

  • Skip what the agent can already see: file structure, dependencies, code patterns, obvious implementation details.

  • Capture what it cannot infer: product strategy and positioning, decision rationale, team conventions, customer context and ICP, release procedures, non-obvious trade-offs, known gotchas.

```text
project-root/
  _knowledge/
    INDEX.md
    00_company/       # one-pager, team, resources, product roadmap, backlog
    01_strategy/      # business model, PMF, MVP/roadmap, segmentation, OKRs
    02_generation/    # user research, design/UX, marketing, growth/sales
    03_analysis/      # KPIs, analytics, unit economics, finance
    04_delivery/      # backlog, agile, lean experiments, development, risk
    05_people/        # talent, communication, negotiation
    06_operations/    # product ops, refactoring plan, tone of voice
    07_tools/         # 90+ AI tools across 11 categories
    08_frameworks/    # RICE, JTBD, OKRs, agentic PM (TASK)
    09_templates/     # PRD, AI PRD, user story, ADR, persona, OKR
    10_data/          # analytics, feedback, interviews, research, slides
  .codex/
    skills/
  .claude/
    skills/
    commands/
  .github/
    copilot-instructions.md
  AGENTS.md
  CLAUDE.md
```

INDEX.md is the most important file in the setup. It sits at the root of the knowledge folder and lists every subfolder and resource with a short description, so the agent navigates straight to the right file instead of guessing or loading everything. Treat it as the table of contents for the whole knowledge base and keep it current.
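Because INDEX.md is the table of contents, drift between it and the folder tree is worth catching mechanically. A minimal sketch of a pre-merge check, assuming subfolders are mentioned by name somewhere in INDEX.md:

```python
from pathlib import Path

def index_gaps(knowledge_dir: str) -> list[str]:
    """Return subfolders of the knowledge directory that INDEX.md
    never mentions, so missing entries surface in review instead
    of being discovered by a confused agent."""
    root = Path(knowledge_dir)
    index_text = (root / "INDEX.md").read_text(encoding="utf-8")
    return sorted(
        d.name for d in root.iterdir()
        if d.is_dir() and d.name not in index_text
    )
```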

How agent instruction files plug into the index

AGENTS.md (Codex) and CLAUDE.md (Claude) are tool-specific entry points that reference INDEX.md as the source of truth. Put shared repo rules in AGENTS.md, then have CLAUDE.md import it so both agents inherit the same instructions instead of drifting apart.
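One way this can look in practice (Claude Code supports `@path` imports inside CLAUDE.md; treat the exact layout as illustrative):

```markdown
# CLAUDE.md
@AGENTS.md

<!-- Claude-specific notes go below. Everything in AGENTS.md,
     including the pointer to _knowledge/INDEX.md, is inherited. -->
```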

Start with the non-negotiables: INDEX.md, AGENTS.md, CLAUDE.md, and the .claude/ folder. Commands, rules, skills, and agents are layers you add when the project actually needs them.

Context Files

Good context files are explicit, small, linked, and maintained. The root file should act like an operating manual, not a giant wiki page: a short page that explains what the product is, points to the knowledge index, summarizes the repository structure, names the critical tools and systems, and defines the working conventions and most important skills or commands.

CLAUDE.md pattern

```markdown
# Project Context

## Product
@_knowledge/02_product/onepager.md

## Knowledge
@_knowledge/INDEX.md — master index for product context and frameworks.

## Repository Structure
- `_knowledge/` — source of truth
- `_data/` — exports and working data
- `scripts/` — automations

## Working Conventions
- Check `_knowledge/INDEX.md` before answering product questions
- Prefer concise, actionable outputs
- Reference frameworks from `08_frameworks/`
- Use templates from `09_templates/`
```

Why linking beats stuffing

This is where context engineering becomes practical. Keep the root file lean, and link to the deeper material instead of loading everything up front.

That pattern mirrors how modern agent tooling actually spends context:

  • Root instructions: always read at session start.

  • Knowledge files: loaded when the task makes them relevant.

  • Skills: only the body loads when the skill is actually used.

  • Supporting references: stay outside the main skill file until needed.

At session start, Claude loads CLAUDE.md, auto memory, MCP tool names, and skill descriptions. Every file read and response after that consumes more of the window. Small root files, selective retrieval, on-demand skills, and subagents are how you keep working context useful over time.
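A crude but useful habit is measuring what the always-loaded layer costs. A sketch using the rough four-characters-per-token approximation (a heuristic, not a real tokenizer):

```python
from pathlib import Path

CHARS_PER_TOKEN = 4  # rough heuristic; real tokenizers vary by model

def startup_budget(files: list[str]) -> dict[str, int]:
    """Approximate how many tokens each always-loaded file costs
    at session start, so bloat shows up before output quality drops."""
    return {
        f: len(Path(f).read_text(encoding="utf-8")) // CHARS_PER_TOKEN
        for f in files
        if Path(f).exists()
    }
```

Running it over CLAUDE.md and the skill descriptions once a month makes "keep the root file lean" a number instead of a vibe.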

What not to include

Keep the root file clear of secrets, exhaustive code commentary the agent can read directly, generic advice the model already knows, large changing datasets that belong in retrieval, and copied internet rules you have not validated in your own workflow.

Maintaining Context

The best maintenance advice in the source material is the friction-driven approach. This is how project memory gets better without turning into a giant speculative documentation exercise.

The friction-driven loop

  1. Notice friction: the output is wrong, generic, stale, or inconsistent.

  2. Diagnose the gap: what context was missing or misleading?

  3. Write the fix: turn that gap into a specific instruction or knowledge update.

  4. Put it in the right place: root file, knowledge file, skill, or playbook.

  5. Test and iterate.

Maintenance cadence

  • Weekly: update active priorities and open decisions.

  • Monthly: prune stale files and refresh important guidance.

  • Per release: update product facts, screenshots, metrics, and examples.

  • Per major initiative: create a temporary initiative context pack.

  • Per incident: document the gotcha while it is still fresh.

Archive and prune aggressively

Move stale documents out of the active path, remove rules for deprecated features, consolidate duplicates, and keep file sizes under control. A lean repo people trust beats a comprehensive one they bypass.

Context rot is not just about outdated facts. It is also about bloated files that make it harder for the agent to find the signal.

Team Context Engineering

The biggest gains do not come from one PM getting better at prompting. They come from the whole team sharing context and reducing handoffs.

The shared context advantage

A well-maintained context repository becomes the memory layer for AI tools, the onboarding layer for teammates, the review layer for decisions, and the alignment layer across product, design, engineering, marketing, and support.

Keep the shared context window clean

  • Decompose prompts into focused skills instead of mega-prompts.

  • Offload state to files, not long threads.

  • Curate each turn: load only the relevant subset.

  • Repeat critical facts in long workflows to counter "lost in the middle."

Distributing context across a team

  • Use git as the single source of truth for shared context.

  • Prefer relative paths. Absolute paths do not travel across machines.

  • Symlink skills into the repo so ~/.claude/skills stays versioned and portable.

  • Review like code: branch, diff, merge.

Onboarding with context

A strong context repo doubles as onboarding. A new teammate clones the repo, reads INDEX.md, and starts using AI tools with the same shared context as everyone else.

Agent Skills and Playbooks

Not every task needs the same instructions. This is why skills matter. A skill is a repeatable unit of work with a goal, workflow, inputs, output format, and guardrails. Common PM skills: PRD reviewer, user research synthesizer, competitor analyst, experiment planner, roadmap critic, launch brief drafter.

How skills work with context

Skills are not just a command format. They are a context-budgeting tool. This loading pattern keeps the main context window clean:

  • Discovery: the agent sees a short description first, not the whole playbook.

  • Activation: SKILL.md loads only when the task matches or you invoke it directly.

  • Expansion: templates, examples, references, and scripts stay outside the main file until the skill asks for them.

A good skill contains the workflow and quality bar for one repeatable job. Long background material belongs in linked references, templates, or supporting files that load only when needed. This is what lets a team run complex PM workflows without overloading any single session. The model reads only what the current task requires.
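The discovery → activation → expansion pattern can be sketched in a few lines. Assume, for illustration, that each skill lives in its own folder and the first line of SKILL.md is its short description — a simplification of real skill formats:

```python
from pathlib import Path

def discover(skills_dir: str) -> dict[str, str]:
    """Discovery: load only the one-line description of each skill,
    not its full body, so unused skills cost almost no context."""
    out = {}
    for skill in Path(skills_dir).glob("*/SKILL.md"):
        first_line = skill.read_text(encoding="utf-8").splitlines()[0]
        out[skill.parent.name] = first_line
    return out

def activate(skills_dir: str, name: str) -> str:
    """Activation: read the full SKILL.md body only when the task
    matches. Linked templates and references stay on disk until
    the skill itself asks for them (expansion)."""
    return (Path(skills_dir) / name / "SKILL.md").read_text(encoding="utf-8")
```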

TASK framework for Agentic PM

TASK framework gives product managers a practical way to use AI:

  • Topics define the product management domain

  • Agents are scoped roles for agentic executors

  • Skills are human competences that guide agents

  • Knowledge is the tangible output written back to the repo

Each lifecycle area becomes an agent with its own skill set: discovery, strategy, delivery, growth, launch. Skills then chain into self-improving pipelines, like interview-synthesis → opportunity-framing → PRD draft.

The shared knowledge layer is what makes chaining work: every skill reads from and writes back into the same repo, so each step builds on prior output instead of starting from scratch.
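That read-from-repo, write-to-repo loop is what a chained pipeline looks like in code. A minimal sketch where each step is a stub standing in for an agent call; the file names and step functions are illustrative:

```python
from pathlib import Path
from typing import Callable

def run_pipeline(repo: str,
                 steps: list[tuple[str, Callable[[str], str]]],
                 seed_file: str) -> str:
    """Run skills in sequence. Each step reads the previous artifact
    from the repo and writes its own back, so later steps build on
    real prior output instead of starting from scratch."""
    current = Path(repo) / seed_file
    for out_name, step in steps:
        artifact = step(current.read_text(encoding="utf-8"))
        current = Path(repo) / out_name
        current.write_text(artifact, encoding="utf-8")
    return str(current)  # path to the final artifact
```

In real use, each callable would invoke an agent with the relevant skill; the repo is the only state passed between steps.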

Claude Code, Codex, Gemini, Cursor

The repo should be shared. The delivery layer should not be identical, because each tool handles instructions, memory, and retrieval differently. Some tools are repo-native and treat files as project memory; others sit on top of that repo as drafting or editor layers.

Comparison summary

| Tool | Best project-context mechanism | What persists well | What not to treat as canon | Best use case |
| --- | --- | --- | --- | --- |
| Claude Code | CLAUDE.md, imports, skills, subagents, MCP | Repo instructions and file-based project memory | Overloading CLAUDE.md with all knowledge | Deep project work and multi-step agent tasks |
| OpenAI Codex | AGENTS.md, .codex/skills/ | Scoped repo instructions and skill-based workflows | Relying on chat state alone | Repo-native work, structured task execution |
| Gemini | Gems, uploaded files, Drive-linked files, saved info | Compact domain-specific context packs | Turning saved info into team memory | Google Workspace-heavy teams |
| Cursor | Rules plus pinned docs and memory layers | Local coding guidance and editor-native patterns | Treating IDE memory as the canonical PM system | Day-to-day execution inside the editor |


Claude

Repo-native and agent-oriented. Treat CLAUDE.md as a short, durable root that routes into _knowledge/INDEX.md. Move long procedures into skills and push exploratory work into subagents so the main context window stays clean.

  • Keep CLAUDE.md short; durable knowledge lives in normal markdown files.

  • Use skills and commands for repeatable workflows.

  • Use subagents and MCP for task isolation and live tool access.

Codex

Treat AGENTS.md as the project-level contract for how the agent behaves inside the repo. Use .codex/skills/ for reusable workflows. Do not rely on chat history as durable memory.

  • Shared instructions in AGENTS.md; nested scopes for deeper directories.

  • .codex/skills/ for repeatable PM jobs like PRD review or roadmap critique.

  • Best fit when context, tasks, and outputs all live in plain text alongside the code.

Gemini

Best suited to Google Workspace-heavy teams. Use Gems with uploaded and Drive-linked files to carry compact, domain-specific context packs.

  • Keep each Gem scoped to one domain pack rather than a catch-all.

  • Use uploaded and Drive-linked files as the durable context layer.

  • Never turn saved info into team memory; it is personal, not canonical.

Cursor

Strongest when it sits on top of a repo that already contains clear context. Use Cursor rules to steer local implementation behavior inside the editor.

  • Keep rules focused on implementation patterns, not product knowledge.

  • Pin the shared context repo so the IDE reads from the same source as everyone else.

  • Never treat IDE memory as the canonical PM system.

The shared pattern across all tools

Own the knowledge outside the model. Then project it into each tool in the format that tool handles best.

Getting Started

Step 1: Foundation

Create a Git repository for product context, write a product one-pager, add a minimal AGENTS.md or CLAUDE.md, and define the initial _knowledge/ structure.

Step 2: Knowledge base

Create _knowledge/INDEX.md, then write your ICP, glossary, roadmap summary, and team conventions, and add the first decision log entry.

Step 3: Skills and playbooks

Identify the three most repeated AI tasks, create the first 3 to 5 skills or playbooks, and add at least one reusable template.

Step 4: Cross-tool setup

Adapt the shared knowledge layer into the tools your team actually uses, test the same product question across Claude, Codex, Gemini, Cursor, and any other relevant tools, then compare where outputs diverge.

Step 5: Review and operating cadence

Share the repo with the team, define ownership for the most important files, set a weekly or monthly review rhythm, and document the first context improvement based on real friction.

Execute and refine

When PMs drive the standards, the structure, and the operating cadence, the quality of the context stops being optional and starts being a product advantage.

Every function should invest a small amount of time to keep the shared context accurate: sales updates the ICP validation, support updates recurring issues, engineering updates constraints, and marketing updates messaging.

Those small, routine updates are what make the next tasks faster and better, because the agent starts from the same current truth as the team.
