Talking to Machines, Part 3: Designing the AI’s World with Context Engineering
How to approach context for reliable, cross-conversation collaboration with AI.
Talking to Machines is a three-part series exploring the rich changes in interacting with AI and what that means for improving how we work with it.
In this series, I’m looking into practical ways of bringing generative AI into our workflows, drawing on my perspective as a human-centered designer and my experience leading product design for teams, products, portfolios, and global information systems.
Part 1 offered a conceptual anchor for where we are in AI’s evolution, a bird’s eye view of how LLMs work, and a foundation for us to explore prompt and context engineering in the next two parts.
If you haven’t read Part 1 yet, you may want to start here:
Part 2 covered prompting in three sections: first, where prompting began. Second, what prompt engineering actually is and how to think about it with simple, accessible analogies. And third, a deeper dive into how to get better at prompting over time. It includes useful ways of thinking, some practical techniques, and where human-AI collaboration is headed.
Here, Part 3 carefully unpacks how context engineering is becoming a core competency for people who use AI — one that is important for designers, builders, PMs, writers, and strategists who want to make AI truly useful in their workflows.
This piece will immediately give you a more intuitive sense of AI by answering two essential questions:
What does the machine “know” and “remember” when we talk to it?
How can we influence machine attention and reasoning in practical ways through context engineering, getting us to better outcomes that meet our needs?
If you are a leader or builder who wants to think and work smarter with AI, or an engineer who wants a stronger base of human-centered understanding to inform your approaches, please read on. Feel free to share this with anyone you think will get value from it too.
Table of Context:
These are exciting times
We’re off to the races with generative AI
Context engineering is the next evolution
How we can “think with” AI as peer collaborators
Approaching context engineering as a craft
A three-layer framework for understanding machine context
More is not better: Dynamically managing context windows
Don’t feed the animals (too much)
12 non-technical tips for skillful context engineering
Context engineering is the art of thoughtful, systematized metacognition
Please subscribe to get future essays and support this work.
Or simply give this post a like :)
These are exciting times
We’re off to the races with generative AI
AI is moving at a blistering pace, but that shouldn’t scare you off. The floor is getting lower with more accessible capabilities while the ceiling gets higher. It’s a great time to lean in.
With some basic understanding of how AI works and what that means for how to approach it, anyone can start using it effectively for basic tasks. With just a bit of skill, it quickly becomes a turbocharged research and reasoning tool, allowing you to rapidly expand your information views and converge on actionable structure, especially in areas where you already have expertise. Sprinkle in some curiosity, and you’re off to the races.
Working from there, the approaches that help you collaborate with generative AI are largely ones you already know. Framed in the right way, skills you’ve intuitively and intentionally grown over your life and career so far are immediately transferable. You simply need to experiment with bringing them into your conversations with AI, using deliberate prompting techniques.
Generative AI follows along increasingly well — until it doesn’t. Later in this post, we will get a bit under the hood into context rot to understand more about filling the context window with just the right information. When the AI drifts, it can be anchored and reminded of topics, goals, frameworks to follow, and anything else that helps you get to outputs that serve your specific needs.
Or you can start anew to refresh the context. It’s all up to you how to influence what is “front of mind” and “seen” by the machine.
It’s the same context-setting and reorienting you do facilitating any collaborative conversation, just with a gleeful Golden Retriever on the other side of the table. And how do you turn your Golden Retriever into an Australian Shepherd? That’s where the road into complexity starts, paved with context.
To reinforce a point from Part 2, you can still get far using prompts alone. Approached deliberately, prompting is itself a surface level of context engineering. But the second you want the efficiency of repeatable systems, context that lives across projects and connected conversations, workflows that can be referenced and reused, and abilities like searching custom corpuses or connecting to external tools — that’s the full breadth of context engineering.
As a leader or manager guiding priorities and working with teams, or as a builder rolling up your sleeves to get the system built, you’ll want to learn the concepts and tools available to reap the benefits of this growing discipline. The true power of generative AI gets unlocked when you start designing and building up context outside of the prompt.
We’ll start with a definition and a real-world example to ground us. Then I’ll share a mental model for thinking about the layers of context, walk through specific common methods of context setting today, and explore how it all informs this emerging discipline moving forward.
Context Engineering is the next evolution
Put simply, Context Engineering is the craft of shaping what the AI “knows” at the moment of interaction. It expands on Prompt Engineering, determining what information is available to the model, when, and in what form, as you prompt.
As a discipline, it involves designing and building dynamic systems that optimize instructions, available tools, supporting environment, and informational context for LLMs, so they can produce outputs within and across projects and conversations that stay more consistent and connected for the user.
When done well, it augments the original training of an LLM, ensuring the AI has the right information and tools, in the right format, at the right time to create outputs that meet your needs. And when combined with an understanding of AI “thinking” concepts like context windows and context rot, and what they mean for your own way of engaging, it can make for much more intuitive human-machine collaboration in your own workflows.
The craft enables us to move beyond the ephemeral nature of a single conversation, into systems of conversation that lead towards a common goal. We can get more specific, working from proprietary knowledge and curated information about the specific task we want done.
Our preferred approaches, built on our own expertise and experience, become available to the AI in-the-moment. And in systems built for teams, those resources and workflows become more accessible to everyone, bringing more consistency and shared understanding. Established ways of working and thinking are available directly in-the-flow.
As we do that, prompt formats can become simpler and more focused. They don’t need to hold as much higher-level intent or guidance. Prompting as a skill then becomes more accessible across roles and skill levels as the bar of complexity is lowered. More of your team can actively participate, working from their role expertise, while keeping track of the overarching thread.
Context Engineering is here to stay. In a big way, it’s integrated knowledge management with the potential to be the connective tissue in your personal work and across teams.
For an effective and focused example of this, check out NotebookLM. There’s a reason it’s getting attention in AI communities. Collecting a reference corpus to query is an advanced aspect of context engineering. Google productized that, adding tools on top to explore and synthesize the outputs. Products like this will continue to appear and evolve. Keep an eye out!
We are at the very beginning.
How we can “think with” AI as peer collaborators
As the methods of context engineering become embedded in more products and features, what was once complicated engineering is gradually becoming something any AI user can do. The principles are still fundamentally complex, just abstracted and made accessible to you.
As this “naturalization” of AI collaboration happens via simplification and conversationality, the skills that logically become more important for operators are those like critical thinking, problem solving, and systems thinking. These skills shape how we “think with” AI to solve problems more efficiently over time, gain knowledge through our efforts, and habitually capture reusable information, structures, and solutions that we can recursively apply and adapt.
This should sound familiar — the same challenge exists managing initiatives and projects as they move forward. Most modern work is filled with parallel conversations that create, add, or challenge existing states, understanding, approaches, and work in progress. In these parallel work scenarios, the complexity inevitably snowballs, driven by evolution through discovery and experimentation.
Entropy ensues unless deliberate action is taken to contain it. A vital central process emerges: keeping things collected, clean, sensible, connected, and available to influence any conversation at hand. It’s the plumbing that allows complex work in learning organizations to function and scale without losing alignment. In the world of AI, context engineering systematizes all of that to enable human-machine collaboration.
In the next section, I’ll make the picture a bit more tangible by offering a three-layer mental model for thinking about context engineering. Building on a theme so far in this series, the base layer of context is something you are very familiar with — the conversation.
Approaching context engineering as a craft
A three-layer framework for understanding machine context
The quality of structures, processes, and knowledge made available to the machine through engineered context can determine the success or failure of an AI use case, especially as complexity increases. Great context engineering enables AI to more readily “think with you” in your personal workflows. In more robust service implementations, it can support everything from fully automated processes, to touch-points with internal roles, to front-line interactions with your customers.
As AI interactions gain specificity, the outputs become more reliable and trustworthy, unlocking momentum for you. Here, I’ll focus on the individual generative AI use case (e.g., ChatGPT) for simplicity. The approach also applies to more complex agentic systems, so it serves as a great foundational mental model.
To better understand context engineering, it helps to separate context into three layers, drawing parallels to how we work in teams. Understanding these layers will enable you to influence them more intentionally, so you can bring in AI more seamlessly.
Here are the three layers, going from the simplest to the most complex:
Context layer 1:
Conversation
This first layer is your live interaction and thread-level dialogue — all via the prompt. This layer is where you execute the task in-the-moment and interface with other layers, such as calling on specific tools or pre-saved context chunks, as needed. In a workshop, this would be much like a facilitator (you) bringing in relevant situational context, starting a discussion with shared understanding, asking for relevant thoughts to add, selecting a generative exercise, and facilitating the ideation and convergence.
Examples in this layer include live prompting, reframing questions, surfacing previous thread references, iterating on outputs in real time, and memory of conversation history.
Context layer 2:
Project
This second layer contains what is selected and made available to the project at hand. Relevant information and approaches are curated as references to inform and guide execution. If you imagine setting up for a workshop, this would mirror having a clearly articulated intent, and providing the right materials, background information, and exercises for thinking and problem solving.
Examples in this layer include templated prompts, a well-articulated goal or audience, reference documents or notes to influence approach, and examples to guide output formats.
Context layer 3:
Implementation
This third layer contains the infrastructure, rules, and collected, categorized information that determine how context can be made available and constrain what it can include. It includes the functionality and behavior of the base model, along with any augmentations specific to the AI product you’re using. It’s akin to the ways of working, workflows, resources, and knowledge made available to enable people and teams in an organization as they get projects done.
Examples in this layer include system-level instructions and prompts, connected third-party tools and APIs, custom memory systems, and RAG (retrieval-augmented generation) setups.
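To make the layers concrete, here is a minimal sketch in Python of how a system might assemble all three into a single request. Every name in it (build_context, project_refs, system_rules, and so on) is hypothetical and purely illustrative; real products wire this together differently under the hood.

```python
# A minimal, hypothetical sketch of assembling context from the three layers.
# Names and structure are illustrative, not any product's real implementation.

def build_context(user_prompt, conversation_history, project_refs, system_rules):
    """Assemble the full context the model sees for a single turn."""
    parts = []

    # Layer 3 (implementation): system-level instructions and rules
    parts.append(f"SYSTEM RULES:\n{system_rules}")

    # Layer 2 (project): curated references, goals, and examples
    for ref in project_refs:
        parts.append(f"PROJECT REFERENCE ({ref['name']}):\n{ref['content']}")

    # Layer 1 (conversation): the dialogue so far, plus the live prompt
    for turn in conversation_history:
        parts.append(f"{turn['role'].upper()}: {turn['text']}")
    parts.append(f"USER: {user_prompt}")

    return "\n\n".join(parts)

context = build_context(
    user_prompt="Draft an outline for the kickoff deck.",
    conversation_history=[{"role": "assistant", "text": "Goal noted: Q3 launch."}],
    project_refs=[{"name": "audience", "content": "Mid-market design leads."}],
    system_rules="Be concise. Use the team's tone-of-voice guide.",
)
```

Notice the ordering in this sketch: the most stable context (implementation) comes first, and the live prompt sits last, closest to the moment of generation.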
Want a full breakdown of methods by layer and skill level?
I’m publishing that soon as a reference post and free download.
Subscribe to get it in your inbox.
Now that you have those layers in mind, let’s connect a few concepts that will help you keep context manageable and clear throughout your conversations. As we’ve discussed and you’ve likely experienced, context has a tendency to drift towards entropy and make the AI befuddled.
Navigating that dynamic through the conversation layer is an art of understanding two interplaying concepts — context window and context rot. Let’s walk through how those concepts interact in your conversation layer.
More is not better: Dynamically managing context windows
As we covered in Part 1, each conversation you have with generative AI is constrained by a context window — a limited-capacity information store that is unique to each conversation and resets when you start a new chat.
In essence, it’s your AI’s memory in the conversation layer. It’s filled with a mix of persistent memory (e.g., user preferences, ongoing projects), context from dialogue so far, and any manually added context from the project or implementation layers.
Uploaded project files generally serve as a “memory bank” of reusable information that must be retrieved on demand by prompts in the conversation layer. This approach saves window space while improving convenience and consistency.
Most models today also have conversation layer memory — a way of reducing the window space used by prior dialogue, with summarization techniques that preserve key context in less space.
To collaborate most effectively with AI, you need to understand these levers and be aware of how they impact the current state of the context window. It’s your way of shaping what your AI “knows” at any given moment. What you decide to bring in and what’s already there has a direct influence on outputs, and managing the context state is a bit of an art.
It’s dynamic and largely intuitive as you sense and respond. As more context gets added, there is more in the conversation layer for the AI to make sense of. As the window becomes full, earlier context will get pushed out to make room for any new context. You are in a constant juggling act managing what is known in order to influence output quality.
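If it helps to picture the mechanics, here is a simplified sketch of that juggling act. The token counter and summarizer below are crude stand-ins invented for illustration; real systems use model-specific tokenizers and far more sophisticated compression, but the core trade-off is the same: new context pushes out old.

```python
# Simplified, hypothetical sketch of a rolling context window.
# count_tokens and summarize are stand-ins, not real library calls.

def count_tokens(text):
    # Crude approximation; real systems use a model-specific tokenizer.
    return len(text.split())

def summarize(turns):
    # Stand-in; real systems compress old turns with an LLM-based summarizer.
    return "SUMMARY OF EARLIER CONVERSATION: " + " ".join(t[:40] for t in turns)

def fit_to_window(persistent_memory, turns, window_limit=4000):
    """Keep persistent memory, then as much recent dialogue as fits the budget."""
    used = count_tokens(persistent_memory)
    kept = []

    # Walk backward from the newest turn until the budget runs out.
    for turn in reversed(turns):
        cost = count_tokens(turn)
        if used + cost > window_limit:
            break
        kept.insert(0, turn)
        used += cost

    # Older turns are not simply lost; they can be compressed into less space.
    dropped = turns[: len(turns) - len(kept)]
    if dropped:
        kept.insert(0, summarize(dropped))

    return [persistent_memory] + kept
```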
There’s currently no direct feedback on the state of the context window, though maybe there should be.
We make do.
Don’t feed the animals (too much)
There’s a helpful and memorable analogy I like to use — the context window works a lot like your stomach.
If you feed it poorly, too much, or too many different things, it might not go so well. There’s a “just right” you need to feel out that’s a balance of quality and quantity.
With a poor context diet, you’ll start seeing a degradation in response quality known as context rot. Some of your choices will accelerate this decay, while others will help delay or prevent it.
Conversations that get too long or meandering, or too multi-topic and multi-intent, won’t go as well as focused chats “fed” carefully. If you front-load with dense context, there will be less room for a longer conversation because more of the window is filled from the start.
Going back to how LLMs work, context rot makes sense. It happens as the model’s probabilistic next-word generation loses track of in-the-moment intent within our conversation layer. The AI response loop, driven by the prompt and informed by the full context window, can’t accurately parse what you’ve provided into clear, actionable intent.
If you’ve fed the model too high a quantity of words, too much variety or unclear intent, or lost vital anchors from the early conversation simply due to conversation length, the context is not as clear. The AI’s next-word prediction has a larger potential response space that is less constrained, leading to vague and less relevant outputs.
Your role, as the facilitator, is to carefully provide context that is succinct, focused, and relevant. Then, through the conversation, to intuitively sense the quality of responses to adjust as needed.
We touched on simple techniques like chunking and re-anchoring conversations in Part 2, both of which help navigate the challenges of window limits and context rot. In the next section, we’ll get into a larger list of specific actionable tips.
12 non-technical tips for skillful context engineering
Advanced context engineering often involves tools in the implementation layer like RAG, APIs, or model fine-tuning — all of which require coding. This piece stays intentionally grounded in non-technical methods. If you do want to get into advanced skillsets, there are plenty of resources out there, and the foundation you are building here will help you greatly in that pursuit.
Here are 12 massively useful tips I and others have picked up. You can begin applying them yourself today and pass them on to help others out as well.
This list will help you manage context rot, improve conversation quality, and build up reusable context that compounds the value of your work over time.
Start fresh for new tasks
Even if the theme is related, start a new conversation when the task intent shifts to a new output or outcome. Given that context windows operate at the conversation layer, restarting helps you maintain clarity by removing unnecessary conversation context. A fresh start can be all it takes.
Enter every conversation with a clear intent
Treat each conversation like a working session with a specific goal, even if that goal is divergent thinking to go wide on the breadth of a topic. Multi-threaded conversations tend to go poorly. If you’re juggling too many objectives or being vague, the AI will reflect that, resulting in scattered logic and weaker responses.
Ask what the AI already knows
When you start a new conversation, the AI usually has bits of context automatically there. At the start of a session, especially for ongoing projects, ask something like: “What context do you currently have about what I’m doing?” This reveals memory, uncovers assumptions, and helps you adjust before diving in. It can also be a helpful trick mid-conversation if you sense response drift and want to fix or extract context to start fresh again, or if you want to try shifting gears in the same conversation.
Minimize input to maximize clarity
Remember the stomach analogy — don’t feed the context window too much. When bringing context into the conversation layer, only include what’s essential to the task and nothing more. You’re optimizing for relevance over coverage to protect the “clarity” of the machine’s thinking. Treat the prompt and other inputs like a focused working brief with no extra fluff.
Re-anchor your conversations
Sometimes you’ll feel you are in a good flow and you don’t want to start fresh, or you may want to attempt to rein in some context rot. That’s okay — just be deliberate about it. If you’re mid-thread, ask what the AI knows, correct it where necessary, and restate the essentials: what you’re doing, where you are in the process, and what outcome you’re working toward. This does help keep the model aligned, though you need to trust your gut if you simply need a reset.
Create and reuse your prompting approaches
Save approaches and snippets you use often, such as questions, techniques, and machine guidance that fits your communication style and expertise. It doesn’t matter if you use Google Docs, Apple Notes, or something more advanced; just consider capturing them as a reference for yourself. Treat them like building blocks or templates you can quickly assemble for new conversations, or to guide responses. And share them so other people can benefit!
Break work into modular parts
Divide complex tasks into smaller pieces that can be done in sequence or across parallel conversations. This approach isn’t just a personal productivity hack — it gives you more control over context, improves output quality, and makes outputs easier to collect, reuse, and refine.
Use parallel threads with purposeful hand-offs
Split large or multi-part tasks across separate threads. Use the structured output of one as a clean input for another, just like handing off outputs of subtasks within a project team. This strategically isolates context windows and improves focus and modularity in each step.
Capture structured outputs as you go
Context compounds if you are intentional about it. Your conversations contain thinking worth capturing, building on, and refining, especially early on. You should always be collecting and organizing useful project knowledge as references — for yourself, the machine’s own context, and any current or future teammates. Save summaries, drafts, frameworks, and definitions as you work. These give you a safety net against context rot while providing reusable components for future tasks.
Build and maintain project references
Create short, reusable blurbs or reference docs for your core projects, team structures, or audience needs. Pull these into project and conversation layers as needed instead of rebuilding context from scratch every time. These are all the same things you have likely been collecting or using in higher performing teams already, especially those that build knowledge and structure as they go. You don’t need to start from scratch!
Repurpose old assets to create new context
Your old strategy decks and creative briefs are your new context assets. Borrow structures, approaches, and content from these references. Be clear and concise when bringing them into your context for the AI, then save tightened versions in project files to reuse and refine over time. Aiming for brevity sharpens intent and amplifies clarity.
Use custom instructions to set defaults
Custom instructions let you define tone, role, formatting, and priorities once, then apply them across all threads. Use this like a “default prompt” that reduces friction and keeps your sessions consistent. Remember that you are shaping a system of interrelated conversations with context, not just the current conversation at hand.
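To make the idea concrete, here is a tiny illustrative sketch. The send_message function and the instruction text are hypothetical; in products like ChatGPT, the built-in custom instructions feature handles this for you.

```python
# Hypothetical sketch: custom instructions as a reusable "default prompt"
# applied at the start of every new conversation.

CUSTOM_INSTRUCTIONS = """\
Role: You are supporting a product design lead.
Tone: Direct, warm, and jargon-free.
Format: Default to short sections with clear headers.
Priorities: Clarity first, then brevity. Flag assumptions explicitly.
"""

def start_session(send_message):
    # send_message is a stand-in for whatever chat interface or API you use.
    # Sending the same defaults first keeps every thread consistent.
    send_message(role="system", text=CUSTOM_INSTRUCTIONS)
```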
Want this list as a reference for your work?
Feel free to reach out — I’m working on a free download.
To wrap this up, I want to leave you with something to think about as you build or refine your own context system. Context engineering requires intention and care. There’s a real responsibility in designing systems that reverse engineer and shape how we think with AI — especially when those systems are used by other people.
Context engineering is the art of thoughtful, systematized metacognition
On the surface, context engineering is setting up reusable context and workflows to buy efficiency via repeatability. But as anyone who’s worked in operations will tell you — there’s a subtlety there. By doing this, we are shaping our thinking, our team culture, and our ways of working.
Context engineering is the art of thoughtful, systematized metacognition, for individuals and teams. How we approach context embeds our philosophies of work, reinforces culture, and shapes how we “think with” AI. It’s the environment, paths, guardrails, and deliberate open spaces. We’re designing external thinking systems that both mirror and influence how we want to think and act, in specific situations over time.
The simplest definition of metacognition I’ve seen is thinking about how you think. It’s an introspective practice of understanding how our mental patterns, strategies, and workflows shape how we make decisions and take action.
Bringing in a perspective from behavioral psychology, who we are is seen through our behavior, and shaped by how we think and feel. And in a group, such as a team, company or society, that translates up to culture — a culture is best understood by observing how people behave in different situations. For the individual, there is a constant recursive loop of solidification or shifting of “self” through action. For a group, that same dynamic shapes the evolution of shared values and collective norms over time.
Metacognition allows us to step back and observe that chain using the lens of situational sequences and pairings of thought, feeling, and action. In context engineering, we are breaking down and standardizing that which shapes our thinking and action, in pursuit of outcomes. We are guiding ourselves and others, using tools like frameworks, workflows, curated information, and framing effects, to systematically influence and shape the “why”, “how”, and “what” in our human-machine collaboration.
There’s a spectrum from rigidity to openness to think about here.
To help imagine that, think of the difference in context needed to support a robust multinational supply chain producing and distributing doorstops vs. an innovation team operating at the cutting edge of applied knowledge. Both might use context engineering to systematize their work, but with very different degrees of careful constraint.
The stark difference? The first will be leaning much more towards prescriptive precision to support quality control at scale. The second will be much looser to allow for curious exploration and intentional discovery of opportunity areas to create new value.
In the larger solution portfolios of much more mature companies, you might have pockets of each, even in the same vertical or department. It all depends on the type of decisions being made and the nature of outcomes being targeted.
The point is that we need to think about that spectrum both for ourselves and for larger context systems — where we want to be open, where we need to be rigid, where we can explore risk-free, and where we need to tread lightly. Too much rigidity in the wrong places can kill creativity and innovation, leading to slow obsolescence. Too much looseness in the wrong places can kill honed processes and quality control, risking your base structures and fundamental value.
It’s a tightrope walk with fine balances that requires understanding the context in which you are creating context.
Context engineering is an art. Design it with intent.
If you are thinking about how to use AI or set up context in your own work,
and want help turning possibilities into action,
feel free to reach out with a problem or project in mind.
Let’s see how I can help.
I want to hear from you
Thank you for spending time with this piece. I’m genuinely happy you made it here. This series comes from an intertwining set of fascinations and a joy in finding useful seeds for better thinking and making.
If you have a moment, I’d love to hear from you.
Thinking about how you currently use AI…
What was your biggest takeaway?
What approach are you thinking of doing more or introducing?
What is your tip to someone getting started?
This Talking to Machines post is the third and last of a foundational three-part series. Though I can’t promise there won’t be more posts under the same label :) This is a personal area of interest that’s constantly evolving, and so learning will inevitably continue.
If this post resonated, hitting the like button is a simple way to let me know. And if you want to follow along with future posts, or help this thinking reach more people, please consider subscribing and sharing.
Keep thinking and making, and be well.
Have a wonderful day.
Your friendly designer & innovator,
- Peter