If you already run your work through circles, roles, and consent-based governance, this set-up guide is for you. It walks through every step, the first prompts to run, and how to hand a role over to an AI agent once you are ready.
This is written for existing Nestr customers. It assumes you have an active workspace, at least one circle with populated roles, and a current rhythm of tactical and governance meetings. If any of that is missing, the companion guide on getting started with role-based work covers the one-hour workshop that gets you there first.
Everything else here is the how.
The Nestr MCP supports two fundamentally different ways AI can work inside your organisation. The distinction is not technical. The connector is the same, the tools are the same. What changes is the level of autonomy.
In the first mode, you are the role-filler. You prompt Claude, and it reads and writes your Nestr workspace on your behalf to help you do your own role faster. "Prep me for the Product Circle tactical." "Summarise the last three comments on that project." "Draft a proposal I have been thinking about." "What's the highest-impact action I can do right now?" You stay in the driver's seat. Every action is triggered by you.
This mode is useful from day one and it stays useful. Most people who start with the MCP live here first, and most of them keep using it this way even after they have AI-energised roles in the team.
In the second mode, an AI energises a role itself. Not helping you do your role, but actually filling a role the way a human role-filler would. The agent reads its role's purpose, accountabilities, domains, policies, and skills, and acts accordingly.
Here is the important part, and it is the thing most articles on this topic get wrong: the same role rules apply to an AI role-filler that apply to a human role-filler. Focus on your accountabilities. Respect other roles' domains. Process tensions through the normal tactical and governance process. Post updates visible to the circle. The only difference is the entity behind the role.
The consequence matters. A role energised by an AI does not need special handling. If the role is well-defined, the agent works. If the role is vague, the agent drifts, exactly the way a human in a vague role would drift.
When you want to test an AI-energised role on a small scale without the whole circle having to adjust, a comfortable pattern is to split that role into a small circle with two or three sub-roles and let an AI energise one sub-role while you keep the others.
This is a testing pattern, not a requirement. You retain the overview role, and the AI gets a sub-role with a narrow enough scope that you can tell quickly whether it is working and adjust. Once you are comfortable, it can energise more roles. The safe-testing approach is covered in more depth in how to safely start experimenting with AI agents in your team.
Open Claude at claude.ai or in Claude Desktop. Go to Settings, Connectors, Add custom connector. Name it Nestr. Set the Remote MCP URL to https://mcp.nestr.io/mcp. Click Add, then Authenticate. Claude opens a browser tab, you sign in to Nestr, you grant access, and the connector activates. Both web and desktop share the same connector once authenticated. That's it!
For other clients (ChatGPT, Gemini, Cursor, VS Code, Claude Code, Copilot Studio), the step-by-step instructions live on mcp.nestr.io.
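For clients that are configured through a JSON file rather than a settings screen, the entry is usually a few lines. This is a sketch, not a verbatim config: the exact key names (such as "mcpServers" and "type") vary by client, so confirm against that client's page on mcp.nestr.io before pasting.

```json
{
  "mcpServers": {
    "nestr": {
      "type": "http",
      "url": "https://mcp.nestr.io/mcp"
    }
  }
}
```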
Two sensible choices, each serving a different purpose.
OAuth is the right default for individual use. The agent inherits your permissions inside the workspace. It can only access what you can already access. If you leave a circle, the agent loses access to it too.
API keys give full workspace access regardless of the user running the agent. This is the right choice for scheduled automations, cloud Routines, and anything that needs to run without a human present. API keys live in Settings, Integrations, Workspace API access. Treat them as production credentials and rotate them on a cadence.
Start with OAuth for your own experiments. Only move to an API key when you are running scheduled tasks that need to fire when you are not at your desk.
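To make the API-key mode concrete, here is a minimal sketch of what a scheduled job talking to a remote MCP server looks like at the wire level. MCP messages are JSON-RPC 2.0, per the Model Context Protocol specification; everything else here is an assumption to verify against Nestr's integration docs: the `NESTR_API_KEY` environment variable name, the bearer-style Authorization header, and the idea of calling the endpoint directly at all. In practice a scheduled Routine or an MCP SDK handles this plumbing for you.

```python
import json
import os
import urllib.request

MCP_URL = "https://mcp.nestr.io/mcp"  # endpoint from the setup section above


def mcp_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request body, the message shape MCP servers expect."""
    body = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        body["params"] = params
    return body


def call(method, params=None):
    """POST one request with an API key. Header name is an assumption; check
    Nestr's Workspace API access docs for the real auth scheme."""
    req = urllib.request.Request(
        MCP_URL,
        data=json.dumps(mcp_request(method, params)).encode(),
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json, text/event-stream",
            "Authorization": f"Bearer {os.environ['NESTR_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__" and "NESTR_API_KEY" in os.environ:
    # Ask the server which tools it exposes; a scheduled job would go on
    # to call one of them. Runs only when a key is actually configured.
    print(call("tools/list"))
```

The point of the sketch is the separation: the key lives in the environment, not in the prompt or the code, which is what makes rotating it on a cadence painless.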
Once the connector is active, ask your LLM: "What circles do I have in my Nestr workspace?"
If the list matches what you see in the Nestr app, you are connected. If not, remove the connector and authenticate again, this time completing the OAuth flow in the browser. Ninety percent of failed first setups are an OAuth flow that was started and not completed.
Resist the urge to start with a complex task. Run these four in sequence. Each tests a specific capability, and each one often surfaces a small governance-clarity issue worth fixing before you go further.
Prompt one: capture a tension. Tests writing into your workspace and routing against your structure. If the routing looks wrong, the fix is usually a clearer role purpose rather than a cleverer prompt.
Prompt two: prepare a tactical meeting. Tests retrieval across a circle. Often surfaces stale projects as a useful side-effect. The full tactical meeting format is covered in the practical guide to tactical meetings.
Prompt three: add to the daily plan. Tests that the agent respects role boundaries when writing. If it adds the item to the wrong role or to a circle instead of a role, you have found a small clarity issue worth cleaning up.
Prompt four: draft a governance proposal. The agent drafts a proposal. Adoption still runs through consent-based governance. Authority to adopt or object stays with the circle. See the practical guide to governance meetings for humans and AI agents for the full flow.
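Put together, the four prompts can be as plain as the wordings below. These are illustrative, not required phrasings; the circle, role, and task names are placeholders from your own workspace.

```text
1. "I have a tension about our release notes being late. Capture it in
   Nestr and route it to the role it belongs to."
2. "Prep me for the Product Circle tactical: open projects, stale items,
   and proposed next actions."
3. "Add 'review the pricing draft' to today's plan under my Marketing role."
4. "Draft a governance proposal adding an accountability for publishing
   weekly metrics to the Marketing role."
```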
You are ready for the next step when using the MCP as your assistant starts to feel like a chore: you are prompting for the same things every day, the agent keeps doing adjacent work that should be its own role, or a slice of your work is well-specified enough that the agent could take the first pass without you starting every run.
The core move is simple: identify a role, confirm it is well-defined, and have an AI energise it. Whether that is a role you currently fill, a role that is vacant, or a sub-role created as a testing step (see "split a role into a small circle" above) is up to you (and of course up to the Circle Lead or another role responsible for assigning roles).
Because an AI energising a role follows the same role rules as a human, the question "what does the role need before an agent can fill it?" is really "what does the role need to be well-defined?" The answer is the same as it has always been in role-based work. The requirements are simply more visible when it is an AI in the role, because a human can paper over gaps with intuition and an AI cannot.
Work through this list. Skipping any of it produces a role-filler, human or AI, that drifts, asks too much, or quietly does the wrong thing.
None of the above is specific to AI. All of it is role-based work done well. If you read the list and thought "most of my roles do not have that," that is the honest, useful finding. The MCP did not cause it. The MCP only made it visible.
A role-filler activating to do work goes through a predictable sequence. Humans do it intuitively. For AI role-fillers, writing the sequence down on the role (as a skill, which you can add directly in Nestr under the "Other items" tab of a circle or role) means the agent has something to follow on every activation.
Below is an example pattern with seven steps:
Your first version of this sequence can be half a page. You can write it as a skill on the role, or as a skill on the circle if more than one role will use it. The point is that it exists in writing and the agent can read it. Evolve it as you notice the agent making the same mistake twice.
Three options, differing in one thing: whether your machine needs to be on.
Claude Cowork desktop scheduled tasks are the easiest way to start. Inside the Claude desktop app, go to the Scheduled section in the sidebar, or type /schedule in a Cowork session. Pick hourly, daily, weekdays, or weekly. Each run is a fresh Cowork session with full MCP access, so your Nestr connector is available. Caveat: tasks only fire when the desktop app is open and the computer is awake. Skipped runs execute when the machine wakes.
Claude Routines run on Anthropic's infrastructure, independent of your machine. Set them up at claude.ai/code/scheduled or by using /schedule inside Claude Code. Minimum interval is one hour. Each routine can expose an HTTP endpoint for external triggers. This is the right option for any role that needs to run reliably overnight or over the weekend.
Claude Code /loop is session-scoped and terminal-based. Useful for developers polling a deployment; not the right tool for a role that runs for months.
For most Nestr customers experimenting with their first agent-energised role, start with Cowork desktop scheduled tasks on a daily or twice-daily cadence. Graduate to a Routine once the role's output is stable enough that you do not need to watch it run.
Pattern one: per-role activation. The most common and the easiest to reason about. One scheduled task, one role, one cadence. The prompt activates the role, points at the boot sequence, and stops.
Everything the agent needs is either in the prompt or in the workspace. No role-specific instructions live in the prompt itself. Those live on the role, where they belong, and where you can evolve them once and have all runs pick up the change.
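A per-role activation prompt following this pattern might look like the sketch below. The role and circle names are placeholders, and it assumes a Boot Sequence skill exists on the role as described above.

```text
Activate the Research Analyst role in the Product Circle in Nestr.
Read the role's purpose, accountabilities, domains, policies, and skills,
then follow the Boot Sequence skill on the role.
Take the single highest-priority next action for this role, post a Role
Update Protocol comment on the role describing what you did and your
confidence, and stop. If there is nothing to do, post a short
"queue empty" update instead.
```

Notice that nothing in the prompt describes how to do the work; that lives on the role, so every scheduled run picks up governance changes automatically.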
Pattern two: circle sweep. One scheduled task, one circle, picks the role that currently has the highest-priority next action. Use this when several (or all) roles in a circle are AI-energised and you want an hourly pulse that activates whichever one is most needed.
The circle sweep is more sensitive to governance quality than the per-role activation. If priorities across roles are not explicit, the agent will pick one that feels reasonable to it and not necessarily the one you would have picked. Start with the per-role pattern and move to the sweep only when a circle has two or three mature AI-energised sub-roles.
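The sweep variant differs only in scope. Again illustrative, with the circle name as a placeholder:

```text
Review the Product Circle in Nestr. Across its AI-energised roles, find
the role with the highest-priority next action. Activate that role,
follow its Boot Sequence skill, take that one action, post a Role Update
Protocol comment on the role, and stop.
```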
Three things, in order of importance.
Read the Role update comments. They are a direct window into how the agent is reasoning about the work. If every run has high confidence and the output is good, you have a calibrated agent. If every run has high confidence and the output is wrong, the skill is misleading. If confidence bounces between medium and low, something in the role's context is ambiguous. Look for the pattern.
Check the status-vs-tension test. Count the tensions the agent raised in the first week. If it is zero, the agent is probably swallowing signal it should surface. If it is more than a handful, the agent is dumping run logs into the tension inbox. Tune the skill.
Notice what the agent stopped doing. Scheduled agents that keep doing things even when the queue is genuinely empty produce work the circle does not need. A well-configured agent runs, finds nothing to do, posts a short "queue empty" update, and stops. If your agent never posts that update, it is working harder than the circle needs.
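The three checks above assume updates with a recognisable shape. One possible Role Update Protocol comment is sketched below; the field names are an assumption, so use whatever convention your circle has actually written down on the role.

```text
Role update: Research Analyst
Did: summarised three new competitor announcements into the research log.
Confidence: medium (one announcement had no primary source).
Tensions raised: 1 (no visibility into the Sales circle's pipeline notes).
Next: none; queue empty after this item.
```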
Do this in one sitting and you will be running your first agent experiment before the afternoon is out. Tick items as you complete them.
When all twenty items are done, you are running your first agent experiment. Commit to raising every tension you notice in your next tactical. A single loop of that kind is worth more than a month of reading.
Three categories, in this order.
Context tensions. The agent will ask, implicitly or explicitly, for information it cannot reach: access to a tool, visibility into another circle's work, a policy it is not sure applies. Each is a governance signal. Some resolve operationally (grant read-only access, clarify a policy). Some surface a structural question (should this sub-role have its own domain?). Both belong in the normal process. Context engineering for AI agents covers why context is the dominant constraint on agent performance.
Boundary tensions. The agent will occasionally produce work adjacent to its role rather than inside it. A research analyst that starts drafting strategy. A meeting-prep helper that starts summarising governance. The response is not to tighten the prompt, it is to tighten the role definition. A role with explicit accountabilities and policies produces an agent that stays in its lane.
Skill tensions. Over time the patterns settle into recurring shapes: how a weekly update looks, what a good research brief looks like for your circle. These belong in the skill layer, not as loose prompt conventions. The do's and don'ts for deploying your first AI agent covers the skill-layer pattern in more depth.
A Nestr account, an AI assistant that supports MCP (Claude is the most common starting point, but any will do), and about five minutes for the OAuth setup. You also need at least one circle with populated roles. Without that, there is very little for the agent to work with. If your workspace is new, run the role-mapping workshop first, or ask the MCP for help directly.
It is a fundamental difference. As an assistant, the AI helps you fill your role. You are still the role-filler, you are still prompting, and you are still accountable for the work. As an agent energising a role, the AI fills the role itself, following the same role rules a human would: purpose, accountabilities, domains, policies, skills. One mode is a tool you use. The other is a colleague in the circle.
OAuth for personal use and most team rollouts. The agent inherits your permissions. API keys for scheduled automations, Claude Routines, and anything running without a human present. API keys grant full workspace access, so treat them as production credentials and rotate them on a cadence.
No. The agent operates with your permissions and can draft proposals and surface tensions, but structural changes still flow through your consent-based governance process. A proposed accountability change becomes a tension, enters the governance meeting, and is adopted only through the normal process. The agent has no more authority than any other role-filler.
When your assistant use starts to feel like a chore. Concretely: when you are prompting for the same things every day, when the agent keeps doing adjacent work that should be its own role, or when a task in your workflow is well-specified enough that the agent could take the first pass without you starting every run. Most teams reach this point within two to three weeks of active MCP use.
A purpose written as a future state, accountabilities written as recurring -ing-verb work, at least one explicit domain where boundary clarity matters, a skill of 300 to 500 words, and a convention for structured updates (the Role Update Protocol). Anything less and the role-filler, human or AI, will drift or stall.
Cowork desktop scheduled tasks for the first few weeks. Easy setup, works with your existing Nestr connector, you can watch runs happen in the app. Move to Claude Routines when you want the agent to fire reliably overnight or when you are not at your machine. For developer-heavy setups, Claude Code /loop is useful for in-session polling but not for durable automation.
Short, workspace-pointing, role-scoped, and deferring to the role's own skill file. The prompt names the role and circle, points at the Boot Sequence, asks for one action, requires a Role Update Protocol comment, and stops. Everything specific to the role lives on the role, not in the prompt. That is what makes the setup maintainable as the role evolves.
Yes. The MCP works with whatever self-organisation concepts you use: circles, roles, accountabilities, domains, policies. Whether that is Holacracy, Sociocracy 3.0, classical sociocracy, Teal, or a custom blend, the agent reads the structure you have put in place.
Yes, in two ways. OAuth authentication means the agent only sees what you see. Permissions excluded from your access are excluded from the agent's. You can also choose which MCP tools to enable or disable in your AI client, which restricts which Nestr actions the agent can take. Both controls are user-facing. Neither requires Nestr admin involvement.
Queries and the relevant workspace context are processed by your chosen AI provider (Anthropic for Claude, OpenAI for ChatGPT, and so on). The MCP itself does not store your conversations. Review each provider's privacy policy before scheduling unattended runs, and match provider choice to your jurisdiction if data residency is a hard requirement.
What is Agentic AI? A Complete Guide to AI Agents for Organisations
AI Agent Governance: The Organisational Readiness Gap
How to Safely Start Experimenting with AI Agents in Your Team
Do's and Don'ts for Deploying Your First AI Agent
The Tactical Meeting: A Practical Guide
The Governance Meeting: A Practical Guide
Context Engineering for AI Agents
Getting Started with Role-Based Work
Nestr MCP documentation and setup for all AI clients
Model Context Protocol specification
Anthropic announcing the Model Context Protocol