As of November 2025, GitHub Copilot has evolved from a “smart autocomplete” tool into a full coding agent that can take on multi-step tasks. The most powerful way to steer that agent is with an agents.md file. Based on GitHub’s recent analysis of more than 2,500 public repositories, the difference between an agent that flails and one that feels like a senior teammate comes down to how you define its persona, executable commands, and boundaries.
This evergreen guide walks you through building a custom GitHub Copilot @test-agent using agents.md. You’ll learn how agent instructions work in Copilot, how to encode your testing workflows and tech stack, and how to write an agents.md test persona that actually improves coverage instead of breaking your build. The goal: a tailored test-writing agent that becomes the foundation for automating your team’s entire QA workflow.
How agents.md fits into GitHub Copilot’s agent ecosystem
GitHub now supports two closely related concepts:
- Custom agents: Named Copilot coding agents defined by Markdown “agent profiles” (for example, `.github/agents/test-specialist.agent.md`) that specify name, description, tools, prompts, and optional MCP servers. These are documented in “About custom agents” and “Custom agents configuration” (GitHub Docs, 2025).
- Agent instructions via `AGENTS.md`: Repository-level guidance files described in “Adding repository custom instructions for GitHub Copilot”. Copilot reads the nearest `AGENTS.md` file to shape how AI agents behave in that part of the tree.
In November 2025, GitHub published “How to write a great agents.md: Lessons from over 2,500 repositories,” showing that effective agent instruction files share the same patterns:
- They define a narrow, specialist role (like `@test-agent`).
- They list executable commands early (for example, `npm test`, `pytest -v`, `cargo test --all-features`).
- They set hard boundaries (for example, “never modify `src/`” or “never delete failing tests”).
- They include real code examples and output samples instead of vague prose.
In practice, you’ll often combine:
- A custom agent profile like `.github/agents/test-agent.agent.md` to register the `@test-agent` in Copilot’s UI and CLI.
- An `AGENTS.md` or `agents.md` file in the repo (or subtrees) to give deeper, test-focused instructions tied to the project structure and commands.
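Putting the two together, a typical layout might look like the sketch below (directory and file names other than the two documented ones are illustrative):

```text
my-app/
├── .github/
│   └── agents/
│       └── test-agent.agent.md   # registers @test-agent (frontmatter + prompt)
├── AGENTS.md                     # repo-wide, test-focused instructions
├── src/
└── tests/
    ├── AGENTS.md                 # optional: deeper guidance for the tests/ subtree
    ├── unit/
    └── e2e/
```

Because Copilot reads the nearest `AGENTS.md`, the file under `tests/` takes precedence for work inside that subtree.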
Plan your @test-agent: scope, stack, and workflows
Before you touch agents.md, decide what “success” looks like for your GitHub Copilot @test-agent in your codebase.
Define a narrow, high-value mission
From GitHub’s 2,500-repo analysis, vague personas like “You are a helpful coding assistant” consistently underperform. For a test agent, define one clear mission, for example:

- “Write and maintain Jest unit tests for React 18 + TypeScript components in `src/components/`.”
- “Generate Pytest test cases with edge-case coverage for business logic in `app/services/`.”
- “Extend Playwright end-to-end tests for the existing flows in `tests/e2e/` without touching application code.”
Inventory your testing stack (with versions)
As of late 2025, Copilot’s models are tuned on modern stacks. You should still spell out exactly what you use, including versions and entrypoints. For example:
- JavaScript/TypeScript: Node 20, React 18, Jest 29, Playwright 1.49
- Python: Python 3.12, Pytest 8.x, factory-boy, HTTPX
- Rust: Rust stable (2021 edition), `cargo test` with `--all-features`
You’ll encode this under a “Project knowledge” or similar section in agents.md and in the custom agent profile.
Map your test workflows to concrete commands
According to GitHub’s article, successful agents.md files consistently put commands near the top. List the real commands your team runs, for example:

- `npm test` for unit tests
- `npm run test:watch` for local debugging
- `pytest -q` for fast feedback
- `pytest -m "e2e" tests/e2e/` for slow end-to-end suites
- `cargo test --all-features -- --nocapture` for Rust
Agents that know exactly how to run tests will fail less often and iterate faster when they update or add tests.

@test-agent uses agents.md instructions to interact safely with your repo.

Structure of an effective agents.md for testing
GitHub recommends covering six core areas in strong agent instruction files: commands, testing, project structure, code style, git workflow, and boundaries. For a GitHub Copilot @test-agent, you’ll bias heavily toward testing and boundaries.
| Section | Purpose for @test-agent |
|---|---|
| Persona / role | Define the agent as a QA/test specialist, not a general coder. |
| Project knowledge | Tell it which frameworks and versions your tests use. |
| Commands | List exact commands to run tests and lint tests. |
| Code style & examples | Show “good” and “bad” test examples in your stack. |
| Git & workflow notes | Explain how tests relate to CI and branching. |
| Boundaries | Hard rules: where it can write tests and what it must never touch. |
1. Persona and role
Your opening should read more like a job description than a generic prompt. For example:
```md
You are a senior QA engineer for this repository.

Your responsibilities:
- Analyze existing tests and identify coverage gaps.
- Write new unit, integration, and e2e tests in the existing style.
- Run the appropriate test commands and interpret failures.
- Suggest test refactors to improve clarity, not to change behavior.

Your scope:
- READ application code from `src/`, `app/`, and `lib/`.
- WRITE tests only to `tests/` (and language-appropriate test dirs).
- NEVER modify production code unless the user explicitly asks you to.
```

2. Project knowledge
Immediately follow with concrete details. GitHub’s analysis shows that “React project” is too vague; “React 18 + TypeScript + Jest” works better.
```md
## Project knowledge

- Tech stack:
  - Frontend: React 18, TypeScript 5, Vite
  - Tests: Jest 29 with @testing-library/react, Playwright 1.49 for e2e
- File structure:
  - `src/` – source code (React components, hooks, utilities)
  - `tests/unit/` – Jest unit tests for components and hooks
  - `tests/e2e/` – Playwright specs
  - `tests/utils/` – test helpers and custom matchers
```

3. Commands (put these early)
Following GitHub’s recommendation, list real, runnable commands early. Include flags and context so Copilot can make good choices.
```md
## Commands you can use

- Run all unit tests: `npm test`
- Run unit tests for a single file: `npm test -- MyComponent.test.tsx`
- Run Playwright e2e tests: `npx playwright test`
- Run only e2e smoke suite: `npx playwright test --grep @smoke`
- Type check: `npm run typecheck` (must be clean before merging)
```

4. Test style and examples
GitHub’s 2,500-file study highlighted that “one real code snippet beats three paragraphs of description.” Show your test style with good vs. bad examples.
```md
## Testing practices

Always:
- Use descriptive test names describing behavior, not implementation.
- Prefer `screen.getByRole` with accessible roles over test IDs when possible.
- Avoid mocking internal implementation details.
```
### Good Jest example (follow this style)
```ts
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { SaveButton } from '../../src/components/SaveButton';

describe('<SaveButton />', () => {
  it('disables itself and shows spinner while saving', async () => {
    const user = userEvent.setup();
    const onSave = jest.fn().mockResolvedValue(undefined);
    render(<SaveButton onSave={onSave} />);

    await user.click(screen.getByRole('button', { name: /save/i }));

    expect(onSave).toHaveBeenCalledTimes(1);
    expect(screen.getByRole('button', { name: /saving/i })).toBeDisabled();
  });
});
```
### Bad example (avoid this)
```ts
it('works', () => {
  // vague name, no assertions
});
```
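The same good/bad pattern works for the e2e side, which this guide otherwise only references by command. A minimal Playwright sketch (file name and login flow are hypothetical, not taken from this repo) whose title tag makes it selectable with `npx playwright test --grep @smoke`:

```ts
// tests/e2e/login.spec.ts — hypothetical smoke spec
import { test, expect } from '@playwright/test';

test('@smoke user can sign in and reach the dashboard', async ({ page }) => {
  await page.goto('/login');
  await page.getByLabel(/email/i).fill('user@example.com');
  await page.getByLabel(/password/i).fill('correct-horse-battery');
  await page.getByRole('button', { name: /sign in/i }).click();

  // Web-first assertion: waits for the heading instead of sleeping.
  await expect(page.getByRole('heading', { name: /dashboard/i })).toBeVisible();
});
```

`--grep` matches against the full test title, so tags embedded in titles work; recent Playwright versions also support a dedicated `tag` option on `test()`.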
agents.md file tuned for a GitHub Copilot @test-agent.

5. Git workflow and CI alignment
If your CI runs a specific test pipeline, tell the agent how its work relates:
```md
## Git & CI workflow

- Our CI runs:
  - `npm ci`
  - `npm run lint`
  - `npm run typecheck`
  - `npm test`
  - `npx playwright test --grep @smoke`
- When adding or updating tests, prefer small, focused changes that will pass all of the above.
- Do not rename or delete snapshot files unless explicitly asked to.
```

6. Boundaries and safety rails
This is where you prevent Copilot from silently “fixing” failing tests by deleting them or rewriting production logic. GitHub’s article calls out “Never delete failing tests” as a critical rule for @test-agent-style personas.
```md
## Boundaries

Always do:
- Write new tests into `tests/unit/` or `tests/e2e/`.
- Prefer extending existing suites before creating new describe blocks.
- Run the relevant test command after making changes and include failing output in your explanation.

Ask first:
- When a test failure suggests a bug in production code.
- Before introducing new test dependencies or libraries.

Never:
- Modify files in `src/`, `app/`, `lib/`, or `config/` unless the user explicitly instructs you.
- Delete or skip failing tests to "make CI green".
- Touch environment-specific config (e.g., `.env*`, `playwright.config.*`).
```

Create the @test-agent profile file
To register a GitHub Copilot @test-agent that you can select in the agents panel, you create a custom agent profile. As documented in “Creating custom agents” (GitHub Docs, 2025), an agent profile is a Markdown file with YAML frontmatter.
In your repo, create `.github/agents/test-agent.agent.md` (or use the UI at github.com/copilot/agents to generate it). A minimal test-focused profile might look like this:
```md
---
name: test-agent
description: Writes and maintains automated tests for this repository without modifying production code
tools: ["read", "edit", "search", "shell"]
target: github-copilot
---

You are a testing specialist focused on improving code quality through comprehensive automated testing.

Your responsibilities:
- Analyze existing test suites and identify coverage gaps.
- Write new unit, integration, and end-to-end tests following the patterns in this repository.
- Run the appropriate test commands and summarize failures.
- Propose refactors to flaky or brittle tests while preserving behavior.

Scope and boundaries:
- READ application code from `src/`, `app/`, and `lib/`.
- WRITE new or updated tests only in `tests/` and language-appropriate test directories.
- NEVER delete or skip failing tests in order to make CI pass.
- NEVER change production code unless the user explicitly instructs you to do so.
```

This aligns with the “Testing specialist” examples shown in both the custom agents how-to and the custom agents configuration reference. You can then deepen this with content echoed from your agents.md file (tech stack, commands, examples, boundaries).

@test-agent: define the profile, wire in agents.md instructions, then use it across GitHub, IDEs, and the Copilot CLI.

Hooking @test-agent into your workflow
Once committed to your default branch:
- Go to the Agents tab at github.com/copilot/agents and confirm `test-agent` appears.
- In GitHub issues, assign the Copilot coding agent to a “write tests” issue and choose `test-agent` from the dropdown.
- In VS Code or JetBrains, open Copilot Chat, pick `test-agent` from the agent dropdown, and run prompts such as “Increase unit test coverage for `src/services/BillingService.ts`.”
- With the Copilot CLI, use an agent override, for example: `gh copilot /agent test-agent "Add Jest tests for src/utils/slugify.ts"` (syntax may vary slightly as Copilot CLI evolves).
End-to-end example: Jest + Playwright @test-agent
This example ties everything together for a modern TypeScript + React app with Jest and Playwright. Adjust names and commands to your stack.
Example agents.md focused on testing
````md
---
name: test-agent
description: Senior QA engineer who writes and maintains automated tests for this repo
---

You are a senior QA engineer for this project.

## Your role

- Focus exclusively on tests and test infrastructure.
- Increase confidence and coverage without changing app behavior.
- Prefer readable, maintainable tests over clever ones.

## Project knowledge

- Tech stack:
  - React 18 + TypeScript 5 (Vite)
  - Jest 29 with @testing-library/react
  - Playwright 1.49 for e2e
- File layout:
  - `src/` – application code (React components, hooks, services)
  - `tests/unit/` – Jest unit tests
  - `tests/e2e/` – Playwright specs
  - `tests/utils/` – helpers (factories, custom render, test data)

## Commands you can use

- Run all unit tests: `npm test`
- Run tests for a specific file: `npm test -- MyFile.test.tsx`
- Run Playwright e2e suite: `npx playwright test`
- Run Playwright smoke tests: `npx playwright test --grep @smoke`

Always choose the narrowest command that validates the changes you made.

## Test style

- Use Jest and @testing-library/react patterns shown below.
- One assertion group per test; name should describe behavior.

Good example:

```ts
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { LoginForm } from '../../../src/components/LoginForm';

describe('<LoginForm />', () => {
  it('shows an error when credentials are invalid', async () => {
    const user = userEvent.setup();
    const handleSubmit = jest.fn().mockRejectedValue(new Error('Invalid'));
    render(<LoginForm onSubmit={handleSubmit} />);

    await user.type(screen.getByLabelText(/email/i), 'user@example.com');
    await user.type(screen.getByLabelText(/password/i), 'bad-password');
    await user.click(screen.getByRole('button', { name: /sign in/i }));

    expect(await screen.findByText(/invalid email or password/i)).toBeInTheDocument();
  });
});
```

## Git & CI workflow

- CI runs `npm ci`, `npm run lint`, `npm run typecheck`, `npm test`, and `npx playwright test --grep @smoke`.
- Structure test changes so they pass the same pipeline locally.

## Boundaries

Always:
- Add or update tests under `tests/` only.
- Update or create tests to reflect the intended behavior documented in code and README.
- Run the most relevant test command after changes.

Ask first:
- When a test failure implies a bug in production code.
- Before introducing new test libraries, matchers, or global helpers.

Never:
- Modify code in `src/` unless the user explicitly directs you to.
- Delete, skip, or weaken failing tests to make CI pass.
- Touch configuration files, secrets, or deployment-related content.
````

Using @test-agent to automate QA over time
- Start small: Open a single “thin slice” issue such as “Add unit tests for `src/utils/dateRange.ts`.” Use `@test-agent` and review the PR it opens.
- Refine instructions: When the agent makes an undesirable change (for example, creating overly broad tests), add a concrete counter-example and a rule to `agents.md`.
- Scale up: Once happy with the behavior, create more issues for coverage gaps, or use the Copilot CLI to iterate through modules or directories.
- Link to metrics: Combine Copilot’s output with coverage reports in CI. For example, require “coverage not decreasing” for PRs where `@test-agent` is the actor.
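One concrete way to approximate the “coverage not decreasing” gate is Jest’s built-in `coverageThreshold`, which makes `npm test -- --coverage` exit non-zero when coverage falls below a baseline. A sketch (the threshold numbers are illustrative; set them to your current measured coverage):

```ts
// jest.config.ts — CI fails when coverage drops below the recorded baseline
import type { Config } from 'jest';

const config: Config = {
  collectCoverageFrom: ['src/**/*.{ts,tsx}'],
  coverageThreshold: {
    global: {
      branches: 70,   // illustrative baselines — ratchet these up
      functions: 75,  // as @test-agent raises real coverage
      lines: 80,
      statements: 80,
    },
  },
};

export default config;
```

Ratcheting these values upward in the same PRs where `@test-agent` adds tests turns the config itself into the “not decreasing” contract.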
Key takeaways and next steps
GitHub’s November 2025 guidance, grounded in more than 2,500 agents.md files, shows that effective GitHub Copilot @test-agent setups are not about clever prompts; they are about encoding your testing reality as an operating manual. By giving your agent a concrete persona, stack-aware commands, realistic examples, and strict boundaries, you turn Copilot from a noisy suggestion engine into a repeatable QA teammate.
To move forward, create a minimal `.github/agents/test-agent.agent.md`, then draft an agents.md that covers persona, commands, testing style, and boundaries for your stack. Start with one well-scoped testing task, refine based on the PR it opens, and iterate. Over time, this custom @test-agent becomes the backbone of an automated testing workflow that can raise coverage, stabilize releases, and free your human engineers to focus on truly hard problems.