
Cursor Rules: How to Keep AI Aligned With Your Codebase

Learn how Cursor rules guide AI coding, from defining project-level context and file-matching patterns to maintaining accurate, up-to-date rules over time.
Mar 11, 2026  · 8 min read

You open a new chat in Cursor and paste the same block of instructions you've pasted a hundred times before. Your stack, your naming conventions, and the folder structure the AI keeps getting wrong. Then it generates a 40-line class with three layers of abstraction for something that needed a function.

Time to wake up from this nightmare. Cursor rules are project files that make pasting unnecessary. Write your conventions once, and the AI's output matches your codebase instead of fighting it.

This tutorial builds a set of .mdc rule files for a Python web project using FastAPI and pytest, covering project context, code simplicity, and API error formatting. If you're new to Cursor, I recommend taking our Software Development with Cursor course first; then open a project and follow along.

What Are Cursor Rules?

Cursor rules are markdown files with MDC-specific frontmatter that live in your project at .cursor/rules/. Before your prompt reaches the model, Cursor checks which rules match and prepends their content to the context window. Your instructions land first, every time.

Each .mdc file has three frontmatter fields that control activation: description, globs, and alwaysApply. A rule can fire on every conversation or only when certain files are open. A third option, Apply Intelligently, lets the agent read the description and decide for itself whether the rule is relevant. The next section covers each type hands-on.

*Figure: How Cursor evaluates each .mdc rule file through three decision points: the alwaysApply check, glob pattern matching against open files, and agent-based description relevance scoring.*
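
That flow can be sketched as a small Python function. This is a hypothetical model of my own, not Cursor's actual matcher (which is internal); the names `Rule` and `rule_activates` are invented, and only the three frontmatter fields are mirrored:

```python
from dataclasses import dataclass, field
from pathlib import PurePath


@dataclass
class Rule:
    name: str
    always_apply: bool = False          # alwaysApply in frontmatter
    globs: list[str] = field(default_factory=list)
    description: str = ""


def rule_activates(rule: Rule, files_in_context: list[str], agent_says_relevant: bool) -> bool:
    if rule.always_apply:               # 1. alwaysApply: true loads in every conversation
        return True
    if rule.globs:                      # 2. glob match against files in context
        return any(PurePath(f).match(g) for f in files_in_context for g in rule.globs)
    if rule.description:                # 3. agent reads the description and decides
        return agent_says_relevant
    return False                        # no signals at all: manual-only (@rule-name)
```

For example, a rule with `globs: ["app/routers/*.py"]` only activates when a matching file is in context. (`PurePath.match` handles `**` differently across Python versions, so this sketch sticks to single-level globs.)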

Cursor supports four categories of rules:

| Type | Location | Scope | Shared via | Best for |
| --- | --- | --- | --- | --- |
| Project Rules | .cursor/rules/*.mdc | Current repo | Git | Team-wide AI behavior per project |
| User Rules | Cursor Settings > Rules | All projects | Not shared | Personal preferences |
| Team Rules | Cursor dashboard | All team members | Dashboard (paid plans) | Org-wide enforcement |
| AGENTS.md | Project root or subdirs | Current repo | Git | Simple projects, cross-tool compatibility |

Project rules replaced the older .cursorrules file, a single root-level file with no scoping or metadata, by splitting rules into multiple .mdc files with frontmatter.

AGENTS.md is the simpler alternative for projects that don't need glob scoping. It's plain markdown recognized by AI coding tools beyond Cursor, and deeper files in subdirectories override parent ones.

*Figure: The Rules, Skills, Subagents page in Cursor Settings, listing project rules with their names and glob patterns.*

You can learn more about how agent-based workflows connect to your development process in our AI Agent Fundamentals skill track.

Creating and Configuring Cursor Project Rules

The rest of this tutorial builds rule files for a FastAPI project using pytest. The patterns transfer to any stack.

Setting up the rules directory and files

Create a .cursor/rules/ directory in your project root. Every .mdc file you place here becomes a rule that Cursor picks up instantly, no restart needed.

The first rule file, project-context.mdc, gives the AI persistent awareness of your stack and folder layout:

```
---
description: ""
alwaysApply: true
globs:
---

# Project Context

This is a FastAPI web application with pytest for testing.

## Tech Stack
- Python 3.12, FastAPI, Pydantic v2, SQLAlchemy 2.0
- pytest with pytest-asyncio for async test support
- Alembic for database migrations

## Project Structure
- app/main.py: FastAPI application entry point
- app/routers/: API route handlers
- app/models/: SQLAlchemy models
- app/schemas/: Pydantic request/response schemas
- tests/: pytest test files mirroring app/ structure

## Patterns
- All endpoints are async
- Use dependency injection for database sessions
- Response models are always Pydantic schemas, not raw dicts
```

Typing /create-rule in an Agent chat is another way in: Cursor drafts one for you through a chat panel.

*Figure: The rule creation panel asks what the rule should enforce, with options for coding style, language conventions, and framework patterns.*

The alwaysApply: true frontmatter means this rule loads in every conversation. Name files in kebab-case by concern (project-context.mdc, simplicity.mdc, error-responses.mdc) and keep each one under 500 lines. A bloated rule file eats context window space you need for your actual code and prompts. Commit the .cursor/rules/ directory to git.

Configuring rule metadata for accurate matching

The project-context.mdc rule above uses alwaysApply: true, the simplest mode: every conversation loads it regardless of what files you're working on. That's the right choice for project-wide context.

Some rules apply only to certain parts of the codebase. Set alwaysApply: false and add glob patterns as a comma-separated list to scope them. A rule with globs: app/routers/**/*.py fires only when matching files (here: Python files in app/routers/ or subfolders) appear in context, keeping API conventions out of unrelated chats.
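
If you're unsure what a glob will cover, you can sanity-check it against a scratch tree with Python's pathlib before committing the rule. Cursor's matcher is its own implementation, but pathlib's `**` semantics (this directory and all subdirectories, recursively) are close enough for a quick check:

```python
import tempfile
from pathlib import Path

# Build a throwaway tree mirroring the project layout, then see what matches.
with tempfile.TemporaryDirectory() as root:
    for rel in [
        "app/routers/users.py",
        "app/routers/admin/reports.py",
        "app/models/user.py",
        "tests/test_users.py",
    ]:
        p = Path(root, rel)
        p.parent.mkdir(parents=True, exist_ok=True)
        p.touch()

    # The same pattern you'd put in the rule's globs field.
    matched = sorted(
        str(p.relative_to(root)) for p in Path(root).glob("app/routers/**/*.py")
    )

print(matched)  # → ['app/routers/admin/reports.py', 'app/routers/users.py']
```

Note that `**` matches zero or more directories, so direct children of app/routers/ are included alongside files in subfolders, while app/models/ and tests/ stay out.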

Apply Intelligently skips file patterns entirely. You leave globs empty and write a detailed description instead, and the agent decides whether the rule is relevant to the current conversation. Description quality matters: "Error handling rules" is too vague for the agent to match reliably, but "Conventions for FastAPI error responses including HTTP status codes, error body structure, and exception handler patterns" has clear signals.

Rules with no globs, no meaningful description, and alwaysApply: false become manual-only, loaded when you type @rule-name in chat. Good for reference material you pull in on demand.

*Figure: An .mdc rule file open in the editor, with the application type dropdown showing Always Apply, Apply Intelligently, Apply to Specific Files, and Apply Manually.*

Writing rule content and constraints

Rules written with soft language ("try to keep functions small," "prefer async endpoints") give the AI permission to ignore them. Write commands instead: "Functions must be under 30 lines. All endpoints must be async."

The intro mentioned the AI generating a 40-line class with three layers of abstraction. The rule that stops that behavior is simplicity.mdc:

```
---
description: ""
alwaysApply: true
globs:
---

# Code Simplicity

Write the simplest code that solves the problem. Every abstraction must earn its place.

## Constraints
- No base classes with one child class
- No interfaces or abstract classes with one implementation
- No factory or strategy patterns when an if statement works
- No wrapper functions that just call another function
- No configurable parameters that nothing passes
- No "just in case" error handling for impossible conditions

## When modifying existing code
- Match the patterns already in the codebase
- Do not refactor surrounding code unless asked
- Do not add type annotations to unchanged code
- Do not add docstrings to unchanged functions

## Code size
- Functions under 30 lines
- If a function needs a comment explaining what a block does, extract that block into a named function instead
```

The difference is immediate. The AI stops reaching for abstract base classes when there's only one implementation, and uses if-statements when there are two cases.
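
To make the constraints concrete, here is the kind of transformation the rule pushes toward. Both versions are invented examples and behave identically; the second is what the rule wants:

```python
from abc import ABC, abstractmethod


# Before: the over-engineered shape the rule forbids. A base class with one
# child and a factory returning the only implementation that exists.
class GreeterBase(ABC):
    @abstractmethod
    def greet(self, name: str) -> str: ...


class DefaultGreeter(GreeterBase):
    def greet(self, name: str) -> str:
        return f"Hello, {name}"


def greeter_factory() -> GreeterBase:
    return DefaultGreeter()


# After: the simplest code that solves the problem.
def greet(name: str) -> str:
    return f"Hello, {name}"
```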

The third rule, error-responses.mdc, is glob-scoped to fire only when working with API route files:

```
---
description: "Standard error response format for all API endpoints"
alwaysApply: false
globs: app/routers/**/*.py
---

# API Error Responses

All error responses must follow this structure:

{
  "detail": "Human-readable error message",
  "code": "MACHINE_READABLE_CODE",
  "field": "optional_field_name"
}

## Rules
- Never return raw exception messages to clients
- Use HTTPException with a detail dict matching the structure above
- 400 for validation errors, 404 for missing resources, 409 for conflicts
- Log the full traceback server-side before returning the sanitized response
```

When possible, point to existing files in your codebase instead of copying code into rules. A line like "See app/routers/users.py for the canonical endpoint structure" stays accurate as the code evolves, while a copied snippet goes stale the moment someone refactors the original.

Testing and verifying rule attachment

Cursor has no built-in indicator showing which rules are active in a conversation. You verify by testing behavior.

Open a new Agent chat and write a prompt that should trigger one of your rules. For error-responses.mdc, try: "Add a DELETE endpoint for removing a user by ID in app/routers/users.py." If the generated error handling looks nothing like your rule's JSON structure or skips HTTPException with the detail dict, something in the matching is off.

Glob-scoped rules need a matching file referenced in your prompt or open in the editor. Testing error-responses.mdc without mentioning any file under app/routers/ means the rule never activates. 

"Apply Intelligently" rules are topic-based instead: try a prompt related to the rule's description, then one that's unrelated, and compare the outputs. The agent should follow the rule on the first and ignore it on the second.

Keeping Your Cursor Rules Effective

Rules that worked last month can mislead the AI after a refactor or a folder rename.

*Figure: The Cursor rule lifecycle in five stages (Write, Test, Reinforce, Maintain, Prune), with a feedback loop back to writing new rules when mistakes recur.*

Troubleshooting matching failures

The most common reason a rule doesn't fire is the file extension: rules must use .mdc, and plain .md files in .cursor/rules/ get ignored silently.

Beyond that:

  • Overly broad globs like **/*.py attach the rule to every Python file in the project. Narrow to app/routers/**/*.py or tests/**/*.py, so the rule only fires where it belongs.

  • You'll notice "Apply Intelligently" rules missing conversations when the description is too generic. The good/bad description examples from the metadata section above apply here, too.

  • Conflicting rules are harder to spot. When two rules give contradictory instructions for the same files, the AI picks one (usually whichever is loaded last). Fix by narrowing glob scope so rules don't overlap, or consolidate related guidance into a single file.

One quirk worth knowing: edits to .mdc files sometimes fail to persist. If your changes aren't sticking, close Cursor completely, select "Override" on the unsaved-changes pop-up, and reopen.

Making rules stick mid-conversation

A rule can fire correctly at the start of a conversation and still get ignored five messages later. Context window recency bias is the cause: as the conversation grows, the model prioritizes recent messages over instructions injected at the top. This is the most common complaint in the Cursor community, and there's no setting that fixes it.

*Figure: Token cost of lean versus bloated Cursor rules: 20 always-apply rules consume six times more context window space than three focused .mdc files.*

The workaround is reinforcement. When output starts drifting from your rules, add a line like "follow the project rules for error responses" to your next prompt. That moves the instruction back into the model's recent context. 

For rules that matter on every response (formatting, naming), shorter and more direct phrasing holds up better across long conversations than detailed paragraphs.

Rules with code examples also resist drift better than text-only instructions. Instead of writing "use early returns for error handling," include a short before/after block showing the pattern you want. The model anchors to actual code more reliably than prose descriptions do, especially as context builds.
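
For instance, the early-return guidance holds up better as a before/after pair inside the rule than as a sentence. Both functions below are invented examples with identical behavior:

```python
# Bad: nested conditionals bury the happy path at the deepest indent level.
def activate_user_nested(user: dict) -> str:
    if user:
        if not user.get("banned"):
            return "activated"
        else:
            return "error: banned"
    else:
        return "error: missing user"


# Good: early returns handle errors first; the happy path reads straight down.
def activate_user(user: dict) -> str:
    if not user:
        return "error: missing user"
    if user.get("banned"):
        return "error: banned"
    return "activated"
```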

Maintaining and pruning rules

Review rules after any major refactor, dependency upgrade, or folder restructure. A rule that references deprecated APIs or removed files will feed the AI wrong instructions. Stale globs are even quieter about failing: rename app/api/ to app/routers/ and a rule scoped to the old path just stops firing with no warning.

Pruning goes the other direction, too. As models improve, they pick up common conventions without being told. If the AI is already following a rule's guidance unprompted, delete the rule and reclaim the tokens. A healthy rule set shrinks over time, not only grows.

Scaling rules for team environments

Treat rules like application code. Changes go through pull requests, and the team agrees on a pattern before it lands in .cursor/rules/.

For organizations on Business or Enterprise plans, Team Rules let admins enforce rules across all members from the Cursor dashboard. "Enable immediately" makes a rule active by default while letting users toggle it off. "Enforce this rule" removes the opt-out, which covers non-negotiable conventions such as security practices and compliance requirements.

Conclusion

Start with one or two rules for the patterns you correct most often. If you keep editing the same kind of AI output every session (wrong error format, unnecessary abstractions), that's a rule waiting to be written. Add more as you spot recurring mistakes, and trim ones that stop pulling their weight.

The .mdc files from this tutorial fit a FastAPI project. Yours will look different for a Next.js frontend or a Rust CLI tool, and that's the point: they encode your team's conventions, not generic best practices. 

If you want to go deeper with Cursor, check out our Cursor 2.0 guide. For a comparison to other AI coding assistants, I recommend our comparative pieces on Cursor vs. GitHub Copilot, Claude Code vs. Cursor, and Cline vs. Cursor.

Cursor Rules FAQs

What are Cursor rules, and how do they work?

Cursor rules are markdown files with YAML frontmatter stored at .cursor/rules/ in your project. Before your prompt reaches the model, Cursor checks which rules match and prepends their content to the context window. Each .mdc file has three frontmatter fields (description, globs, alwaysApply) that control when it activates.

What types of Cursor rules are available?

Cursor supports four rule categories: 

  • Project Rules live in .cursor/rules/*.mdc and are shared via git.
  • User Rules are set in Cursor Settings and apply to all projects.
  • Team Rules are managed from the Cursor dashboard on paid plans.
  • AGENTS.md is a simpler alternative using plain markdown in the project root or subdirectories.

How do I choose between Always Apply, Apply Intelligently, and glob-scoped Cursor rules?

Use alwaysApply: true for project-wide context, like your tech stack and folder structure. Use glob patterns (e.g., app/routers/**/*.py) for rules that only apply to specific file types. Use Apply Intelligently with a detailed description when the rule is topic-based rather than file-based. Rules that don't include any of these become manual-only, loaded via @rule-name in chat.

Why do Cursor rules stop working mid-conversation?

Context window recency bias causes rules to fade as conversations grow. The model prioritizes recent messages over instructions injected at the top. The workaround is reinforcement: add a line like "follow the project rules" to your prompt when output drifts. Shorter rules with code examples resist this drift better than long prose descriptions.

How should I maintain Cursor rules over time?

Review rules after refactors, dependency upgrades, or folder renames. A rule referencing removed files or old paths will give wrong instructions or silently stop firing. Also, prune the rules the model has internalized: if the AI already follows a convention without the rule, delete it to reclaim context window tokens.


Author
Bex Tuychiev

I am a data science content creator with over 2 years of experience and one of the largest followings on Medium. I like to write detailed articles on AI and ML with a bit of a sarcastic style, because you've got to do something to make them a bit less dull. I have produced over 130 articles and a DataCamp course to boot, with another one in the making. My content has been seen by over 5 million pairs of eyes, 20k of whom became followers on both Medium and LinkedIn.
