# Task Sizing for Cursor IDE: Research-Based Implementation
## Summary
Context exhaustion occurs when AI assistants lose track of their instructions and project patterns during a development session. This resource presents a research-based implementation of task sizing constraints that prevents context exhaustion by aligning task complexity with AI capability boundaries. The approach uses 15-40 minute subtasks that touch at most 3-4 files and focus on a single architectural layer, maintaining AI instruction adherence and development velocity.
## Details
## Context Exhaustion in AI Development Tools
Context exhaustion occurs when AI assistants lose track of their instructions and project patterns during development sessions. This manifests as:
- Custom mode instructions being ignored
- Inconsistent code patterns
- Gradual drift from established architectural guidelines
- Need for frequent re-prompting
The root cause is task sizing that exceeds AI context management capabilities.
## Research Findings on AI Task Completion
Recent analysis reveals AI performance boundaries:
- **50-minute completion horizon**: Performance degrades significantly beyond this threshold
- **File count sensitivity**: Tasks requiring >5 files show consistent failure patterns
- **Context fragmentation**: Cross-cutting architectural changes cause instruction drift
- **Optimal range**: 15-40 minute focused tasks achieve highest success rates
## Implementation Strategy
### Core Sizing Constraints
**Time Window**: 15-40 minutes per subtask
- Below 15 minutes: Too granular for meaningful progress
- Above 40 minutes: Approaches AI capability limits
**File Count**: Maximum 3-4 files per subtask
- Reduces context switching complexity
- Maintains pattern consistency
**Architectural Focus**: Single layer per subtask
- UI layer: Components, styling, interfaces
- API layer: Handlers, routing, contracts
- Service layer: Business logic, validation
- Data layer: Repositories, queries, models
**Dependency Isolation**: Minimal external context required
- Use mocks for unimplemented dependencies
- Focus on current layer implementation
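The constraints above can be expressed as a simple pre-flight check. The sketch below is illustrative only (the `Subtask` shape and `validateSubtaskSize` helper are hypothetical, not part of any Cursor API); it encodes the 15-40 minute window, the 3-4 file limit, and the single-layer rule:

```typescript
// Hypothetical subtask descriptor; field names are illustrative only.
type Layer = "ui" | "api" | "service" | "data";

interface Subtask {
  title: string;
  estimatedMinutes: number;
  files: string[];
  layers: Layer[];
}

interface SizingResult {
  ok: boolean;
  violations: string[];
}

// Pre-flight check mirroring the constraints above:
// 15-40 minutes, at most 4 files, exactly one architectural layer.
function validateSubtaskSize(task: Subtask): SizingResult {
  const violations: string[] = [];
  if (task.estimatedMinutes < 15) {
    violations.push("too granular: under 15 minutes");
  }
  if (task.estimatedMinutes > 40) {
    violations.push("over-sized: exceeds 40-minute window, decompose further");
  }
  if (task.files.length > 4) {
    violations.push(`too many files: ${task.files.length} > 4`);
  }
  if (task.layers.length !== 1) {
    violations.push("cross-layer: each subtask must target a single layer");
  }
  return { ok: violations.length === 0, violations };
}
```

A well-sized task such as the 25-minute, 2-file component example later in this note passes; a 2-hour, 8-file cross-layer task fails on all three counts.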
## Cursor Rules Implementation
### AI Context Management Rule
```markdown
---
description: AI context management and task sizing guidelines for optimal development workflow
globs: **/*
alwaysApply: true
---
#### Task Size Constraints for AI Development
- Maximum implementation time: 40 minutes per subtask (optimal AI capability window)
- File count limit: Maximum 3-4 files per subtask to maintain context clarity
- Context complexity: Avoid cross-cutting architectural changes in single tasks
- Single layer focus: Each subtask should focus on one architectural layer only
#### Task Decomposition Rules
- Over-sized task trigger: Any task estimated >40 minutes MUST be decomposed further
- Context switching minimization: Group related subtasks to reduce AI context loading
- Dependency isolation: Minimize external dependencies within individual subtasks
```
### Planning Mode Instructions
```markdown
**Task Sizing Validation Criteria:**
- File Count: Maximum 3-4 files per subtask to maintain context clarity
- Time Estimate: 15-40 minutes per subtask (optimal AI capability window)
- Context Complexity: Low to medium - avoid cross-cutting architectural changes
- Dependency Depth: Maximum 2 levels of dependent components per task
**Atomic Subtasks** (15-25 minute focused implementation):
1. Acceptance Tests: Gherkin scenarios (20min, 1-2 files)
- Files: activity-details.feature, step-definitions.ts
- Context: BDD testing patterns only
2. Component Interface: ActivityDetail props & types (15min, 1 file)
- Files: ActivityDetail.tsx (interface definition only)
- Context: Component architecture patterns
3. Component Implementation: ActivityDetail with MUI (25min, 1-2 files)
- Files: ActivityDetail.tsx, ActivityDetail.module.css
- Context: MUI patterns, existing component structure
```
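As a concrete illustration of subtask 2 above, an interface-only deliverable might look like the sketch below. The prop and field names are hypothetical (the source example only names the component); the point is that the subtask stops at the type boundary, so no MUI or implementation context is loaded:

```typescript
// ActivityDetail.tsx -- interface-definition subtask only.
// Prop names below are illustrative; the actual contract would come
// from the acceptance tests written in subtask 1.

export interface Activity {
  id: string;
  title: string;
  description: string;
  startTime: string; // ISO-8601 timestamp
}

export interface ActivityDetailProps {
  activity: Activity;
  // Callback invoked when the user dismisses the detail view.
  onClose: () => void;
  // Loading flag so the component can render a skeleton state.
  isLoading?: boolean;
}

// Implementation is deliberately deferred to subtask 3; a stub keeps
// the file compiling without pulling MUI patterns into this subtask.
export function activityDetailStub(props: ActivityDetailProps): string {
  return props.isLoading ? "loading" : props.activity.title;
}
```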
### Feature Implementation Mode Instructions
```markdown
## 1. Context-Aware Task Implementation
- **Validate task size FIRST**: Check that current task follows AI Context Management guidelines
- **File count verification**: Ensure subtask affects ≤4 files maximum
- **Time estimation check**: Confirm implementation target is 15-40 minutes
- **Context isolation**: Verify minimal external dependencies required
- **Stop and escalate**: If task violates constraints, request decomposition before proceeding
**CRITICAL CONSTRAINT: All subtasks must conform to AI Context Management guidelines to prevent context exhaustion and instruction drift. If a subtask violates size constraints (>40 minutes, >4 files, cross-layer changes), STOP implementation and request task decomposition.**
```
## Task Breakdown Examples
### Well-Sized Subtasks
```text
✅ Frontend Component (25 minutes, 2 files)
Task: "Implement ActivityDetail component with MUI styling"
Files: ActivityDetail.tsx, ActivityDetail.module.css
Context: MUI patterns, component testing patterns
✅ Backend Handler (20 minutes, 2 files)
Task: "Implement GET /activities/{id} handler"
Files: activity_handler.go, activity_handler_test.go
Context: Go handler patterns, HTTP error handling
✅ Service Logic (20 minutes, 2 files)
Task: "Add GetActivityByID business logic"
Files: activity_service.go, activity_service_test.go
Context: Go service patterns, validation logic
```
### Oversized Tasks Requiring Decomposition
```text
❌ Cross-Layer Feature (2+ hours, 8+ files)
Task: "Implement complete activity management system"
Problem: Spans multiple architectural layers
Solution: Decompose into layer-specific subtasks
❌ Complex Integration (1+ hour, 6+ files)
Task: "Add activity detail feature with frontend and backend"
Problem: Context switching between frontend/backend patterns
Solution: Separate into frontend and backend tracks
```
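One way to mechanize the "decompose into layer-specific subtasks" step is sketched below. The splitting heuristic (partition files by layer, chunk to respect the file limit, estimate time per file) is an assumption for illustration, not a prescribed algorithm:

```typescript
// Illustrative-only decomposition: split an oversized task's files into
// per-layer subtasks sized to fit the 15-40 minute window.
type Layer = "ui" | "api" | "service" | "data";

interface FilePlan {
  path: string;
  layer: Layer;
}

interface PlannedSubtask {
  title: string;
  layer: Layer;
  files: string[];
  estimatedMinutes: number;
}

function decomposeByLayer(
  title: string,
  files: FilePlan[],
  minutesPerFile = 10 // assumed rough estimate, tuned per project
): PlannedSubtask[] {
  // Group planned files by architectural layer.
  const byLayer = new Map<Layer, string[]>();
  for (const f of files) {
    const bucket = byLayer.get(f.layer) ?? [];
    bucket.push(f.path);
    byLayer.set(f.layer, bucket);
  }
  const subtasks: PlannedSubtask[] = [];
  for (const [layer, paths] of byLayer) {
    // Chunk each layer so no subtask exceeds 3 files.
    for (let i = 0; i < paths.length; i += 3) {
      const chunk = paths.slice(i, i + 3);
      subtasks.push({
        title: `${title} (${layer})`,
        layer,
        files: chunk,
        estimatedMinutes: Math.max(15, chunk.length * minutesPerFile),
      });
    }
  }
  return subtasks;
}
```

Applied to the oversized "complete activity management system" example, an 8-file cross-layer plan becomes four single-layer subtasks, each within the file and time limits.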
## Error Handling and Context Management
### Context Limit Protocols
```markdown
**When Context Limits Are Approaching:**
1. Immediate Assessment: Check if current subtask violates size constraints
2. Complete Current Work: Finish a minimal viable implementation and commit progress
3. Request Decomposition: Contact Planning mode for task restructuring
4. Document Context Issues: Update subtask with context constraint violations
5. Reset Session: Start fresh with properly-sized subtask
```
### Scope Expansion Prevention
```markdown
**When Subtask Scope Expands:**
1. Stop Implementation: Do not continue if scope exceeds original constraints
2. Document Scope Creep: Note additional requirements discovered
3. Request Planning Review: Get task decomposition or scope clarification
4. Protect Context: Avoid loading additional files or dependencies
```
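The stop condition in step 1 can be made mechanical by diffing the files actually touched against the files planned for the subtask. The sketch below is a hypothetical helper, not a Cursor feature; it flags any unplanned file, or a breach of the 4-file limit, as grounds to stop and request a planning review:

```typescript
// Hypothetical scope guard: flag implementation work that touches
// files outside the subtask's original plan.
interface ScopeCheck {
  stop: boolean;
  unplannedFiles: string[];
}

function checkScope(plannedFiles: string[], touchedFiles: string[]): ScopeCheck {
  const planned = new Set(plannedFiles);
  const unplannedFiles = touchedFiles.filter((f) => !planned.has(f));
  // Any unplanned file, or exceeding the 4-file limit, means stop and
  // request a planning review rather than expanding context further.
  const stop = unplannedFiles.length > 0 || touchedFiles.length > 4;
  return { stop, unplannedFiles };
}
```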
## Core Standards Integration
```markdown
### Context-Aware Task Sizing Standards
**CRITICAL: All task breakdown must follow these sizing constraints to prevent Feature Implementation mode context exhaustion and AI instruction drift.**
#### Quality Task Creation Standards
- Atomic tasks: Each subtask completable in 15-40 minutes with focused context
- Layer separation: Separate subtasks for UI, API, business logic, data (Outside-In methodology)
- File specification: List specific files affected in subtask description
- Context boundary definition: Specify what context/patterns are needed
- Implementation isolation: Ensure subtask can be completed with minimal external context
```
## Results
This approach prevents context exhaustion by:
- Maintaining AI instruction adherence throughout development sessions
- Reducing context switching complexity
- Enabling focused, high-quality implementations
- Providing clear escalation paths for oversized work
The system aligns task complexity with AI capability boundaries while maintaining development velocity through appropriately-sized, focused work units.
## 🔗 Related Resources
- [[Cursor Rules Guide]] - Foundation for rule system design
- [[Custom Modes in Cursor IDE]] - Understanding custom mode architecture
- [[Planning Mode for Cursor IDE]] - Strategic task breakdown implementation
- [[Feature Implementation Mode for Cursor IDE]] - Focused implementation workflow
- [[Outside In Development]] - Core development methodology
- [[Guide to Coding with AI]] - Broader AI-assisted development strategies