# Outside In Development Implementation Guide
## Summary
This guide outlines the key information and prerequisites needed before starting feature development using the outside-in methodology. It serves as a practical companion to [Outside In Development](Outside%20In%20Development.md).
## Required Information
### Feature Context
- Business value and goals
- User personas involved
- Current system state
- Related features or dependencies
- Any technical constraints
- Performance requirements
- Security considerations
### User Scenarios
- Complete user journeys
- Success scenarios
- Error scenarios
- Edge cases
- Validation requirements
- User feedback requirements
### Technical Boundaries
- Entry points (API, UI, etc.)
- System interfaces involved
- External service dependencies
- Data persistence requirements
- Transaction boundaries
- Caching requirements
### Testing Infrastructure
- Available testing frameworks
- Mocking capabilities
- Test data requirements
- Environment needs
- CI/CD integration points
- Test isolation requirements
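As a concrete starting point, the sketch below shows one possible Gemfile test group for the Ruby stack assumed by the templates later in this guide; every gem choice here is an assumption to adapt to your project.

```ruby
# Gemfile -- an assumed test group for a Ruby/RSpec/Cucumber stack
group :test do
  gem 'cucumber'                        # outside-in feature scenarios
  gem 'rspec'                           # unit and integration specs
  gem 'factory_bot'                     # test data factories
  gem 'webmock'                         # simulate external HTTP services
  gem 'database_cleaner-active_record'  # isolation between scenarios
end
```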
## Implementation Checklist
### Before Starting
1. Confirm business requirements:
- Feature scope
- Acceptance criteria
- Success metrics
- Priority level
- Required approvals
2. Review technical context:
- System architecture
- Available services
- Data models
- Interface contracts
- Authentication/authorisation
3. Prepare testing environment:
- Test frameworks configured
- Mock frameworks available
- Test data setup
- Database cleaner strategy
- Test isolation approach
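The database cleaner strategy and isolation approach can be pinned down in a single support file. This is a minimal sketch assuming RSpec with the `database_cleaner-active_record` gem, using a transaction strategy suited to in-process tests.

```ruby
# spec/support/database_cleaner.rb -- assumed RSpec + DatabaseCleaner setup
require 'database_cleaner/active_record'

RSpec.configure do |config|
  config.before(:suite) do
    DatabaseCleaner.clean_with(:truncation)  # start from a known-empty database
    DatabaseCleaner.strategy = :transaction  # roll back changes after each example
  end

  config.around(:each) do |example|
    DatabaseCleaner.cleaning { example.run } # wrap every example in a clean slate
  end
end
```

A truncation strategy is usually needed instead when feature tests drive the application in a separate process (for example, a browser against a running server), because transactional changes are not visible across database connections.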
### Development Setup
1. Feature test structure:
- Scenario descriptions
- Step definitions
- Helper methods
- Custom matchers
- Shared contexts
2. Test data management (a factory sketch follows this list):
- Factories/fixtures
- Test data builders
- Database cleaner config
- Transaction handling
- Data reset strategy
3. Mock preparation:
- Interface definitions
- Mock responses
- Stub configurations
- Test doubles
- Verification approaches
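To illustrate the test data management step above, here is a minimal FactoryBot sketch; the `user` factory, its attributes, and the `:with_expired_subscription` trait are hypothetical placeholders for whatever the feature actually needs.

```ruby
# spec/factories/users.rb -- hypothetical factory, for illustration only
FactoryBot.define do
  factory :user do
    sequence(:email) { |n| "user#{n}@example.com" }  # unique value per build
    name { 'Test User' }

    trait :with_expired_subscription do
      subscription_expires_at { Date.today - 1 }     # edge-case variant as a trait
    end
  end
end

# Usage in a step definition or spec:
#   user = FactoryBot.create(:user, :with_expired_subscription)
```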
## Common Challenges
### Test Organisation
- Balancing unit and integration tests
- Managing test data complexity
- Handling asynchronous operations
- Maintaining test isolation
- Managing test performance
### Mock Management
- Avoiding over-mocking
- Maintaining mock fidelity
- Handling complex interactions
- Managing mock setup
- Verifying mock usage
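Verified doubles are one way to keep mock fidelity in an RSpec suite: they fail fast when a stubbed method does not exist on the real class. `PaymentGateway` and `#charge` below are hypothetical names.

```ruby
# instance_double checks the stub against PaymentGateway's real interface,
# so the mock cannot silently drift as the class changes.
gateway = instance_double(PaymentGateway)
allow(gateway).to receive(:charge).with(amount: 100).and_return(:accepted)
```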
### Integration Points
- External service simulation
- Database interaction
- File system operations
- Network requests
- Time-dependent operations
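Two common examples, assuming the `webmock` and `timecop` gems; the endpoint and the `invoice.overdue?` check are hypothetical.

```ruby
# External service simulation with WebMock (hypothetical endpoint)
stub_request(:post, 'https://payments.example.com/charges')
  .to_return(status: 201,
             body: '{"id":"ch_1"}',
             headers: { 'Content-Type' => 'application/json' })

# Time-dependent behaviour pinned with Timecop so assertions are deterministic
Timecop.freeze(Time.utc(2024, 1, 1)) do
  expect(invoice.overdue?).to be(false)
end
```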
## Best Practices
### Information Gathering
- Document assumptions
- Validate requirements
- Check edge cases
- Verify constraints
- Confirm priorities
### Test Structure
- Keep scenarios focused
- Use clear descriptions
- Maintain isolation
- Handle cleanup
- Document prerequisites
### Mock Usage
- Mock at architectural boundaries
- Use consistent patterns
- Document mock behaviour
- Verify interactions
- Clean up resources
## Implementation Flow
### 1. Initial Setup
- Create feature file
- Define scenarios
- Set up test helpers
- Configure test environment
- Prepare mock framework
### 2. Layer Implementation
- Start with outer layer
- Define interfaces
- Create test doubles
- Implement minimally
- Verify behaviour
### 3. Integration
- Replace mocks progressively
- Verify interactions
- Test boundaries
- Handle errors
- Validate behaviour
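One way to replace mocks progressively, sketched under the assumption of an RSpec suite: the outer example group keeps a verified double, and a nested context tagged `:integration` swaps in the real adapter. All class and environment variable names here are hypothetical.

```ruby
RSpec.describe CheckoutService do
  subject(:service) { CheckoutService.new(gateway: gateway) }

  # Default: a verified double keeps the suite fast and isolated
  let(:gateway) { instance_double(PaymentGateway, charge: :accepted) }

  context 'against the real gateway', :integration do
    # Overridden let swaps the double for the real collaborator
    let(:gateway) { PaymentGateway.new(api_key: ENV.fetch('GATEWAY_API_KEY')) }

    it 'charges a sandbox card' do
      expect(service.checkout(amount: 100)).to eq(:accepted)
    end
  end
end
```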
## Tools and Templates
### Feature Template
```gherkin
Feature: [Feature Name]
  As a [persona]
  I want [action]
  So that [benefit]

  Background:
    Given [context]

  Scenario: [scenario description]
    Given [preconditions]
    When [actions]
    Then [outcomes]
```
### Mock Template
```ruby
describe ServiceUnderTest do
  subject(:service) { ServiceUnderTest.new(dependency) }

  let(:dependency) { double('dependency') }

  # Placeholder values -- replace with data relevant to the feature
  let(:expected_args)   { 'args' }
  let(:response)        { 'response' }
  let(:expected_result) { 'result' }

  before do
    allow(dependency).to receive(:method)
      .with(expected_args)
      .and_return(response)
  end

  it 'behaves as expected' do
    expect(service.action).to eq(expected_result)
    expect(dependency).to have_received(:method)
      .with(expected_args)
  end
end
```
## Common Pitfalls
- Insufficient requirement gathering
- Unclear test boundaries
- Over-complicated scenarios
- Brittle test data
- Poor mock management
- Unclear layer separation
- Missing edge cases