Flaky Test Detection
Set up automated flaky test detection to identify unreliable tests in your CI pipeline and improve test suite reliability.
Flaky test detection helps you identify tests that produce inconsistent results on the same code, allowing you to improve the reliability of your test suite and reduce false positives in your CI pipeline.
Understanding Flaky Tests
A flaky test is one that produces different results on the same commit (SHA1). For example, if a test runs twice on the same commit and fails once but passes the other time, it’s considered flaky because the outcome is not consistent for the same code.
Consider a simplified picture of flakiness: two executions of the same tests on the exact same commit (same SHA1) produce different results. This inconsistency is what flags the test as flaky.
Flaky tests are problematic because they:
- Create false positives that block legitimate deployments
- Reduce confidence in your test suite
- Waste developer time investigating non-issues
- Can mask real bugs when they fail intermittently
How Flaky Detection Works
Flaky test detection works by running your test suite multiple times on the same commit. When tests produce different results (pass/fail) across these runs on identical code, they are flagged as flaky. This approach helps identify tests that are unreliable and may cause false positives or negatives in your CI pipeline.
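As a rough illustration of this approach (not how CI Insights implements it), the Python sketch below shells out to pytest a few times on the same checkout, parses each run’s JUnit XML report, and flags any test whose outcome differs between runs. The helper names and the report path are assumptions made for this example.

```python
# Minimal sketch: rerun the suite on the same commit and flag inconsistent tests.
import subprocess
import xml.etree.ElementTree as ET

def run_suite(report_path="report.xml"):
    """Run pytest once and return {test_id: 'passed' | 'failed'}."""
    subprocess.run(["pytest", f"--junitxml={report_path}"], check=False)
    results = {}
    for case in ET.parse(report_path).getroot().iter("testcase"):
        test_id = f"{case.get('classname')}::{case.get('name')}"
        failed = case.find("failure") is not None or case.find("error") is not None
        results[test_id] = "failed" if failed else "passed"
    return results

def detect_flaky(runs=3):
    """Flag tests whose outcome is not identical across all runs on this commit."""
    outcomes = {}
    for _ in range(runs):
        for test_id, outcome in run_suite().items():
            outcomes.setdefault(test_id, set()).add(outcome)
    return sorted(test_id for test_id, seen in outcomes.items() if len(seen) > 1)

if __name__ == "__main__":
    for test in detect_flaky():
        print(f"flaky: {test}")
```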
Setting Up Flaky Test Detection
See CI‑specific setup guides:
Interpreting Flaky Test Results
Once your flaky test detection is running, CI Insights will analyze the results and identify patterns:
In the CI Insights Dashboard
- Tests View: Navigate to the Tests section in CI Insights to see tests flagged as flaky
- Consistency Metrics: View the success/failure ratio for each test across multiple runs
- Timeline Analysis: See when flakiness was first detected and how it trends over time
- Impact Assessment: Understand which tests are causing the most CI instability
What to Look For
- High Flakiness Rate: Tests that fail inconsistently across runs on the same commit
- Recent Flakiness: Newly introduced flaky behavior that may indicate recent code changes
- Critical Path Tests: Flaky tests in important workflows that could block deployments
- Patterns: Flakiness that occurs under specific conditions (time of day, load, etc.)
Taking Action
When flaky tests are identified:
- Prioritize by Impact: Focus on tests that affect critical workflows first
- Investigate Root Causes: Look for timing issues, external dependencies, or race conditions
- Improve Test Reliability: Add proper waits, mocks, or test isolation
- Monitor Progress: Use CI Insights to verify that fixes reduce flakiness over time
Common Causes of Flaky Tests
Understanding common causes can help you fix flaky tests more effectively:
Timing Issues
- Race conditions: Tests that depend on timing between operations
- Insufficient waits: Tests that don’t wait long enough for operations to complete
- Timeouts: Tests with hardcoded timeouts that may vary in different environments
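For example, a fixed sleep makes a test depend on how fast the environment happens to be on a given run. A hedged sketch of the usual fix, polling for the actual condition instead of sleeping (wait_until is a hypothetical helper, not part of any particular framework):

```python
# Replace a fixed sleep with a polling wait so slow environments don't break the test.
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError("condition was not met within the timeout")

# Flaky: assumes the background job always finishes within 2 seconds.
#   time.sleep(2)
#   assert job.is_done()
#
# More reliable: wait for the actual condition, up to a generous timeout.
#   wait_until(lambda: job.is_done(), timeout=10)
#   assert job.is_done()
```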
External Dependencies
- Network calls: Tests that make real HTTP requests
- Database state: Tests that depend on specific database state
- File system: Tests that read/write files without proper cleanup
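As a sketch of how to isolate a test from the network, the hypothetical fetch_status() below normally performs a real HTTP request; the test patches requests.get so it never leaves the process and always sees the same response:

```python
# Isolate a test from the network with unittest.mock.
from unittest import mock

import requests

def fetch_status(order_id):
    """Hypothetical code under test: makes a real HTTP request."""
    response = requests.get(f"https://api.example.com/orders/{order_id}")
    return response.json()["status"]

def test_fetch_status_does_not_hit_the_network():
    fake_response = mock.Mock(status_code=200)
    fake_response.json.return_value = {"status": "shipped"}

    # Replace the real HTTP call so the test is fast and deterministic.
    with mock.patch("requests.get", return_value=fake_response) as fake_get:
        assert fetch_status(order_id=42) == "shipped"
        fake_get.assert_called_once()
```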
Test Isolation
- Shared state: Tests that affect each other’s state
- Order dependencies: Tests that only pass when run in a specific order
- Resource conflicts: Tests competing for the same resources
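One way to enforce isolation, sketched here with a pytest fixture: give every test its own freshly created database so no test can observe state left behind by another, and so the suite passes in any order (the schema is purely illustrative):

```python
# Each test gets a brand-new SQLite database, created before and closed after the test.
import sqlite3

import pytest

@pytest.fixture
def db(tmp_path):
    """Create an isolated database per test and clean it up afterwards."""
    conn = sqlite3.connect(str(tmp_path / "test.db"))
    conn.execute("CREATE TABLE users (name TEXT)")
    yield conn
    conn.close()

def test_insert_user(db):
    db.execute("INSERT INTO users VALUES ('alice')")
    assert db.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 1

def test_table_starts_empty(db):
    # Passes regardless of what other tests did, because the database is fresh.
    assert db.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 0
```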
Environment Variations
- System load: Tests sensitive to CPU or memory usage
- Date/time dependencies: Tests that depend on current time
- Random data: Tests using non-deterministic random values
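A sketch of pinning the last two sources of variation down: seed the random generator and inject the clock instead of reading the real time (issue_coupon is a hypothetical function used only to illustrate the pattern):

```python
# Make randomness and time explicit inputs so the test result never varies.
import random
from datetime import datetime, timezone

def issue_coupon(rng=random, now=None):
    """Hypothetical code under test: depends on randomness and the clock."""
    issued_at = now or datetime.now(timezone.utc)
    code = f"{rng.randint(0, 9999):04d}"
    return code, issued_at

def test_issue_coupon_is_deterministic():
    # Fixed seed and fixed timestamp -> the same result on every run.
    rng = random.Random(1234)
    frozen = datetime(2024, 1, 1, tzinfo=timezone.utc)

    code, issued_at = issue_coupon(rng=rng, now=frozen)

    assert issued_at == frozen
    assert code == f"{random.Random(1234).randint(0, 9999):04d}"
```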
Best Practices for Reliable Tests
To prevent flaky tests:
- Use deterministic data: Replace random values with fixed test data
- Mock external dependencies: Isolate tests from network, database, and file system
- Implement proper waits: Use explicit waits instead of fixed sleeps
- Clean up after tests: Ensure each test starts with a clean state
- Make tests independent: Each test should be able to run in isolation
- Use stable selectors: In UI tests, use reliable element selectors
- Handle async operations: Properly wait for asynchronous operations to complete
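Several of these practices come together in UI tests. The hedged Selenium sketch below combines a stable data-testid selector with an explicit wait instead of a fixed sleep; the page URL, the selector, and the expected text are assumptions made for this example:

```python
# UI test sketch: stable selector plus an explicit wait instead of time.sleep().
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

def test_submit_button_works():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/form")  # hypothetical page
        # Wait for the element to actually be clickable, up to 10 seconds,
        # and target it with a dedicated test attribute rather than a brittle XPath.
        button = WebDriverWait(driver, 10).until(
            EC.element_to_be_clickable((By.CSS_SELECTOR, "[data-testid='submit']"))
        )
        button.click()
        assert "Thank you" in driver.page_source
    finally:
        driver.quit()
```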