Flaky Test Detection

Set up automated flaky test detection to identify unreliable tests in your CI pipeline and improve test suite reliability.


Flaky test detection helps you identify tests that produce inconsistent results on the same code, allowing you to improve the reliability of your test suite and reduce false positives in your CI pipeline.

A flaky test is one that produces different outcomes for the same commit SHA. For example, if a test runs twice on the same commit and one run fails while the other succeeds, the test is considered flaky because its outcome is not consistent for the same code.

Below is a simplified illustration of flakiness: two runs of the same test on the exact same commit (same SHA1) produce different results. This inconsistency is what flags the test as flaky.

  • test_something2 on commit abc123: Run #1 passes, Run #2 fails → flagged as flaky
  • test_something on commit def456: Run #1 passes, Run #2 passes → consistent (not flaky)
  • test_something on commit ghi789: Run #1 fails, Run #2 fails → consistent (not flaky)

Flaky tests are problematic because they:

  • Create false positives that block legitimate deployments
  • Reduce confidence in your test suite
  • Waste developer time investigating non-issues
  • Can mask real bugs when they fail intermittently

Flaky test detection works by running your test suite multiple times on the same commit. When tests produce different results (pass/fail) across these runs on identical code, they are flagged as flaky. This approach helps identify tests that are unreliable and may cause false positives or negatives in your CI pipeline.
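
To illustrate this approach, the sketch below groups run results by test and commit SHA and flags any combination with mixed outcomes. It is a simplified, hypothetical example, not the actual CI Insights implementation:

```python
from collections import defaultdict

def find_flaky_tests(runs):
    """Flag tests whose runs on the same commit SHA produced mixed outcomes.

    `runs` is an iterable of (test_name, commit_sha, passed) records,
    e.g. collected from several CI runs of the same commit.
    """
    outcomes = defaultdict(set)
    for test_name, commit_sha, passed in runs:
        outcomes[(test_name, commit_sha)].add(passed)
    # A test is flaky on a commit if it both passed and failed there.
    return {key for key, results in outcomes.items() if len(results) > 1}

runs = [
    ("test_something2", "abc123", True),   # Run #1: pass
    ("test_something2", "abc123", False),  # Run #2: fail -> inconsistent
    ("test_something", "def456", True),    # Run #1: pass
    ("test_something", "def456", True),    # Run #2: pass -> consistent
]
print(find_flaky_tests(runs))  # {('test_something2', 'abc123')}
```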

Setting Up Flaky Test Detection

See the CI‑specific setup guides:

  • GitHub Actions: configure scheduled, matrix, or looping runs and upload the resulting test reports.
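
As one possible way to loop runs inside a single CI job, the sketch below reruns the suite several times and writes one report per run. It assumes a pytest suite and JUnit XML reports; the upload step depends on your CI setup and is not shown:

```python
import subprocess
import sys

# Hypothetical loop: rerun the same test suite several times on the current
# commit so flaky tests have a chance to show inconsistent results.
# Assumes a pytest suite; each run writes its own JUnit XML report that a
# later CI step can upload for analysis.
NUM_RUNS = 3

for run_index in range(1, NUM_RUNS + 1):
    result = subprocess.run(
        ["pytest", f"--junitxml=report-run-{run_index}.xml"],
        check=False,  # keep rerunning even if one run fails
    )
    print(f"Run #{run_index} exited with code {result.returncode}", file=sys.stderr)
```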

Interpreting Flaky Test Results

Once your flaky test detection is running, CI Insights will analyze the results and identify patterns:

  1. Tests View: Navigate to the Tests section in CI Insights to see tests flagged as flaky
  2. Consistency Metrics: View the success/failure ratio for each test across multiple runs (sketched in code below)
  3. Timeline Analysis: See when flakiness was first detected and how it trends over time
  4. Impact Assessment: Understand which tests are causing the most CI instability

When reviewing these views, watch for the following signals:

  • High Flakiness Rate: Tests that fail inconsistently across runs on the same commit
  • Recent Flakiness: Newly introduced flaky behavior that may indicate recent code changes
  • Critical Path Tests: Flaky tests in important workflows that could block deployments
  • Patterns: Flakiness that occurs under specific conditions (time of day, load, etc.)
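
For intuition, the consistency metric above amounts to a pass ratio per test and commit across repeated runs. The sketch below is a hypothetical illustration, not how CI Insights computes it internally:

```python
from collections import defaultdict

def pass_ratios(runs):
    """Compute the pass ratio per (test_name, commit_sha) across repeated runs.

    `runs` is an iterable of (test_name, commit_sha, passed) records.
    A ratio strictly between 0.0 and 1.0 means the test was flaky on that commit.
    """
    counts = defaultdict(lambda: [0, 0])  # (test, sha) -> [passes, total runs]
    for test_name, commit_sha, passed in runs:
        counts[(test_name, commit_sha)][0] += int(passed)
        counts[(test_name, commit_sha)][1] += 1
    return {key: passes / total for key, (passes, total) in counts.items()}

print(pass_ratios([
    ("test_checkout", "abc123", True),
    ("test_checkout", "abc123", False),
    ("test_checkout", "abc123", True),
]))  # {('test_checkout', 'abc123'): 0.666...}
```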

When flaky tests are identified:

  1. Prioritize by Impact: Focus on tests that affect critical workflows first
  2. Investigate Root Causes: Look for timing issues, external dependencies, or race conditions
  3. Improve Test Reliability: Add proper waits, mocks, or test isolation (an explicit-wait helper is sketched after this list)
  4. Monitor Progress: Use CI Insights to verify that fixes reduce flakiness over time
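
As a sketch of the "proper waits" part of step 3, the hypothetical helper below polls a condition instead of sleeping for a fixed duration. The wait_until name and the job object in the usage comment are illustrative only:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns True or the timeout elapses.

    Replaces a fixed `time.sleep(...)` with an explicit wait: the test
    proceeds as soon as the operation completes and fails with a clear
    timeout error otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout} seconds")

# Usage in a test, assuming some asynchronous `job` object:
# wait_until(lambda: job.is_finished())
# assert job.result == "ok"
```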

Understanding common causes can help you fix flaky tests more effectively:

  • Race conditions: Tests that depend on timing between operations
  • Insufficient waits: Tests that don’t wait long enough for operations to complete
  • Timeouts: Tests with hardcoded timeouts that may vary in different environments
  • Network calls: Tests that make real HTTP requests
  • Database state: Tests that depend on specific database state
  • File system: Tests that read/write files without proper cleanup
  • Shared state: Tests that affect each other’s state
  • Order dependencies: Tests that only pass when run in a specific order
  • Resource conflicts: Tests competing for the same resources
  • System load: Tests sensitive to CPU or memory usage
  • Date/time dependencies: Tests that depend on the current time
  • Random data: Tests using non-deterministic random values (both are illustrated in the example after this list)
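
To make the last two causes concrete, here is a hypothetical pytest-style example showing two flaky tests alongside deterministic rewrites; the is_business_hours function and the test names are made up for illustration:

```python
import random
from datetime import datetime, timezone

def is_business_hours(now):
    """Toy function under test (hypothetical)."""
    return 9 <= now.hour < 17

# Flaky: the outcome depends on when CI happens to run the test.
def test_business_hours_flaky():
    assert is_business_hours(datetime.now(timezone.utc))

# Flaky: a random input makes the assertion fail intermittently.
def test_random_value_flaky():
    assert random.randint(0, 10) < 10

# Deterministic rewrites: a fixed timestamp and fixed test data.
def test_business_hours_deterministic():
    fixed_now = datetime(2024, 1, 15, 10, 0, tzinfo=timezone.utc)
    assert is_business_hours(fixed_now)

def test_random_value_deterministic():
    value = 7  # representative fixed data instead of a random input
    assert value < 10
```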

Best Practices for Reliable Tests

To prevent flaky tests:

  1. Use deterministic data: Replace random values with fixed test data
  2. Mock external dependencies: Isolate tests from network, database, and file system (see the example after this list)
  3. Implement proper waits: Use explicit waits instead of fixed sleeps
  4. Clean up after tests: Ensure each test starts with a clean state
  5. Make tests independent: Each test should be able to run in isolation
  6. Use stable selectors: In UI tests, use reliable element selectors
  7. Handle async operations: Properly wait for asynchronous operations to complete
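
As a sketch of practice 2, the hypothetical test below isolates a made-up fetch_username function from the network by patching the HTTP call with unittest.mock; it assumes the requests library is available:

```python
from unittest import mock

import requests

def fetch_username(user_id):
    """Hypothetical code under test that calls a real HTTP API."""
    response = requests.get(f"https://api.example.com/users/{user_id}")
    response.raise_for_status()
    return response.json()["name"]

def test_fetch_username_without_touching_the_network():
    fake_response = mock.Mock()
    fake_response.json.return_value = {"name": "alice"}
    fake_response.raise_for_status.return_value = None

    # Patch the HTTP call so the test never leaves the process.
    with mock.patch.object(requests, "get", return_value=fake_response) as get:
        assert fetch_username(42) == "alice"
        get.assert_called_once_with("https://api.example.com/users/42")
```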