
The Many Reasons Why a Pipeline Can Fail

4 min read

A CI/CD pipeline is a sequence of automated steps that validate and deliver software throughout the software development lifecycle. Initially, I believed that as long as the pipeline remained operational and code passed all checks during each run, we were set. However, I’ve come to realize that a pipeline's stability and integrity rely on multiple factors, many of which extend beyond the control of individual contributors. Here are some key reasons why a pipeline might become flaky or fail.

Unless a fully containerized environment is in use—which presents its own challenges—the state of the machine running the pipeline can significantly affect build and test results. Hardware issues may be less frequent, but a number of other factors can disrupt pipeline runs.

Network

Network conditions impact a pipeline in multiple ways, primarily because the code is executed on a remote machine. This requires fetching the code and downloading dependencies, which can encounter issues such as:

  • Git checkout errors
  • Dependency download failures
  • Authentication issues

These issues are exacerbated in environments with self-hosted package registries: unlike the generally more stable public registries, a self-hosted one can make packages unavailable whenever the network goes down or the server is overloaded.

Authentication deserves special mention as multiple failure points exist here: authentication strategies can change, tokens may expire, or some packages might require new forms of authentication. This is particularly common in intranet environments with strict access controls.

Lastly, a slow or unreliable network can also cause timeouts, while proxy-related settings can introduce further issues.
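
Transient network failures like these can often be absorbed by retrying with backoff instead of failing the run on the first error. Below is a minimal Python sketch of that idea using only the standard library; the registry URL in the usage comment is a made-up placeholder, not a real endpoint.

```python
import time
import urllib.error
import urllib.request


def fetch_with_retries(url: str, attempts: int = 3, base_delay: float = 2.0,
                       timeout: float = 30.0) -> bytes:
    """Download a resource, retrying transient network errors with backoff."""
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.read()
        except (urllib.error.URLError, TimeoutError):
            if attempt == attempts:
                raise  # give up and let the pipeline step fail visibly
            # exponential backoff: 2s, 4s, 8s, ...
            time.sleep(base_delay * 2 ** (attempt - 1))


# Example: fetching a package from a (hypothetical) self-hosted registry.
# data = fetch_with_retries("https://registry.internal.example/pkg-1.0.tar.gz")
```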

Disk Space

Insufficient disk space can cause pipeline failures, especially if the pipeline generates extensive logs or temporary files, or fails to clean up after each run. Resolving this may seem as simple as freeing up space regularly, but there are trade-offs:

  • Caching dependencies or files can speed up pipeline runs by reusing previously downloaded resources.
  • Keeping logs or other generated files can be useful for debugging when failures occur.

Balancing cleanup with caching and logging is a delicate task.
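
One common compromise is to keep the cache but cap its size, evicting the oldest entries first so recent runs still benefit from it. Here is a small Python sketch of that idea; the cache path and size limit are illustrative, not taken from any particular CI system.

```python
import shutil
from pathlib import Path


def prune_cache(cache_dir: str, max_bytes: int) -> None:
    """Delete the oldest cache entries until the cache fits under the cap."""
    root = Path(cache_dir)
    # Oldest entries first, based on modification time.
    entries = sorted(root.iterdir(), key=lambda p: p.stat().st_mtime)
    total = sum(f.stat().st_size for f in root.rglob("*") if f.is_file())
    for entry in entries:
        if total <= max_bytes:
            break
        if entry.is_dir():
            size = sum(f.stat().st_size for f in entry.rglob("*") if f.is_file())
            shutil.rmtree(entry)
        else:
            size = entry.stat().st_size
            entry.unlink()
        total -= size


# Example: keep the dependency cache under 5 GB between runs.
# prune_cache("/ci/cache", max_bytes=5 * 1024**3)
```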

Memory

Running out of memory can also disrupt a pipeline, particularly when a run unexpectedly spawns a large number of processes, for example when tests start worker processes that never terminate.
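
One way to limit the damage is to give each step a hard time budget and make sure its process is terminated when that budget runs out. A minimal Python sketch using only the standard library; note it only cleans up the direct child, not any grandchildren that child may have spawned.

```python
import subprocess


def run_step(cmd: list[str], timeout: float = 300.0) -> int:
    """Run a pipeline step as a child process with a hard time budget."""
    proc = subprocess.Popen(cmd)
    try:
        return proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.terminate()          # ask the process to exit
        try:
            proc.wait(timeout=10)
        except subprocess.TimeoutExpired:
            proc.kill()           # force-kill if it ignores the request
        raise


# Example: run a test suite with a 5-minute budget.
# run_step(["pytest", "tests/"], timeout=300)
```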

OS Updates

Operating system updates can alter machine behavior, affecting system settings, environment variables, or software permissions. This may include:

  • System environment changes, such as altered PATH or system libraries
  • Incompatible or unavailable tools, as updates may deprecate previously available options
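
A cheap safeguard against the latter is a preflight check that verifies the expected tools are still on the PATH before the real work starts, so a broken runner fails fast with a clear message. A small Python sketch; the tool names are placeholders for whatever the pipeline actually invokes.

```python
import shutil
import sys

# Placeholder list; substitute the tools your pipeline actually depends on.
REQUIRED_TOOLS = ["git", "docker", "node"]


def preflight_check(tools: list[str]) -> None:
    """Exit early, with a clear message, if a required tool is missing."""
    missing = [tool for tool in tools if shutil.which(tool) is None]
    if missing:
        sys.exit(f"missing required tools on this runner: {', '.join(missing)}")


# preflight_check(REQUIRED_TOOLS)
```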

In many cases, the tests themselves are the source of failures. Flaky tests can be particularly troublesome, as they’re prone to intermittent failures that aren’t always reproducible. Here are some common reasons for test instability.

Timeouts

Tests that depend on external services or processes often use timeouts to handle delays. However, unexpected latency can cause these tests to fail. For example, high-level integration tests might simulate real-world scenarios with intentional delays, yet the machine may still be too slow to meet the expected timing.
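
A common way to make such tests less sensitive to machine speed is to poll for the expected state with a deadline, rather than sleeping for a fixed amount of time. A minimal Python sketch of that pattern; the `job.is_done()` call in the usage comment is hypothetical.

```python
import time
from typing import Callable


def wait_until(condition: Callable[[], bool], timeout: float = 10.0,
               interval: float = 0.2) -> None:
    """Poll a condition until it holds or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout} seconds")


# Example in a test: wait for a (hypothetical) background job to finish.
# wait_until(lambda: job.is_done(), timeout=30)
```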

Race Conditions

Race conditions are another source of flakiness. These occur when a test's success depends on the order or timing of events, such as when multiple processes attempt to update shared state. They are especially likely in code that isn't thread-safe, or when separate programs have to coordinate work across threads or processes.
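
As a toy illustration in Python: the read-modify-write below can interleave across threads and lose updates, which only shows up intermittently; serializing it with a lock makes the result deterministic. The counter is purely illustrative.

```python
import threading

counter = 0
lock = threading.Lock()


def increment(times: int) -> None:
    global counter
    for _ in range(times):
        # Without the lock, the read-modify-write of `counter` can interleave
        # between threads and silently lose updates.
        with lock:
            counter += 1


threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 400_000  # deterministic only because the lock serializes updates
```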

Test Data

Tests relying on specific data can also become flaky if that data becomes outdated. For instance, expired accounts or tokens that are no longer valid can cause tests to fail.
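
One way to reduce this kind of flakiness is to generate such data at test time instead of checking in fixed values that can expire. A small, purely illustrative Python sketch; the field names and helpers are hypothetical, not a real API.

```python
import datetime


def fresh_expiry(minutes: int = 30) -> str:
    """Return an expiry timestamp a fixed distance in the future."""
    expires = (datetime.datetime.now(datetime.timezone.utc)
               + datetime.timedelta(minutes=minutes))
    return expires.isoformat()


def make_test_account() -> dict:
    """Build a throwaway account record for this test run.

    The field names are illustrative; the point is that the expiry is
    generated fresh each run, so it can never be stale.
    """
    return {"user": "ci-test-user", "token_expires_at": fresh_expiry()}
```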

Summary

Even with all these factors considered, it's nearly impossible to guarantee that a pipeline will never fail. Unless the code is set in stone and no further changes are made, which is unlikely, failures are inevitable. The key is then to anticipate these failures and design the pipeline to be resilient to them. Providing all the necessary tools for debugging and monitoring is crucial to aid a speedy recovery when failures do occur.