Hidden Technical Debt in Machine Learning Systems
- Category: Article
- Created: January 17, 2022 3:28 PM
- Status: Open
- Updated: January 17, 2022 5:02 PM
- url: https://proceedings.neurips.cc/paper/2015/file/86df7dcfd896fcaf2674f757a2463eba-Paper.pdf
New Metaphors
Technical debt: the long-term costs incurred by moving quickly in software engineering.
Smell: in software engineering, a design smell may indicate an underlying problem in a component or system.
Background
- This paper argues it is dangerous to think of machine learning’s quick wins as coming for free. Using the software engineering framework of technical debt, the authors find it is common to incur massive ongoing maintenance costs in real-world ML systems.
- Developing and deploying ML systems is relatively fast and cheap, but maintaining them over time is difficult and expensive.
- The authors argue that ML systems have a special capacity for incurring technical debt, because they have all of the maintenance problems of traditional code plus an additional set of ML-specific issues. This debt may be difficult to detect because it exists at the system level rather than the code level.
Highlights
- This paper does not offer novel ML algorithms, but instead seeks to increase the community’s awareness of the difficult tradeoffs that must be considered in practice over the long term.
Contents
Complex Models Erode Boundaries
Traditional software engineering practice has shown that strong abstraction boundaries using encapsulation and modular design help create maintainable code. However, it is difficult to enforce strict abstraction boundaries for machine learning systems by prescribing specific intended behavior.
ML is required in exactly those cases when the desired behavior cannot be effectively expressed in software logic without dependency on external data. The real world does not fit into tidy encapsulation.
Entanglement
- Machine learning systems mix signals together, entangling them and making isolation of improvements impossible.
- The authors refer to this as the CACE principle: Changing Anything Changes Everything (a minimal sketch follows this list).
- One possible mitigation strategy is to isolate models and serve ensembles. But relying on the combination creates its own strong entanglement: improving an individual component model may actually make the overall system accuracy worse.
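A minimal sketch of CACE on synthetic data with a hypothetical logistic-regression model: dropping one correlated input signal changes the learned weights of every remaining feature, so no signal can be changed in isolation.

```python
# CACE sketch: removing (or rescaling) one signal shifts the weights on all others.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.2 * rng.normal(size=n)      # x2 is strongly correlated with x1
x3 = rng.normal(size=n)
y = (x1 + x3 + 0.5 * rng.normal(size=n) > 0).astype(int)

def weights(*cols):
    X = np.column_stack(cols)
    return LogisticRegression().fit(X, y).coef_.ravel().round(2)

print("weights with [x1, x2, x3]:", weights(x1, x2, x3))
print("weights with [x2, x3] only:", weights(x2, x3))   # x2's weight jumps to proxy x1
```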
Correction Cascades
- A correction model \(m'_a\), learned on top of a base model \(m_a\) to solve a slightly different problem, creates a new system dependency on \(m_a\), making it significantly more expensive to analyze improvements to that model in the future.
- A correction cascade can create an improvement deadlock, as improving the accuracy of any individual component actually leads to system-level detriments.
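A minimal sketch of a correction cascade with hypothetical models on synthetic data: the correction model \(m'_a\) consumes the output of the base model \(m_a\) as an input feature, which is exactly the new system dependency described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Base problem A and its model m_a.
X_a = rng.normal(size=(2000, 5))
y_a = (X_a[:, 0] + X_a[:, 1] > 0).astype(int)
m_a = LogisticRegression().fit(X_a, y_a)

# Slightly different problem A': same inputs, shifted decision boundary.
X_ap = rng.normal(size=(2000, 5))
y_ap = (X_ap[:, 0] + X_ap[:, 1] > 0.5).astype(int)

# m_a_prime takes m_a's score as a feature, creating a hard dependency on m_a:
# any change to m_a silently changes m_a_prime's input distribution.
base_score = m_a.predict_proba(X_ap)[:, [1]]
m_a_prime = LogisticRegression().fit(np.hstack([base_score, X_ap[:, 2:]]), y_ap)
```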
Undeclared Consumers
- Without access controls, some of these consumers may be undeclared, silently using the output of a given model as an input to another system. In more classical software engineering, these issues are referred to as visibility debt.
- In practice, this tight coupling can radically increase the cost and difficulty of making any changes to \(m_a\) at all, even if they are improvements.
Data Dependencies Cost More than Code Dependencies
Unstable Data Dependencies
- Some input signals are unstable, meaning that they qualitatively or quantitatively change behavior over time.
- One common mitigation strategy for unstable data dependencies is to create a versioned copy of a given signal.
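A minimal sketch of the versioned-copy mitigation, with hypothetical signal names: the consuming model pins an explicit snapshot of the upstream signal, so moving to a new version becomes a deliberate, reviewable change rather than silent drift.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass(frozen=True)
class VersionedSignal:
    name: str
    version: str
    compute: Callable[[dict], float]   # frozen featurization logic for this snapshot

# Hypothetical registry of snapshots of an upstream "topic_cluster" signal.
SIGNALS: Dict[str, VersionedSignal] = {
    "topic_cluster/v1": VersionedSignal("topic_cluster", "v1", lambda e: e["cluster_score_v1"]),
    "topic_cluster/v2": VersionedSignal("topic_cluster", "v2", lambda e: e["cluster_score_v2"]),
}

# The model declares the exact version it depends on; bumping it requires an
# explicit change (and ideally re-validation) instead of happening silently.
PINNED_SIGNAL = SIGNALS["topic_cluster/v1"]
```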
Underutilized Data Dependencies
- Underutilized data dependencies are input signals that provide little incremental modeling benefit. These can make an ML system unnecessarily vulnerable to change, sometimes catastrophically so, even though they could be removed with no detriment. The paper suggests detecting them via regular, exhaustive leave-one-feature-out evaluations (a minimal sketch follows this list).
- Underutilized data dependencies can creep into a model in several ways.
- Legacy Features. The most common case is that a feature \(F\) is included in a model early in its development. Over time, \(F\) is made redundant by new features but this goes undetected.
- Bundled Features. Sometimes, a group of features is evaluated and found to be beneficial. Because of deadline pressures or similar effects, all the features in the bundle are added to the model together, possibly including features that add little or no value.
- \(\epsilon\)-Features. It is tempting for machine learning researchers to improve model accuracy even when the accuracy gain is very small or the complexity overhead is high.
- Correlated Features. Often two features are strongly correlated, but one is more directly causal. Many ML methods have difficulty detecting this and credit the two features equally, or may even pick the non-causal one. This results in brittleness if world behavior later changes the correlations.
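A minimal sketch of the leave-one-feature-out evaluation mentioned above, on synthetic data with a hypothetical scikit-learn model: features whose removal barely moves held-out accuracy are candidates for removal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(4000, 4))
y = (X[:, 0] + 0.7 * X[:, 1] > 0).astype(int)   # features 2 and 3 are pure noise

X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)
baseline = LogisticRegression().fit(X_tr, y_tr).score(X_va, y_va)

for i in range(X.shape[1]):
    keep = [j for j in range(X.shape[1]) if j != i]
    acc = LogisticRegression().fit(X_tr[:, keep], y_tr).score(X_va[:, keep], y_va)
    print(f"drop feature {i}: accuracy delta = {acc - baseline:+.4f}")
```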
Static Analysis of Data Dependencies
- Tools for static analysis of data dependencies are far less common, but are essential for error checking, tracking down consumers, and enforcing migration and updates.
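A minimal sketch of what such tooling can look like, with hypothetical signal names: each feature or model declares the signals it consumes, and a transitive closure answers "who is affected if this source changes?" (compare the transitive-closure question in the conclusions).

```python
from collections import defaultdict

# edges: signal -> set of downstream consumers
CONSUMERS = defaultdict(set)

def declare(consumer: str, *inputs: str) -> None:
    for signal in inputs:
        CONSUMERS[signal].add(consumer)

declare("features/ctr_7d", "logs/clicks", "logs/impressions")
declare("model/ranker_v3", "features/ctr_7d", "features/query_topic")
declare("dashboard/latency", "model/ranker_v3")   # an easily forgotten consumer

def downstream(signal: str) -> set:
    """Transitive closure of everything that depends on `signal`."""
    seen, stack = set(), [signal]
    while stack:
        for c in CONSUMERS[stack.pop()]:
            if c not in seen:
                seen.add(c)
                stack.append(c)
    return seen

print(downstream("logs/clicks"))
# -> {'features/ctr_7d', 'model/ranker_v3', 'dashboard/latency'}
```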
Feedback Loops
One of the key features of live ML systems is that they often end up influencing their own behavior if they update over time. This leads to a form of analysis debt, in which it is difficult to predict the behavior of a given model before it is released.
Direct Feedback Loops
A model may directly influence the selection of its own future training data.
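The paper notes that some amount of randomization can mitigate this. A minimal sketch in a hypothetical ranking setting: a small exploration budget keeps the model from selecting all of its own future training data.

```python
import random

EXPLORE_RATE = 0.05   # assumed exploration budget

def select_item(candidates, model_score):
    # Without the exploration branch, only items the model already ranks highly
    # ever generate labels, so future training data is biased by the model itself.
    if random.random() < EXPLORE_RATE:
        return random.choice(candidates)          # uniform exploration
    return max(candidates, key=model_score)       # exploitation of the current model
```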
Hidden Feedback Loops
A more difficult case is hidden feedback loops, in which two systems influence each other indirectly through the world.
ML-System Anti-Patterns
Glue Code
- ML researchers tend to develop general purpose solutions as self-contained packages.
- Glue code is costly in the long term because it tends to freeze a system to the peculiarities of a specific package.
- In this way, using a generic package can inhibit improvements, because it makes it harder to take advantage of domain-specific properties or to tweak the objective function to achieve a domain-specific goal.
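The paper's suggested counter-strategy is to wrap black-box packages behind common, project-owned APIs. A minimal sketch, with hypothetical names and interfaces:

```python
from typing import Protocol, Sequence

class Classifier(Protocol):
    """Common interface the rest of the system depends on."""
    def fit(self, X: Sequence[Sequence[float]], y: Sequence[int]) -> "Classifier": ...
    def predict(self, X: Sequence[Sequence[float]]) -> Sequence[int]: ...

class SklearnAdapter:
    """Adapter hiding one specific package behind the common interface."""
    def __init__(self):
        from sklearn.linear_model import LogisticRegression
        self._impl = LogisticRegression()

    def fit(self, X, y):
        self._impl.fit(X, y)
        return self

    def predict(self, X):
        return self._impl.predict(X)

# Swapping in a different package later means writing one new adapter,
# not rewriting glue code scattered across the system.
```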
Pipeline Jungles
- As a special case of glue code, pipeline jungles often appear in data preparation. These can evolve organically, as new signals are identified and new information sources added incrementally.
- Without care, the resulting system for preparing data in an ML-friendly format may become a jungle of scrapes, joins, and sampling steps, often with intermediate files output.
Glue code and pipeline jungles are symptomatic of integration issues that may have a root cause in overly separated “research” and “engineering” roles. When ML packages are developed in an ivory tower setting, the result may appear like black boxes to the teams that employ them in practice. A hybrid research approach where engineers and researchers are embedded together on the same teams (and indeed, are often the same people) can help reduce this source of friction significantly.
Dead Experimental Codepaths
- It becomes increasingly attractive in the short term to perform experiments with alternative methods by implementing experimental codepaths as conditional branches within the main production code.
- For any individual change, the cost of experimenting in this manner is relatively low—none of the surrounding infrastructure needs to be reworked. However, over time, these accumulated codepaths can create a growing debt due to the increasing difficulties of maintaining backward compatibility and an exponential increase in cyclomatic complexity.
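One way to keep such branches visible and periodically re-examined (my own illustration, not something the paper prescribes) is to register experimental codepaths explicitly, with owners and expiry dates, instead of burying them in ad-hoc conditionals:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ExperimentFlag:
    name: str
    owner: str
    expires: date          # forces a periodic re-examination of the branch

FLAGS = {
    "ranker_new_loss": ExperimentFlag("ranker_new_loss", "alice", date(2022, 6, 1)),
}

def is_enabled(name: str, today: date) -> bool:
    flag = FLAGS.get(name)
    return flag is not None and today <= flag.expires
```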
Abstraction Debt
There is a distinct lack of strong abstractions to support ML systems.
Common Smells
- Plain-Old-Data Type Smell.
- Multiple-Language Smell.
- Prototype Smell.
Configuration Debt
- Another potentially surprising area where debt can accumulate is in the configuration of machine learning systems.
- The authors have observed that both researchers and engineers may treat configuration (and extension of configuration) as an afterthought.
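The paper argues that a good configuration system should make it easy to assert and automatically verify basic facts about a configuration. A minimal sketch, with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrainingConfig:
    features: tuple
    learning_rate: float
    train_start: str   # ISO date
    train_end: str     # ISO date

def validate(cfg: TrainingConfig) -> None:
    assert cfg.features, "configuration must declare at least one feature"
    assert len(set(cfg.features)) == len(cfg.features), "duplicate features configured"
    assert 0 < cfg.learning_rate < 1, "implausible learning rate"
    assert cfg.train_start < cfg.train_end, "empty or inverted training window"

validate(TrainingConfig(("ctr_7d", "query_topic"), 0.05, "2021-01-01", "2021-12-31"))
```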
Dealing with Changes in the External World
One of the things that makes ML systems so fascinating is that they often interact directly with the external world. Experience has shown that the external world is rarely stable.
Fixed Thresholds in Dynamic Systems.
- It is often necessary to pick a decision threshold for a given model to perform some action.
- However, such thresholds are often manually set. Thus if a model updates on new data, the old manually set threshold may be invalid.
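The paper's suggested mitigation is to learn thresholds from held-out validation data rather than setting them by hand, so they can be refreshed whenever the model updates. A minimal sketch on synthetic scores, assuming a hypothetical precision target:

```python
import numpy as np

def pick_threshold(scores: np.ndarray, labels: np.ndarray, min_precision: float = 0.9) -> float:
    """Return the lowest threshold whose precision on held-out data meets the target."""
    for t in np.sort(np.unique(scores)):
        preds = scores >= t
        if preds.sum() == 0:
            break
        if labels[preds].mean() >= min_precision:
            return float(t)
    return float(scores.max())   # fall back to the most conservative threshold

rng = np.random.default_rng(3)
scores = rng.random(1000)
labels = (rng.random(1000) < scores).astype(int)   # higher score => more likely positive
print(pick_threshold(scores, labels))
```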
Monitoring and Testing
- Prediction Bias: In a system that is working as intended, it should usually be the case that the distribution of predicted labels is equal to the distribution of observed labels (a minimal sketch of this check follows this list).
- Action Limits: It can be useful to set and enforce action limits as a sanity check.
- Up-Stream Producers: These up-stream processes should be thoroughly monitored, tested, and routinely meet a service level objective that takes the downstream ML system needs into account.
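A minimal sketch of the prediction-bias check referenced above, with hypothetical slice keys: compare the rate of predicted positives against the rate of observed positives per slice and alert when they drift apart.

```python
from collections import defaultdict

def prediction_bias_alerts(records, tolerance=0.05):
    """records: iterable of (slice_key, predicted_label, observed_label)."""
    pred_pos, obs_pos, total = defaultdict(int), defaultdict(int), defaultdict(int)
    for key, pred, obs in records:
        total[key] += 1
        pred_pos[key] += int(pred)
        obs_pos[key] += int(obs)
    alerts = []
    for key in total:
        bias = pred_pos[key] / total[key] - obs_pos[key] / total[key]
        if abs(bias) > tolerance:
            alerts.append((key, round(bias, 3)))
    return alerts

# Example: predictions for the "US" slice drifted high relative to observed labels.
data = [("US", 1, 0)] * 30 + [("US", 1, 1)] * 70 + [("DE", 1, 1)] * 50 + [("DE", 0, 0)] * 50
print(prediction_bias_alerts(data))   # [('US', 0.3)]
```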
Other Areas of ML-related Debt
- Data Testing Debt
- Reproducibility Debt
- Process Management Debt
- Cultural Debt
Conclusions
Technical debt is a useful metaphor, but it unfortunately does not provide a strict metric that can be tracked over time. That a team is still able to move quickly is not in itself evidence of low debt or good practices, since the full cost of debt becomes apparent only over time.
A few useful questions to consider are:
- How easily can an entirely new algorithmic approach be tested at full scale?
- What is the transitive closure of all data dependencies?
- How precisely can the impact of a new change to the system be measured?
- Does improving one model or signal degrade others?
- How quickly can new members of the team be brought up to speed?
Personal thoughts
- A machine learning system carries more hidden debt than traditional software because it has all of the technical debt of traditional code plus an additional layer of ML-specific debt.
- The hidden debts of a machine learning system include boundary erosion, entanglement, hidden feedback loops, undeclared consumers, data dependencies, configuration issues, changes in the external world, and a variety of system-level anti-patterns.