Empirical Cycling Community Notes

Ten Minute Tips 18: Metrics Are Not Fitness

Original episode & show notes | Raw transcript

Training for Performance, Not Just the Metric: A Deep Dive

Introduction: The Core Problem

The central theme of the podcast is the critical distinction between training for performance and training to the metric. This distinction addresses a common pitfall in modern, data-driven endurance sports where athletes and coaches can become fixated on improving numbers in a software model, sometimes at the expense of real-world competitive ability.

1. The “Teaching to the Test” Trap: A Case Study on FTP

One of the most common examples of training to the metric is “teaching to the test” with FTP assessments, particularly shorthand tests like the 20-minute test or a ramp test.

The Interplay of Energy Systems

An effort like a 20-minute test is not a pure measure of aerobic fitness. It includes a significant anaerobic contribution. Your body is working above its maximal sustainable aerobic state, and the difference is made up by your finite anaerobic energy reserves.

Key Takeaway: The context of the training leading up to a test is crucial for interpreting the result. A block of anaerobically focused work can inflate a 20-minute number, and a purely aerobic block can deflate it, without any real change in sustainable power. Without that context, a simple number can be misleading.
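
A minimal arithmetic sketch of that anaerobic contribution, assuming the simple two-parameter critical power model P(t) = CP + W'/t and round numbers that are not from the episode:

```python
# How much finite anaerobic capacity (W') can add to a 20-minute test, under the
# two-parameter critical power model P(t) = CP + W'/t. Values are illustrative only.

def predicted_avg_power(cp_watts: float, w_prime_joules: float, seconds: float) -> float:
    """Best average power the model predicts for a maximal effort of the given duration."""
    return cp_watts + w_prime_joules / seconds

cp = 300.0           # assumed sustainable aerobic power, watts
w_prime = 20_000.0   # assumed anaerobic work capacity, joules (20 kJ)

twenty_min = predicted_avg_power(cp, w_prime, 20 * 60)
print(f"Predicted 20-min power: {twenty_min:.0f} W")               # ~317 W
print(f"Anaerobic contribution above CP: {twenty_min - cp:.0f} W")  # ~17 W

# A rider with the same CP but a bigger W' (30 kJ) tests higher on the same day:
print(f"Same CP, W' = 30 kJ: {predicted_avg_power(cp, 30_000, 1200):.0f} W")   # ~325 W
```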

2. Demystifying Performance Models (WKO5, Critical Power)

Modern training software uses models to separate aerobic and anaerobic contributions to power. Understanding how these models work is essential to avoid misinterpreting the data.

Key Metrics:

FTP (or mFTP in WKO5): the model's estimate of sustainable aerobic power.
FRC (Functional Reserve Capacity, WKO5): the modeled finite amount of work that can be done above FTP.
W' and CP (Critical Power model): the analogous finite anaerobic work capacity and the sustainable aerobic power it sits on top of.

The Inverse Relationship in the Model

These models are built on a fundamental mathematical relationship. In simple terms:

Total Power Output = Aerobic Power + Anaerobic Power

Because the model has to explain the same observed power data, assigning more of that output to the aerobic component necessarily leaves less to attribute to the anaerobic component, and vice versa.
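
As a rough numerical illustration of that identity (the figures below are invented, and this is not WKO5's actual fitting procedure), here is the same 20-minute effort split under three different aerobic baselines:

```python
# Illustration of Total = Aerobic + Anaerobic for one maximal effort.
# All numbers are made up for the example; this is not how WKO5 fits its model.

duration_s = 1200              # a 20-minute maximal effort
avg_power_w = 317.0            # observed average power for that effort
total_work_j = avg_power_w * duration_s

for aerobic_w in (295.0, 300.0, 305.0):                  # three candidate aerobic baselines
    aerobic_work_j = aerobic_w * duration_s
    anaerobic_work_j = total_work_j - aerobic_work_j     # what is left for FRC/W' to explain
    print(f"aerobic {aerobic_w:.0f} W -> anaerobic share {anaerobic_work_j / 1000:.1f} kJ")

# aerobic 295 W -> 26.4 kJ, 300 W -> 20.4 kJ, 305 W -> 14.4 kJ:
# the same ride, but the higher the aerobic estimate, the smaller the anaerobic share.
```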

Key Takeaway: A drop in FRC or W’ alongside a rise in FTP does not automatically mean a loss of anaerobic power. It is often an artifact of the model adjusting to a higher aerobic baseline. The true test is looking at raw power numbers for short durations.
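
A hedged sketch of that check, using an ordinary least-squares fit of the linear work-versus-time relation (not WKO5's fitting method) and invented numbers: after a block where only the longer efforts improved, the fitted W' drops while CP rises, yet the raw 3-minute power is identical.

```python
# Fit CP and W' from maximal efforts and compare across blocks, but also look at
# the raw short-duration power directly. Simple least-squares fit of
# work = W' + CP * t; data are invented, and this is not WKO5's actual model.

import numpy as np

def fit_cp_wprime(efforts: dict[int, float]) -> tuple[float, float]:
    """Fit work = W' + CP * t over maximal efforts given as {duration_s: avg_watts}."""
    durations = np.array(list(efforts.keys()), dtype=float)
    work = np.array([watts * secs for secs, watts in efforts.items()])
    cp, w_prime = np.polyfit(durations, work, 1)      # slope = CP, intercept = W'
    return cp, w_prime

before = {180: 400.0, 600: 340.0, 1200: 317.0}   # best efforts before a training block
after  = {180: 400.0, 600: 348.0, 1200: 326.0}   # longer efforts improved; 3-min unchanged

for label, efforts in (("before", before), ("after", after)):
    cp, w_prime = fit_cp_wprime(efforts)
    print(f"{label}: CP ~{cp:.0f} W, W' ~{w_prime / 1000:.1f} kJ, raw 3-min power {efforts[180]:.0f} W")

# The fitted W' falls (~19.6 kJ -> ~17.9 kJ) while CP rises, even though the
# rider's actual 3-minute power never changed.
```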

3. The Danger of Averages: Two Critical Logical Fallacies

Much of our training “wisdom” comes from scientific studies that report group averages. Applying this group data to an individual can be a mistake due to two logical fallacies.

A. The Fallacy of Division

This fallacy occurs when you assume that what is true for the whole (the group average) must be true for all the parts (each individual). For example, a protocol that raises a study group's average FTP can still leave some riders in that group unchanged or worse off.

B. The Fallacy of Composition

This is the reverse of the Fallacy of Division. It occurs when you assume that what is true for one part (an individual) must be true for the whole (the group). For example, one athlete's dramatic response to a particular workout does not mean that workout will work for everyone.
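
A small numerical illustration (all values invented, not taken from any study) of how the group average can mask individual responses in both directions:

```python
# Hypothetical responses of eight riders to the same training protocol.
# The group mean is positive, yet some individuals regressed (fallacy of division),
# and the best responder's result looks nothing like the group's (fallacy of composition).

ftp_changes_pct = [8, 6, 5, 4, 3, 0, -2, -3]

group_mean = sum(ftp_changes_pct) / len(ftp_changes_pct)
print(f"Group average change: {group_mean:+.1f}%")                                   # +2.6%
print(f"Riders who improved: {sum(c > 0 for c in ftp_changes_pct)} of {len(ftp_changes_pct)}")
print(f"Best responder: {max(ftp_changes_pct):+d}%, worst: {min(ftp_changes_pct):+d}%")
```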

4. From Diagnosis to Action: A Practical Framework

Instead of fixating on metrics, the podcast proposes a performance-first diagnostic framework. Analyze your race performance to identify your primary limiter, and then train to fix it.

Diagnostic Questions: Where in your races do you lose time or get dropped? What kind of effort precedes that moment, such as a long sustained climb, repeated surges, or the final sprint? Does it happen early or late in the event, and is the cause physical or tactical?

The Role of Metrics: Once you have diagnosed the performance limiter, you can then select the appropriate metric to track progress.

This approach places the focus on real-world outcomes and uses data as a tool to validate that the training is working, rather than making the data the goal itself.
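
One way to picture that workflow is a simple lookup from diagnosed limiter to the number worth tracking. The limiter names and pairings below are hypothetical examples of the idea, not recommendations from the episode:

```python
# Hypothetical mapping from a diagnosed race limiter to the metric used to
# verify that training is fixing it. Pairings are illustrative assumptions.

LIMITER_TO_METRIC = {
    "dropped on long sustained climbs": "raw 20-60 min power (with FTP as a proxy)",
    "dropped by repeated hard surges": "power on the 5th-10th repeat of a hard effort",
    "cannot follow short punchy attacks": "raw 1-5 min power",
    "losing the finish sprint": "raw 5-30 s peak power",
}

def metric_for(limiter: str) -> str:
    """Return the metric to track for a diagnosed limiter, if one is listed."""
    return LIMITER_TO_METRIC.get(limiter, "limiter unclear; go back to the race data")

print(metric_for("dropped by repeated hard surges"))
```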