The battle of thresholds and the misused concept of ‘high-intensity’ (part 1)

Mar 5, 2019 | ACADEMY

What do we really analyse?

I presume you’ve dealt with many tracking reports and, as has often happened to me, did not fully understand the meaning of the proposed thresholds. Indeed, I have often mulled over the following question: “what happens to the athlete above this threshold?”. The answer differs greatly depending on the parameter considered…


Usually, speed thresholds are used to determine speed categories and describe a number of occurrences, times and distances performed within each category. The underlying assumption is that the greater the activity in the higher speed categories, the higher the intensity sustained by the player.

For many years, we have argued that this is not the case, because speed alone does not take into account the high metabolic cost of accelerations, which obviously occur more frequently when speed is low. For example, during small-sided games, where the available space is limited, it is almost impossible to reach high speeds even though these exercises can be very demanding.

The way the categories are usually labelled can also be rather deceiving. The 0 to 8 km⋅h⁻¹ zone is defined as walking, although a sequence of three-step changes of direction “lives” in this band and has nothing to do with walking! At the other extreme, Eliud Kipchoge (Kenyan marathon world record holder) sustained a pace of 2:53 min⋅km⁻¹ (or 20.81 km⋅h⁻¹) that cannot be considered a two-hour sprint.
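To make the zone-based bookkeeping concrete, here is a minimal sketch of how distance per speed category is typically computed from a sampled speed trace. The zone edges and names below are illustrative only, not a standard taxonomy, and the 10 Hz rate is an assumption:

```python
import numpy as np

DT = 0.1                                    # assumed sampling interval, s (10 Hz)
ZONE_EDGES = [0, 8, 13, 16, 20, np.inf]     # km/h; illustrative, not a standard
ZONE_NAMES = ["walk", "jog", "run", "fast run", "sprint"]

def distance_per_zone(speed_kmh):
    """Distance (m) covered in each speed zone for a sampled speed trace."""
    speed_ms = np.asarray(speed_kmh) / 3.6          # km/h -> m/s
    step_dist = speed_ms * DT                       # distance per sample, m
    zone_idx = np.digitize(speed_kmh, ZONE_EDGES) - 1
    return {name: float(step_dist[zone_idx == i].sum())
            for i, name in enumerate(ZONE_NAMES)}

# A constant 21.6 km/h (= 6 m/s) held for 10 s lands entirely in 'sprint':
trace = np.full(100, 21.6)
print(distance_per_zone(trace))   # sprint ≈ 60 m, all other zones 0
```

Note how the computation is blind to everything except instantaneous speed: the three-step change of direction mentioned above would be filed under “walk” by exactly this kind of code.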

“Is there any absolute criterion for defining high-intensity correctly?”

Fast runs are definitely associated with a greater injury risk for the athlete. This is because fast running is an infrequent type of locomotion, which requires good coordination that can be compromised under fatigue. On average, the overall distance covered by an entire team during a match at speeds > 20 km⋅h⁻¹ is on the order of 6 km (i.e. 600 m per player on average, with some players covering less than 150 m and others more than 1,000 m).

Do all players train enough, relative to their individual high-speed distance, to withstand this risky ‘high-speed’ workload? It can be concluded that the purpose of monitoring high speed is to estimate the load during both matches and training, and to keep the two in balance. If, on the contrary, the goal is to assess ‘high intensity’… then high speed, as such, is less suitable for identifying it in team sports.


As discussed above, since high speed, as such, was (is!) not enough to identify high-intensity spells, somewhere along the line acceleration was added to the performance analysis. I think the underlying idea was as follows: “similar to what was done for high speed, an acceleration threshold can be defined, and the combined assessment of high speed and high acceleration gets the job done!”. Unfortunately, acceleration alone also has several limitations. First of all, it is the derivative of speed, which ‘amplifies’ all speed variations. The four panels of figure 1 illustrate the difference between using raw speed (left panels) or filtered speed (right panels) in the calculation of acceleration. It’s pretty clear that the way acceleration is calculated will strongly influence the final result!


figure 1: Left panels, time course of raw speed during 30 s of constant speed running (orange curve, top) and acceleration calculated therefrom (green curve, bottom). Right panels, time course of filtered speed for the same exercise (blue curve, top) and acceleration calculated therefrom (green curve, bottom).

Secondly, two variables are needed to identify an acceleration event: the acceleration threshold and a minimum duration above that threshold. Both of these numbers are quite arbitrary, so much so that the scientific literature is full of different combinations of them. Is there any criterion for choosing them appropriately? I don’t think so…
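The two arbitrary variables show up directly as parameters in any event-detection routine. A minimal sketch, with threshold and minimum duration deliberately exposed as free parameters (the 3 m⋅s⁻² / 0.5 s values below are examples, not recommendations):

```python
import numpy as np

def acceleration_events(acc, dt, threshold, min_duration):
    """Return (start, end) sample indices of spells where acc stays
    above `threshold` for at least `min_duration` seconds."""
    above = np.asarray(acc) > threshold
    edges = np.diff(above.astype(int))          # rising/falling edges of the mask
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]
    min_samples = int(round(min_duration / dt))
    return [(int(s), int(e)) for s, e in zip(starts, ends) if e - s >= min_samples]

# 10 Hz trace: a 0.8 s burst above threshold, then a 0.3 s blip (too short).
acc = np.concatenate([np.zeros(10), np.full(8, 4.0), np.zeros(10),
                      np.full(3, 4.0), np.zeros(10)])
print(acceleration_events(acc, dt=0.1, threshold=3.0, min_duration=0.5))
# -> [(10, 18)]
```

Change either parameter and the event count changes with it, which is precisely why papers using different combinations are hard to compare.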

Another widespread concept when it comes to acceleration is the ‘time window’ over which acceleration is calculated. Why this should require a consensus escapes me entirely. Defining a ‘time window’ (for example, equal to 0.5 s) is only useful for assessing the average acceleration over half a second… and nothing more. It would be better to calculate acceleration (the derivative of speed over time) sample by sample; there is no reason to prefer a ‘time window’ if we trust the speed data!
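The difference between the two approaches is easy to demonstrate. In the sketch below (assumed 10 Hz trace; the sharp 0.2 s ramp is synthetic), the sample-by-sample derivative captures the true peak acceleration, while a 0.5 s window reports only the average over half a second and smears the peak:

```python
import numpy as np

DT = 0.1
# Speed (m/s): a sharp 0.2 s ramp from 0 to 5 m/s, then a plateau.
speed = np.concatenate([np.linspace(0, 5, 3), np.full(10, 5.0)])

# Sample-by-sample derivative of speed.
acc_sample = np.gradient(speed, DT)

# 0.5 s 'time window': mean speed change over 5 samples.
win = int(0.5 / DT)
acc_window = (speed[win:] - speed[:-win]) / (win * DT)

print(acc_sample.max())   # 25.0 m/s^2 -- the true peak of the ramp
print(acc_window.max())   # 10.0 m/s^2 -- the window averages the peak away
```

Both numbers are “the acceleration”, yet they differ by a factor of 2.5; the windowed value answers a different question than the instantaneous one.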

Probably, we should start from a different question: “which acceleration are we interested in?”. It could be the single-step acceleration or the average acceleration of the centre of mass. Clearly, the sampling frequency of the monitoring technology used is crucial, depending on the required information. The accuracy and sampling frequency of the device, together with the filtering techniques, will determine the result (i.e. a meaningful measured acceleration), without any need for a ‘time window’.

Finally, once the acceleration events have been determined, there is a further issue: what was the speed immediately preceding the acceleration event? It is easy to imagine how different it is to achieve the same high acceleration starting from standstill rather than from a cruising speed. Unfortunately, based on acceleration alone, there is no way to distinguish between the two, because they are evaluated in the same way even though the forces at stake can be very different.
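Extending the event-detection idea, one can at least record the speed at which each above-threshold spell begins. This is a hypothetical helper of my own construction, not a standard metric; duration filtering is omitted for brevity:

```python
import numpy as np

def events_with_start_speed(speed, acc, threshold):
    """Start index and starting speed of each spell where acc > threshold.
    (Illustrative sketch; minimum-duration filtering omitted.)"""
    above = np.asarray(acc) > threshold
    starts = np.where(np.diff(above.astype(int)) == 1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    return [(int(s), float(speed[s])) for s in starts]

DT = 0.1
# Two identical 3 m/s^2 bursts: one from standstill, one while already
# cruising at 3 m/s. Acceleration alone cannot tell them apart.
acc = np.concatenate([np.full(10, 3.0), np.zeros(20), np.full(10, 3.0), np.zeros(5)])
speed = np.concatenate([[0.0], np.cumsum(acc)[:-1] * DT])   # integrate acc

print(events_with_start_speed(speed, acc, threshold=2.5))
# first burst starts at 0 m/s, second at ~3 m/s
```

The two events are identical in the acceleration signal, yet the starting speeds (and hence the forces and metabolic demands involved) differ, which is exactly the information a pure acceleration threshold throws away.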

In part 2, we will discuss the combined analysis of speed and acceleration, the information still missing even when both parameters are considered, and how the energetic approach can bring out critical activity phases that may compromise the maintenance of a high level of performance.

Author: Cristian Osgnach
