Biomechanically or Kinesthetically Mnemonic

We have long accepted that repetition leads to physical memory, and it is also generally accepted that a large number of reps is needed to ingrain a physical memory (10,000 is the fave). But are some movements, or sequences of movements, more easily remembered or more swiftly ingrained than others? In other words, are there sets of movements that are somehow biomechanically or kinesthetically mnemonic? Is this what good drill progressions do: ingrain desired movements more swiftly?

Moreover, are some movements “catchy”? Like a song that you just can’t get out of your head, are there movement patterns that you just can’t get out of your body? Does it follow, then, that like catchy phrases these sets of movements are neither intrinsically good nor intrinsically bad, just memorable, indeed “infectious”? And just as a catchy tune full of misogynist lyrics is to be shunned in a society that values equality, should some drills or workouts packed with infectious but undesirable mnemonic movements be shunned as well?

mnemonic (noun): a device such as a pattern of letters, ideas, or associations that assists in remembering something.
mnemonic (adjective): aiding or designed to aid the memory.

catchy (of a tune or phrase): instantly appealing and memorable.
“a catchy recruiting slogan”
synonyms: memorable, unforgettable, haunting; appealing, popular; singable, melodious, tuneful, foot-tapping
“I’m not sure what their product is, but they’ve got a catchy little jingle”


How Fast Do You Need to Be??

Masters athletes, and new-to-swimming athletes in particular, often think about what kind of paces they can hold in training in terms of sets of 100s, sets of 200s, etc., with the goal of improving their time for swims of a mile or more.  When seeking greater speed in those sets they tend to ask what technical improvements they can make, and they try to make those improvements in the course of their workouts, doing 100s, 200s, etc. while “thinking about their stroke”.  But as you practice a new technique, your body is inevitably weak at repeating that technique because it has not practiced it much.  The technique will therefore tend to degrade as you swim 25, 50, 75+ yards continuously.  In my experience, this degradation is so pronounced that much of the desired technique change disappears after roughly 50 yards, and the “new” stroke becomes unrecognizable after 100.

If we accept this as a truism for many athletes, shouldn’t we instead focus on how fast they can swim 25 yards at a pop?  And, to get “stronger”, focus on how fast we can do 25 yards, and how many 25-yard repeats we can do?  For some context, I wanted to look at just how fast one needs to be in an all-out 25 in order to reach a given speed in a mile.  The two current American record holders in the 1,650 yard freestyle are as follows:

  • Women’s 1650 record holder – Katie Ledecky, 15:15, averaging 55.5 / 100, or 13.9 seconds per 25.
  • Men’s 1650 record holder – Martin Grodzki, 14:24, averaging 52.4 / 100, or 13.1 seconds per 25.

Let’s estimate that Katie Ledecky can swim a 25 in 11.0 seconds, and that Grodzki can go 10.0 for his 25 sprint.  If we take these as reasonable estimates of their top-speed capabilities (if someone has exact knowledge, feel free to share), then for their record swims they averaged approximately 3 seconds per 25 slower than their all-out speed.  Three seconds per 25 at these speeds means that their endurance time per 25 is about 130% of their sprint lap time.  Let’s say they can average an additional 1 second per 25 slower for a 2+ mile swim, which would equate to a lap time of roughly 140% of their all-out sprint.

So… if you are physiologically suited for a 15-20 minute event, and trained to the limit of your endurance capabilities, then you should be able to average approximately 130% of your all-out 25 sprint time per 25, and about 140% per 25 in a multi-mile swim.  What does this mean in terms of performance in an open water endurance race?  Based on the paces outlined here, one needs to be capable of averaging about 1:13 per 100 in a threshold pool set to break 1 hour in a non-wetsuit Ironman swim.  So, take this 1:13 per 100, which is 18.25 seconds per 25, as your goal speed.  18.25 is 140% of roughly 13 seconds, so to break an hour in a non-wetsuit Ironman we need a sprint speed of about 13 seconds for a 25.  Other benchmark speeds are as follows:

  • If you can go 16 seconds in a sprint 25, then your max endurance rate would be 1.4 * 16 = 22.4 seconds per 25, which translates to an Ironman swim of about 1 hour 11 minutes.
  • If you can go 18 seconds in a 25, then your max endurance rate would be 1.4 * 18 = 25.2 seconds per 25, which translates to an Ironman swim of about 1 hour 21 minutes.
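The sprint-to-endurance arithmetic above can be sketched in a few lines. This is a toy calculation under the article’s 140% assumption; the function names are mine, not from any swim-training library:

```python
def endurance_pace_per_25(sprint_25_s, factor=1.4):
    """Sustainable multi-mile pace per 25 yd, assumed ~140% of all-out sprint time."""
    return factor * sprint_25_s

def required_sprint_25(goal_pace_per_25_s, factor=1.4):
    """All-out sprint 25 needed to make a goal endurance pace attainable."""
    return goal_pace_per_25_s / factor

# a 16-second sprint 25 caps endurance pace at 1.4 * 16 = 22.4 s per 25
print(endurance_pace_per_25(16.0))  # 22.4
# to hold 18.25 s per 25 (1:13 per 100), you need a sprint 25 of just over 13 s
print(required_sprint_25(18.25))
```

The same scaling applies to the Ledecky/Grodzki estimates: an 11.0 or 10.0 second sprint 25 supports a multi-mile pace of roughly 15.4 or 14.0 seconds per 25.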

Thoughts on the Catch, Early Vertical Forearm and Straight Arm

High Elbows, Low Hands, Fast Vs. Slow
I was at the pool yesterday watching one of my colleagues work with a swimmer who was struggling mightily to pull water.  The swimmer demonstrated what folks often call a “dropped elbow”.  When describing fast freestylers we often note that they appear to have “high elbows”, meaning that their elbows are “high” relative to the surface of the water.  What we don’t mention as often is that a corollary of the “high” elbow is a “low” or “deep” hand.  With the elbow at a shallow location and the hand at a deep location, the result is what we often describe as a “Vertical Forearm”.  Figure 1 shows a really great demonstration of this (courtesy of the folks at

Figure 1: Rebecca Adlington’s left arm at the “catch” portion of the freestyle stroke. Via

However, not all fast swimmers have this exaggerated an elbow bend, and indeed, scientists are now beginning to say that there is evidence to suggest that a straight-arm pull in the underwater portion of the stroke may actually be more propulsive.  Glen Mills over at has a nice article summarizing these developments –  Of course, scientific theory and reality are not always in lock step, but my own personal experience suggests that there is something to this “deep hands” idea.  A couple of years ago I wrote a blog post describing my impressions after watching a particularly talented young swimmer – I was struck by how his hands seemed to dart into the water and reach their deepest point very swiftly (see here

Some Hypotheses
I was wondering about this whole topic yesterday because, while the scientists predict that a straight arm should be faster, experience tells us that many, indeed probably a majority, of fast swimmers tend towards the high-elbow EVF model rather than the straight arm underwater (Janet Evans being a notable exception).  So, what commonalities do they have, and/or what deficiencies in the straight arm does the bent arm make up for?  Here are some hypotheses:

  • Rigidity – An effective paddle is rigid (in contrast to a hydrofoil or wing, which relies upon some measure of pliability).  That’s why oars are not flexible.  An EVF, by rotating the arm outward, helps to lock the elbow, making the arm move as a single, solid unit through the water.  A “dropped” elbow, by contrast, behaves more like a wet noodle in the water, with the hand trailing along behind.
  • Rotational Stability – The high elbow is actually a result of an internally rotated shoulder, which creates a tension that helps to prevent the rotation of your paddle, enabling you to maintain a hand that is perpendicular to the direction of the pull – in other words, it keeps your hand from turning to the side and “slicing” the water.

So, in sum, I will suggest that there is absolutely nothing wrong with a straight-arm pull.  There is also nothing inherently virtuous about a bent-arm pull.  The commonality between successful swimmers of either ilk is that they maintain a rigid and rotationally stable paddle – how they manage that is a matter of individuality.  However, starting by learning the straight arm might be a really good way to develop basic feel for the water, since it is a hell of a lot simpler, less abstract, and easier to achieve than EVF.

Detecting Autologous Transfusions: Time-Series Metrics & Bio-Passport

A recent blog post by @VeloClinic (What Does 0 Vuelta Positives Mean?) pointed out some research that bio-passport experts (some of them members of the UCI bio-passport panel) have been doing to improve the certainty of their detection methods (Morkeberg et al.).  The paper in question evaluated 3 algorithms for automated detection/screening of an athlete’s blood values to find evidence of re-transfusion of stored blood – the subjects had either 1 or 3 bags of blood re-transfused (stored either refrigerated or frozen).  There are two very important facets to what they are doing (in my opinion):

  1. Time Series – These methods are “time series” methods in that they evaluate two successive readings, taken 7 days apart, for a single athlete to detect a re-transfusion event.
  2. Unsupervised – The methods were “unsupervised”, that is, they ran without human intervention to give a “positive” or “negative” reading.

About the “unsupervised” algorithms: an unsupervised algorithm is not a replacement for a panel of experts, but rather a supplement to one.  Thus, the results of a particular metric or set of metrics should not be considered the state of the science, but rather an interesting glimpse into the ins and outs of the analytical tools that supplement the minds of the bio-passport panel.  This approach is also interesting because it takes the basic tools, hemoglobin (Hb) and the OFF-score, and adds the factor of time, comparing successive measurements.
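For reference, the OFF-score combines those two markers: it is commonly given as hemoglobin (in g/L) minus 60 times the square root of the reticulocyte percentage, so a high Hb paired with suppressed reticulocytes drives the score up. A minimal sketch (the numbers in the example are illustrative, not from the paper):

```python
import math

def off_score(hb_g_per_l, retic_pct):
    """OFF-hr score: Hb [g/L] minus 60 * sqrt(reticulocyte %).

    High Hb with suppressed reticulocyte production raises the score,
    the classic signature of a recent blood re-transfusion.
    """
    return hb_g_per_l - 60.0 * math.sqrt(retic_pct)

print(off_score(150.0, 1.0))  # 90.0
```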

The results of this study showed that two of the three methods had low rates of catching doping athletes (some with only a 10-15% chance of catching dopers), but I think this paper shows some very encouraging signs, particularly in the area of the “intuition” of the passport panelists.  The most successful method was the one based on the “Best Professional Judgement” of the panelists, and it saw very high rates of true positives (so if you’re a doper and you think you have a 90% chance of evading detection – think again).

Hits & Misses
In the Morkeberg study, the three methods evaluated were called the “3G”, “AP” and “Absolutist” methods.  The “Absolutist” method was based on the Best Professional Judgement of the scientists involved – the scientists set thresholds for changes between two successive values of OFF-score, Hb, or other markers that they thought indicated re-transfusion.  This “Absolutist” algorithm was the most sensitive method: it had an overall detection rate of 80% based on the Hb score, and 67% on the OFF-score, meaning there was only a 1 in 5 chance that an athlete who WAS transfusing would avoid being flagged by this test.  The other metrics (3G and AP) had a very low level of false positives (0% for all tests except the Hb Mass test, which had a single false positive, < 1%), but similarly had low levels of success, detecting between 11% and 32% of actual transfusions.  Figures 1 & 2 show the results of the Absolutist method applied to 5 bio-passport values taken in a single month for a single athlete.  In these charts, if the lines cross, the reading is flagged for further review.
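The threshold-on-successive-readings idea is simple to express in code. A minimal sketch of the screening step, where the threshold value is a hypothetical placeholder, not a cutoff from the Morkeberg paper:

```python
def screen_series(readings, max_rise):
    """Flag successive readings whose rise exceeds a set threshold.

    readings: chronological marker values (e.g. [Hb] in g/dL), ~7 days apart.
    max_rise: the largest rise judged plausible without re-transfusion
              (hypothetical value; the study's actual cutoffs are not shown here).
    Returns indices of readings flagged for further expert review.
    """
    return [i for i in range(1, len(readings))
            if readings[i] - readings[i - 1] > max_rise]

# a jump from 14.2 to 15.6 g/dL between two tests gets flagged here
print(screen_series([14.5, 14.3, 14.2, 15.6, 15.4], max_rise=1.0))  # [3]
```

In a “screen and detect” workflow, a flagged index would simply elevate the profile for review by the panel, not constitute an accusation.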


Figure 1: Plot of actual [Hb] (blue) and the [Hb] that would be indicative of possible blood re-transfusion based on the previously measured [Hb] value. The profile in this example shows a single positive screening value for [Hb] change at the end of the time series. This athlete would be referred for further examination under a “screen” and “detect” methodology.


Figure 2: Plot of actual OFF-score (blue) and the OFF-score that would be indicative of possible blood re-transfusion based on the previously measured OFF-score value. The profile in this example shows no positive screening values for OFF-score change.

Screening vs. Detecting
The different algorithms proposed had varying detection and false-positive rates, with overall detection rates ranging between 10-80% and false-positive rates between 0-20%.  Given this wide range, the most likely outcome of this research would be to use each of these different tests according to its strengths and weaknesses.  Some tests would be used for “screening”, that is, to flag a set of results for further review, while other tests would be used for “detecting”, or accusing an individual of doping – these two activities require different tests.  For a screening test, you would want a test that is hyper-sensitive, that is, one with a very small likelihood of missing a doping act, while accepting that a substantial number of false positives might result that would later be reviewed further and discarded.  For a “detecting” test, you would want one that was very conservative, that is, unlikely to produce a false positive, since, as we have seen in the Jonathan Tiernan-Locke case, once you are elevated to a formal bio-passport review, the likelihood of a leak and the taint of scandal make a permanent mark on an athlete’s career in the public eye.

Screening With the Absolutist Method
Since the Absolutist Hb metric (Ab-Hb) and Absolutist OFF-score metric (Ab-OFF) detected 80% and 67% of transfusions respectively (100% and 43% for the subjects who had only 1 bag of refrigerated blood re-transfused), they would make a good screening method.  The Absolutist metrics are sensitive rather than conservative: they added 20% and 5% false positives on top of the 80% and 67% actual positives that they caught.  Figures 1 & 2 show the Absolutist method applied to 5 bio-passport values for one athlete taken in a single month.  These values would be flagged as positive for [Hb] (Figure 1), but not for OFF-score (Figure 2).  Using these as a screening method, this athlete’s profile would be elevated for further examination.

One logical outcome of this type of research would be to have the results during a grand tour screened with this Absolutist method, then reviewed further and subjected to the other methods.

Athlete’s Bio-Passport: Fluctuations in Reticulocyte %

In a previous blog post I showed some excerpts from a large study of professional racers riding the GiroBio (a 10-day UCI under-23 stage race), highlighting trends in hemoglobin (Hb) and hematocrit, noting that Hb tended to drop over the first 5-7 days of the race but then trended up at the end of the 10-day period in an overwhelming majority of the athletes sampled.  Though trending upwards at 10 days, the final levels of Hb at the end of those two races were generally slightly lower than at the beginning, though a portion of the athletes ended the race at a higher Hb than they had begun.  I have gotten questions about whether or not reticulocyte percentage (R%) behaves in a similar fashion.  Below is the R% graph from that study, along with some thoughts on what the data may or may not mean.

Increases in Reticulocyte Percentage During a 10-day Stage Race
The graph in Figure 1 shows that for both editions of the GiroBio, the initial response was suppression of R% in the population as a whole after 3-4 days, followed by increases in R% by the end of 10 days, with a significant portion of the sampled population experiencing a net increase in R% over baseline by the end of the 10-day race.  This indicates that red blood cell production was stimulated in most of the athletes during the latter 5 days of the race.  While this trend is quite similar to that seen in the Hb charts, it differs in that here the mean value of the population as a whole showed a net increase in R% from the beginning to the end of the race.  Whether this stimulation of RBC production was caused by fair or foul means is unclear – that is, was the body naturally over-compensating for the load it was subjected to, or did the majority of the athletes sampled use EPO or some other erythropoiesis-inducing chemical?


Athlete’s Bio-Passport: Horner’s Histograms

Figure 1: A sample normal distribution or “bell curve”. The “fat part of the curve” is represented by the area within +/- 1 standard deviation. From

Histograms are a graphical technique that allows us to see the shape of a distribution of data.  Any data set is assumed to have some distribution; for example, a “normal” distribution is the name for the shape we describe as a “bell curve” (see Figure 1).  This is the distribution in which many natural phenomena (such as human characteristics like height and weight) appear: the majority of specimens are somewhere around the median value, and there are a few individuals on either side of the median that represent the extremes of the population (sometimes called the “tails”).  It is reasonable to assume that something like hemoglobin (Hb) is normally distributed throughout a population of humans, with a handful of individuals at extraordinarily high levels, a handful at extraordinarily low levels, and most folks clustered around the median value.
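The “fat part of the curve” within one standard deviation of the mean is easy to check numerically. This sketch draws a synthetic, normally distributed “population” (the mean and SD here are arbitrary, chosen only for illustration) and measures the fraction that falls within ±1 SD:

```python
import random
import statistics

random.seed(1)
# synthetic normally distributed values (arbitrary mean and SD for illustration)
values = [random.gauss(14.6, 0.6) for _ in range(100_000)]

mu = statistics.fmean(values)
sd = statistics.stdev(values)
# fraction of the population lying within +/- 1 standard deviation of the mean
within = sum(mu - sd <= v <= mu + sd for v in values) / len(values)
print(within)  # ~0.68 for any normal distribution
```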


Figure 2: Histogram showing the distribution of hemoglobin (Hb) values for Chris Horner’s bio-passport data collected between 2008-2013.

Horner’s Distributions for Hemoglobin & Reticulocytes
Figures 2 & 3 show the histogram distributions of Chris Horner’s bio-passport values for Hb (Figure 2) and R% (Figure 3).  It is quite interesting to note that the Hb plot seems to conform to a “normal” distribution quite nicely, with values clustered about a median of 14.6.  What this graph describes is that of the 39 Hb values recorded in the Horner bio-passport dataset, 17 of those values (nearly half) fall between 14.4-14.8, with the aforementioned median value of 14.6.  Only 5 values (about 13%) fall below 13.6 or above 15.6.  But “normal distribution” and “looks normal to me” are two entirely different ideas – “looks normal” would require a sense of what we “should” expect if nothing was amiss, or if the athlete were not trying to manipulate their values (through legal means or otherwise).


Figure 3: Histogram showing the distribution of reticulocyte percentage (R%) values for Chris Horner’s bio-passport data.

Horner’s R% has a much different distribution, with the median value of 0.7 NOT coinciding with the peak.  There appears to be one peak of R% values at about 0.5 and another at about 0.8 – we might describe this as a “bi-modal” or “two-peak” distribution.  However, this is not to say that an individual’s Hb or R% should be normally distributed.  I honestly don’t know, though it might be reasonable to speculate that the Hb level would be more likely to be normal than the R%, since the R% is a kind of “2nd order” function: it changes in reaction to fluctuations in the Hb level (actually, it responds to fluctuations in O2 levels, which are a function of Hb).

Power Histograms: A Visual Comparison of Two Seasons

A few years ago a good friend of mine, David Luscan, had a fairly remarkable season time-trialing his bike.  In a nutshell, he was a long-time triathlete who was experiencing a 2-year plateau in cycling power, and having thoroughly exhausted the “20-minute-intervals-to-a-more-powerful-you” approach, decided to try some high-volume cycling.  A month of 20-hour-per-week training on the bike, followed by two months of what might be called a “taper” or “sharpening period”, left him 40 watts stronger and 4 minutes faster for a 40k TT (the full story is detailed here: ).  The following year he resumed the use of focused threshold intervals, and while he was more or less stagnant in terms of power gains, he spent fewer hours on the bike, and ran and swam to boot.

What does it all mean?  Does it mean that it may take a lot of volume to break through a hard-earned plateau (2010), but that you can maintain those gains with a lower-volume, higher-intensity approach in subsequent years?  Or, “it takes a lot to improve but less to maintain”?  I don’t know, but here is what it looks like.  The figures below show his breakthrough season (2010) on the top (Figure 1), and his subsequent “no stronger but less training” season on the bottom (Figure 2).  Histograms like this tell us how a given set of things is distributed.  In this case we have a plot of all the time Dave spent cycling in 2010 and 2011, categorized according to how many watts he was putting out at the time.  The wattages are broken into 5-watt ranges (the x-axis), and the percent of time that he spent in each watt range is shown on the y-axis.  For example, Figure 1 tells us that Dave spent 4% of his total time in 2010 riding between 205-210 watts, while in 2011 only 1.5% of his time was spent between 205-210 watts.  If you add up a block of these you can get more information: in 2010, Figure 1 shows that Dave spent about 30% of his time riding between 200-250 watts, whereas he spent about half that amount, roughly 15%, riding at 200-250 watts during 2011.  These graphs are a stock output from the Training Peaks software.
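Binning like this is straightforward to reproduce from raw power data. A minimal sketch, assuming one power sample per unit of time (Training Peaks’ actual binning logic may differ; the function name and sample values are mine):

```python
from collections import Counter

def power_histogram(watt_samples, bin_width=5):
    """Percent of total time in each wattage bin, assuming equally spaced samples.

    Returns {bin_lower_bound: percent_of_time}, e.g. the 200 key covers 200-205 W.
    """
    counts = Counter(int(w // bin_width) * bin_width for w in watt_samples)
    total = len(watt_samples)
    return {lo: 100.0 * n / total for lo, n in sorted(counts.items())}

# four samples: two land in the 200-205 W bin, one each in 205-210 and 210-215
print(power_histogram([201, 203, 207, 213]))  # {200: 50.0, 205: 25.0, 210: 25.0}
```

Summing values across a block of adjacent keys gives the “30% of time between 200-250 watts” style of figure quoted above.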


Figure 1: Histogram showing percent of training time spent at a range of wattage levels. Plot is for David Luscan during his 2010 season where he biked exclusively.  Plots generated with Training Peaks –


Figure 2: Histogram showing percent of training time spent at a range of wattage levels. Plot is for David Luscan during his 2011 season, where he swam, biked, and ran.  Plots generated with Training Peaks –

Athletic Algorithms
Exploring the Body & Mind Through Sport