The Science and Bullshit of Lifting – Part I
Around 1,500 words; estimated reading time: 8 min.
I’ve been living and breathing philosophy since 1993.
Since 2013, I’ve been breathing chalk, too. But it was only recently that I realized how powerful the mix could be.
Philosophy is incredibly useful for telling science apart from bullshit. This is priceless at the gym, because the majority of fitness-related YouTube, Instagram, and Facebook accounts that peddle ‘sports science’ to lifters are just pushing plain bullshit.
How would I know?
Short answer: I’m a philosopher and damn good at what I do.
The long answer is hardly longer. I’m a logician trained in the philosophy of science and an expert in information-seeking by questioning. When a conference needs a keynote speaker on the topic, I’m among the top picks. Granted, there aren’t many of us, but the top of a short list is still the top.
I also put my money where my expertise is, and I know where to seek information when I need it and how to evaluate sources. Finally, I know science when I see it, and the same goes for bullshit.
Now, back to today’s topic. It’s a bit more technical than my usual posts, so I’ll have to split the discussion over a series. In Part I (this post), I look at whether there is a general method to tell science from pseudoscience, and whether there is a general method to identify bullshit. [Spoiler alert: the answers are no and yes, respectively.]
In Part II, I’ll look into some features of science that bullshitters can easily exploit. In Part III, I’ll point to some unexpected directions one should look to back one’s lifting with science. [UPDATE: Part III has been scrapped, replaced by the ‘Old School Strength’ Series beginning here]
Also, sometime around Part II and onward, I’ll point a few fingers.
But now, let’s get to business.
Demarcation & Underdetermination
Telling apart science from pseudoscience is called, in philosophical jargon, the demarcation problem.
The most recent attempt to establish a demarcation between science and pseudoscience dates back to the 1930s. Some philosophers, alarmed by the public fascination with the pseudoscience associated with Stalinism and Nazism, tried to use logic and mathematics to identify the boundaries between science and crap.
The main problem is the relation between candidate general truths, like scientific laws, and data. Scientific laws are supposed to hold in all circumstances. But they are formulated in response to a limited number of data points (sometimes as generalizations, sometimes as explanatory hypotheses, but the difference does not matter for this discussion).
Take, for example, a claim like: “the iliopsoas muscle is a hip flexor”. It holds in principle for all human beings past, present and future (as long as they have a psoas major and an iliacus). And it is based on a limited number of observations: dissections of a finite number of corpses, palpation of a finite number of healthy subjects, etc.
So, it’s only a candidate general truth of physiology backed by a limited number of votes. Scientists may have missed something in the mass of unobserved facts — or even in the observed ones — that would count against this candidate. Obviously, the same holds for all scientific laws, not only generalities about the iliopsoas.
Let’s get back to our 1930s philosophers. One of them, the Austrian philosopher and logician Rudolf Carnap, had presented a method applicable to claims devoid of any support from data whatsoever (and had used it to debunk philosophical blabber from Nazi philosopher Martin Heidegger).
Still missing was a generalization of this method that would have allowed one to calculate the degree of support that some data grants to a candidate general truth.
With such a formula, it would have been possible to tell when a candidate general truth enjoyed a vote of confidence or when it had been voted out by available facts (like the existence of an ‘iliopsoas’, debunked in [McGill 2007:61]).
Unfortunately, it turned out that it is never possible to tell exactly how much facts support a candidate general truth because there is no such thing as a ‘naked fact’. The problem is known as the underdetermination problem.
An example of underdetermination: aaaaabs!
Believe it or not, the existence of a functional difference between ‘upper abs’ and ‘lower abs’ is somewhat of a scientific controversy.
At first, settling it seems simple: stick electrodes in the upper and lower parts of the rectus abdominis of a bunch of subjects and tell them to do crunches. If the upper part fires more than the lower part, you have your answer.
And that’s precisely what seems to happen.
So, in all appearances, the upper-lower abs difference is a scientific fact.
But there’s a snag: electromyogram readings must be ‘normalized’, that is, corrected for differences in the local conductivity of tissues, etc. When you apply the correction, any significant difference disappears.
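For readers who want to see what normalization amounts to in practice, here is a minimal sketch. It uses normalization to %MVC (maximum voluntary contraction), one common method, though not necessarily the one used in the studies at issue, and all the numbers are invented for illustration:

```python
def normalize_emg(raw_amplitude, mvc_amplitude):
    """Express a raw EMG reading as a percentage of the reading
    recorded at the same site during a maximum voluntary contraction."""
    return 100.0 * raw_amplitude / mvc_amplitude

# Hypothetical raw readings during a crunch (millivolts):
upper_raw, lower_raw = 0.48, 0.30   # upper abs appear to fire harder...

# ...but the two sites differ in tissue conductivity, electrode
# placement, etc., which also shows up in their MVC readings:
upper_mvc, lower_mvc = 0.96, 0.60

upper_norm = normalize_emg(upper_raw, upper_mvc)  # 50.0 %MVC
lower_norm = normalize_emg(lower_raw, lower_mvc)  # 50.0 %MVC
# After normalization, the apparent upper/lower difference vanishes.
```

The point of the sketch is that the ‘fact’ you end up with depends entirely on which reference values you divide by, and picking those is a theoretical choice.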
But what tells you which correction to apply? Anatomical theory, right?
Which is a collection of candidate general truths. Yup, you read that right. The candidate general truth that “there is no functional difference between lower and upper abs” is supported by facts properly corrected by other candidate general truths, which are themselves supported, no doubt, by facts, properly corrected, etc.
Science and Bullshit
In principle, a candidate general truth cannot suffer exceptions. Negative factual evidence against it should vote it out.
And yet, the most dramatic consequence of underdetermination is that, in practice, it is always possible to put the blame on the evidence. This is why, in any given science, two different textbooks or studies might contain claims that contradict one another.
It’s a feature of science, not a bug.
Every particular science has its own way of living with it. Bullshitters, however, are experts at exploiting this feature to their advantage and presenting it as a bug. Which brings us to bullshit.
In 1986, Princeton philosopher Harry Frankfurt wrote a remarkable article titled On Bullshit. The essay was published in book form in 2005 and stayed on the New York Times bestseller list for 27 weeks.
Maybe Frankfurt was dead-serious. Maybe he was trolling his colleagues who write and publish about topics that would never be considered serious in any other discipline. Whatever his reasons, his characterization of bullshitting has since become a reference in the philosophical literature on lying and deception. Bullshitting is indeed, according to Frankfurt, close kin to lying, but a bullshitter may very well stop short of lying outright if it serves his purpose:
“[a] bullshitter may not deceive us, or even intend to do so, either about the facts or about what he takes the facts to be. What he does necessarily attempt to deceive us about is his enterprise. His only indispensably distinctive characteristic is that in a certain way he misrepresents what he is up to.”
Harry Frankfurt, On Bullshit
Consequently, there’s a simple test to determine whether someone is bullshitting you: answer the question, “Is that guy misrepresenting what he is up to?”
Answering this question takes a little more time and resources than most people are willing to spend, but it can be fun.
Wrapping up (for now): Scientific Bullshit
Pop-sci is the main source of scientific information in the fitness industry in general.
In a nutshell, it boils down to this: most lifting-related pop-sci is scientific bullshit, because the pop-sci guys misrepresent their intentions. They do not care whether you understand the science. If anything, it’s better if you don’t, because then you’ll need them to do it for you.
In order to make a living, exercise pop-scientists sell the same things that countless others are selling too (programs, nutrition plans, supplements, advertising space, etc.). They need to make them look special. Being ‘science-based’ is just a selling point.
Exercise pop-scientists don’t even have to lie about the science. They nitpick about details, hand-wave subtleties, and never, ever talk about real methodological issues such as demarcation and underdetermination.
Keeping their audience slightly overwhelmed convinces them that the full story would go over their heads. So when some real science, like conflicting studies or conflicting information in textbooks, confuses them, the audience looks up to them.
If they really cared about science, they would promote science textbooks and explain how to deal with their inconsistencies. Pretty much what I do.
Now, in all fairness, I have some role models. To name a few, Matt Perryman (Squat Every Day) and Alex Viada (The Hybrid Athlete). Perryman doesn’t bullshit you because he has pretty much left the industry and the sales of his ebook are probably his beer money. Viada has pretty much a niche market, but he is good enough to sell his stuff without bullshitting about the science.
Not everyone is that honest. Or that good.
That’s all for Part I. But I won’t leave you without some reading recommendations, right after a bit of self-promotion.
OpenStax, Anatomy and Physiology, retrieved June 3, 2017.
McGill, 2016: Low Back Disorders: Evidence-Based Prevention and Rehabilitation, 3rd ed., Human Kinetics. [Note: my references are to the 2007 2nd ed.; I haven’t received my copy of the 3rd yet and will update them when I have. Also, McGill is an advanced textbook and sometimes contradicts the above; when in doubt, trust McGill.]
Zatsiorsky & Kraemer, 2006: Science and Practice of Strength Training, 2nd ed., Human Kinetics.