*by Shawn Burke, Ph.D.*

PREFACE

*In this installment of The Science of Paddling we take a brief diversion from paddling, and explore what “science” means, and how that can inform one’s reading of these articles. So kick back, grab your favorite adult beverage, relax and float downstream.*

INTRODUCTION

When I was an undergraduate – you know; back when dinosaurs roamed the Earth – I presented my senior thesis before members of our department faculty. The presentation was titled, “An Acoustical Measurement of Fundamental Parameters in Porous Sound Absorbers.” Sounds like a real page-turner, eh? Without getting into the details, the thesis combined a mathematical model of how sound travels through porous materials (like, say, fiberglass) with the design, construction, and operation of an experiment, the data from which could be used to determine the physical constants in the mathematical model.

The presentation went great; I had fun giving it. Then came the Q&A part. The first question went to Prof. Earl Dowell who asked, in essence, “That’s all fine and dandy. But *where’s the science?*”

Now aside from being an excellent and insightful teacher, noted researcher, and a generous and patient person, Prof. Dowell is also the founder of the field known as aeroelasticity and a member of the National Academy of Engineering. It wasn’t quite like having Isaac Newton stand up and ask a pointed question, but in my world it was pretty close. In response I babbled on about my approach, the model, the results found, and the implications thereof. I pointed out one unexpected result from the experiments (an acoustical anisotropy), and that seemed to suffice. Afterwards, my thesis advisor pulled me aside and noted, “That was an *excellent* question from Prof. Dowell. You handled it pretty well, but…” I won a couple of awards for my research. It was only later that I realized a Zen master had confronted me with a koan, and instead of achieving satori I remained mired in samsara.

So why the stroll down memory lane? Does this have something to do with paddling? Well, no; this doesn’t have anything to do with paddling. It does, however, have something to do with The Science of Paddling (see how meta this is getting?). In particular, where is the science in my articles? Is there any? (Hint: Yes.) And is there something we can draw from science and the scientific method itself to learn how to most profitably read these articles? (Another hint: Yes. Question everything!)

This article was prompted by some of the emails I’ve received about TSOP, where the emailers were trying to see more in these articles than is really there. Or in some cases, less. So if you feel like pairing a little philosophy with your paddling, read on! We’ll start… with science.

SCIENCE!

So what is science? “Science… is the organized, systematic enterprise that gathers knowledge about the world and condenses the knowledge into testable laws and principles.” [E.O. Wilson, *Consilience*, p. 58.] Features of science include repeatability of results, the ability to abstract results into theories, and perhaps most important, *falsifiability*, i.e., a scientific theory must make predictions that can, at least in principle, be shown false using universally accepted methods. Further, a scientific result can be true within a certain context (anti-CD28 monoclonal antibody TGN1412 successfully stimulates T cells in monkeys), and false outside of it (anti-CD28 monoclonal antibody TGN1412 triggers a potentially fatal immune response in humans). Consequently, science must also bound the range of applicability of its results. These bounds generally arise from underlying assumptions or limitations regarding a theory, test result, or test method.

Now most Science of Paddling articles are seasoned with a bit of mathematics, not just science. In contrast to science, mathematics is not concerned with empirical validation through observations made in the physical world. Mathematics is a collection of concepts that are tested – in other words, proven true or false – using the tools of logic. Some of you may recall doing “proofs” in a high school geometry course. You can thank the ancient Greek geometer Euclid for that! A mathematical proposition – which can also be thought of as a theory – can be proven true or false in light of certain agreed-upon “facts,” called axioms, or in light of previous mathematics that has withstood similar rigorous analysis. In that way mathematics is falsifiable, too.

*Applied* mathematics is used to build and analyze models related to observable reality. For example, around 600 BC Thales of Miletus developed a geometric tool called “shadow reckoning” to calculate the height of buildings and towers, as well as the distance of ships from a harbor, using a method we now refer to as similar triangles. It’s fairly easy to show that Thales’ work – and later, Euclid’s more encompassing geometry – has a correspondence to objects in the real world. Empirical models based on geometry can be constructed with rulers, dividers, and lengths of string. You may have done this yourself, bisecting a line segment with another perpendicular line drawn through intersecting arcs having their origin at the segment’s endpoints; dividing a circle’s circumference into sixths using a divider set to the circle’s radius; and so forth. These models allow us to demonstrate that many geometric theories are true via construction: you draw a picture, you make a measurement, and can see with your own eyes that shadow reckoning is true. Geometric propositions are falsifiable.
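As a toy illustration of shadow reckoning (the function name and numbers below are mine, purely illustrative, not anything Thales wrote down):

```python
def estimate_height(ref_height, ref_shadow, target_shadow):
    """Thales' shadow reckoning via similar triangles: the sun is so far
    away that a stick and a tower cast shadows at the same angle, so
    target_height / target_shadow = ref_height / ref_shadow."""
    return ref_height * target_shadow / ref_shadow

# A 1 m stick casts a 2 m shadow while a nearby tower casts a 40 m shadow:
print(estimate_height(1.0, 2.0, 40.0))  # 20.0 (meters)
```

And the result is falsifiable in exactly the sense described above: pace off the tower's shadow, measure the tower by other means, and check.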

Similarly, we can understand that arithmetic works based on empirical observation: if I place two apples into a basket that already contains three apples, then the resulting number of apples in the basket is five. And while some of you may shudder when you remember algebra, it’s also easy to show how algebraic word problems like, “If a train leaves New York traveling west at 50 mph, and a train leaves Los Angeles traveling east at 60 mph, which train will reach Chicago first?” directly correspond to empirically testable results.
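How directly? The train question as posed can’t be answered without the distances involved, but grant some approximate, illustrative rail distances (my assumption, not part of the original riddle) and the algebra settles it:

```python
# Illustrative distances to Chicago (assumed for this sketch, in miles):
NY_TO_CHICAGO = 790.0
LA_TO_CHICAGO = 2015.0

def hours_to_chicago(distance_miles, speed_mph):
    # time = distance / speed -- the algebra behind the word problem
    return distance_miles / speed_mph

ny_hours = hours_to_chicago(NY_TO_CHICAGO, 50.0)   # ~15.8 hours
la_hours = hours_to_chicago(LA_TO_CHICAGO, 60.0)   # ~33.6 hours
print("New York train arrives first" if ny_hours < la_hours
      else "Los Angeles train arrives first")
```

The prediction is empirically testable: buy two tickets and a stopwatch.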

But things got weird when Newton came along and developed the foundations of classical physics. Isaac Newton and Gottfried Leibniz – at the same time, and independently – developed calculus, the branch of mathematics that among other things is used to model *changes* in physical quantities like position, speed, and acceleration. Calculus is built on the foundations of algebra and geometry (the slopes of lines, tangents, areas, and the like), plus a couple of revolutionary new ways of viewing the world (limits and differentials). Newton employed the new calculus to develop the branch of physics called mechanics, which described the relationship among force, mass, momentum, acceleration, and velocity. And yet… how obvious is it that gravity is an acceleration, and not a force? (You can thank Newton for that one.) Despite these less-intuitive mathematical underpinnings, classical mechanics derives its power from the ability to model and predict things in the physical world: The trajectory of a bullet, how to launch a spacecraft into orbit, or the fastest way to paddle across a river in current.
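To make “modeling *changes*” concrete, here is a minimal sketch (the numbers are mine) of the limit definition at work: numerically differentiate a falling object’s position and recover its velocity, just as the calculus predicts.

```python
# Position of an object falling from rest: s(t) = 0.5 * g * t**2.
# Calculus says velocity is the derivative of position: v(t) = g * t.
g = 9.81  # gravitational acceleration, m/s^2

def s(t):
    return 0.5 * g * t**2

def velocity_numeric(t, h=1e-6):
    # Central-difference approximation to the limit definition
    # of the derivative: v(t) ~ [s(t + h) - s(t - h)] / (2h).
    return (s(t + h) - s(t - h)) / (2 * h)

print(velocity_numeric(1.0))  # ~9.81, matching the analytic v(1) = g * 1
```

Shrink `h` and the approximation converges on the analytic answer; that convergence under a limit is precisely the “revolutionary new way of viewing the world” Newton and Leibniz introduced.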

A proliferation of other disciplines followed, including fluid mechanics, acoustics, thermodynamics, heat transfer, and (thanks to Faraday’s experiments and Maxwell’s analysis) electromagnetics, all of which have formalisms that rely on a further branch of mathematics known as differential equations. Differential equations are built upon the foundation of calculus. In each of these fields sophisticated mathematical models are developed based upon simpler models that are generally accepted as true. As an undergraduate I recall deriving a series of coupled partial differential equations called the Navier-Stokes equations that are based, when all is said and done, on Newton’s second law of motion (force is proportional to the time rate of change of momentum, aka mass times acceleration for fixed-mass systems). The Navier-Stokes equations allow us to do fun things like design jet airplanes, sewer systems, and automotive fuel injectors. And they let me write Science of Paddling articles.

What is the link between sophisticated mathematical models and the real world? Philosophers have puzzled over this for centuries, and I’m not qualified to add to their corpus. I’ll let noted scientist E.O. Wilson step in:

For reasons that remain elusive to scientists and philosophers alike, the correspondence of mathematical theory and experimental data in physics in particular is uncannily close. It is so close as to compel the belief that mathematics is in some deep sense the natural language of science. “The enormous usefulness of mathematics in the natural sciences,” [physicist Eugene] Wigner wrote, “is something bordering on the mysterious and there is no rational explanation for it. It is not at all natural that ‘laws of nature’ exist, much less that man is able to discover them. The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve.” [E.O. Wilson, *Consilience*, p. 53]

What I’ve personally experienced is that the predictions of applied mathematics, *when properly qualified*, match phenomena I’ve seen and empirically measured. For those of you wondering when I’d get to it, that’s the (long-winded) takeaway from this article.

We’ll revisit what “when properly qualified” means in a moment. But first, consider a simple example: an electrical circuit composed of resistors, capacitors, and inductors. If I apply a particular voltage signal to this circuit, I can predict what the voltage and current signals will be anywhere in the circuit via a mathematical model readily derived using college sophomore-level linear systems theory. Consequently, one can demonstrate this direct correspondence between the mathematical model and the physical circuit. Ditto for the feedback control of a flexible mirror’s shape based upon a set of mathematical functions called Zernike polynomials, and the design of a sonar sensor to “automatically” resolve the angle of an underwater sound source. I’ve personally modeled and built each of the systems listed above; in the last case I relied on the differentiation property of Fourier transforms – you know, math – to design, develop, and patent an entirely new class of sensor… and it worked in the field as predicted. In each case I was left with a sense of both delight (akin to, “Nailed it!”), and wonder: How can mathematical equations “tell” me what to do, and why does the thing I build subsequently work? And as important, when does it *not* work?
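For a flavor of that circuit prediction, here is a minimal sketch of the simplest case, a first-order RC low-pass circuit driven by a voltage step. The component values and step amplitude are my own illustrative choices; the point is only that a linear-systems model hands you the response in closed form before you ever touch a soldering iron.

```python
import math

# First-order RC low-pass circuit driven by a voltage step V at t = 0.
# Linear systems theory predicts v_out(t) = V * (1 - exp(-t / (R * C))).
R = 10e3   # resistance: 10 kOhm (illustrative)
C = 1e-6   # capacitance: 1 uF (illustrative)
V = 5.0    # step amplitude: 5 V (illustrative)
tau = R * C  # time constant = 10 ms

def v_out(t):
    """Predicted capacitor voltage t seconds after the step."""
    return V * (1.0 - math.exp(-t / tau))

# At t = tau the output has risen to ~63.2% of the step:
print(v_out(tau) / V)  # ~0.632
```

Build the circuit, probe it with an oscilloscope, and the trace lands on that exponential, which is exactly the “direct correspondence between the mathematical model and the physical circuit” at issue.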

It’s easy to be pragmatic about these things, accept that it works, and get on with your life, *especially as an outside observer*. This perspective has limits, though, especially in reading technical articles. It can foster complacency, as embodied in the reaction, “Well, that’s sure some sophisticated mathematics – it all must be true.” Or it can cause one to latch onto something familiar, and base their reaction on the habitual / familiar rather than thoughtfully considering the work as a whole. I caution you, gentle reader, not to fall into these traps.

I’ll let you in on a little secret; give you the secret handshake of science and engineering. Applying calculus and other sophisticated mathematics to objects in the physical world relies upon *assumptions*. For example, Newton’s physics relies on reference frames that are either static or move with a fixed speed. If a reference frame is accelerating, or you start approaching the speed of light, classical physics falls apart and you have to jump on board with Einstein – and things get *really* weird. Underlying my favorite fluid mechanics expression, the Navier-Stokes equations, is the continuum hypothesis. The continuum hypothesis requires that you limit your analysis to dimensions a whole lot larger than the distances between atoms in the fluid. This means you can say “the Charles River is flowing at 900 cfs” rather than having to specify the velocity of each and every water molecule in the river. Seems obvious, but there are entire branches of fluid mechanics (e.g., non-equilibrium statistical mechanics) that treat fluids statistically. Or that nifty sonar sensor I invented: It works great resolving a sound source along a two-dimensional plane, but once you start looking at sound sources in three-dimensional space (aka, “off axis”) performance begins to suffer.

If you are mindful of the *underlying assumptions*, you can bound when a system will perform correctly and when its performance will degrade. The same holds, for the exact same reasons, for *any* physical model or analysis: understanding its underlying assumptions lets you bound when the results presented in a technical article apply, and where the model loses fidelity or breaks down entirely. This is the scientific method through and through, harkening back to Socrates, who noted, “The beginning of wisdom is the definition of terms.” And that’s the lens I hope you will employ when you read any Science of Paddling article, or any article for that matter. In that way, as the reader you too are a scientist.

TELL ME ABOUT PADDLING ALREADY

So where does this leave The Science of Paddling? Where’s the science?

Well, certainly the article “The Deflection Point” has elements of science: the relationship between heart rate and effort during exercise exhibits a break point at the anaerobic threshold owing to our physiology, and I saw this in data I took myself. That’s an example of first-person science: test a theory via experiment, then see what the data show. If others conduct the same experiment and obtain the same result, that’s even better; if enough confirmations are found to make the result statistically significant, Bingo! All I contributed was a framework, based on a model of how hulls experience resistive forces when underway, for making this break point easier to see when plotted. That’s applying a mathematical concept, rooted in a simplified physics model (drag force on racing hulls is proportional to the square of velocity above a certain speed), to experimental data. Is that one tweak by itself… science? I’m not convinced that it is, but it sure is useful.
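A minimal sketch of that linearizing trick, using synthetic numbers of my own rather than the article’s data: if drag force goes as the square of velocity, then plotting force against v² (rather than v) turns the curve into a straight line, and a break from that line becomes easy to spot.

```python
# Synthetic data: drag force F = k * v**2 above some speed.
k = 2.5  # arbitrary drag coefficient for this sketch

speeds = [1.0, 1.5, 2.0, 2.5, 3.0]          # m/s
forces = [k * v**2 for v in speeds]          # quadratic in v...

# ...but linear in v**2. Least-squares slope of F vs. v**2
# through the origin recovers the coefficient k:
x = [v**2 for v in speeds]
slope = sum(xi * fi for xi, fi in zip(x, forces)) / sum(xi * xi for xi in x)
print(slope)  # recovers k = 2.5
```

With real heart-rate data in place of the synthetic forces, departures from the straight line are where the physiology, not the hull model, is talking.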

“Linearizing the Field” was all about exploring underlying assumptions. The NECKRA Points Series uses a straightforward process to weight the finishing times of various canoes and kayaks, paddled by various combinations of male, female, youth, and senior / veteran paddlers, in order to rank race results. The underlying assumptions of this approach were likely never stated before the article was published, perhaps because they were implicit and common-sensical. But as the article points out, this approach breaks down when more than one weighting factor is applied to a given finishing time, and grows worse as more are introduced. True result; possibly useful; likely of interest to a tiny fraction of the readership. But the article shone a light on the role of underlying assumptions in something many of us in New England took for granted. Is this science? Yes, in that it built upon a previous model (the Points Series) and developed new conclusions, verified by testable mathematics, that revealed inherent assumptions in the NECKRA model. The article gets at method, i.e., question everything.

Finally, how about the predictive model employed in “About the Bend”? For those of you who haven’t memorized all of my articles (which is all of you, I hope!) I applied someone else’s empirically-derived sprint canoe dynamical model to assess the role of the bend angle in bentshaft paddles for the marathon stroke. I wasn’t attempting to make any *quantitative* performance prediction per se; sprint and marathon canoe stroke mechanics are different, but at least there was some data to work with. All I was looking for was insight into whether the analysis suggests that the bend angle makes any difference (it does), and whether the model suggests there could be an optimal bend angle (it does). That’s it. There was definitely no science involved, just some math twiddling where I extrapolated data taken from a sprint canoe paddler, who had employed a sprint canoe stroke, and applied it somewhat tenuously to marathon paddling.

Yet despite my efforts to couch the analysis in qualifying statements, “About the Bend” generated more email traffic than any other article I have written. At the risk of being rude – which I don’t intend to be in the least – most of the questions could be answered with, “Please re-read what I wrote about the *underlying assumptions*,” and, “No, the article doesn’t quantitatively predict X,” for whatever conclusion ‘X’ was. All of its predictions were qualitative.

In essence, this is what Professor Dowell was asking about, all those years ago. Back then I had taken a mathematical model, constructed an experiment that embodied certain features of the model, taken data, and used the model to interpret the experimental results. In a way I was fitting data to a model without fully appreciating (or at the time, understanding) the assumptions implicit in the model itself. Kind of like “build your own tautology,” where I was asking the model to be consistent with itself. In retrospect, some of the assumptions I employed at the time were obvious and reasonable, since the acoustic wave equation is derived from the Navier-Stokes equations (which are based upon Newton’s second law of motion, which we accept as correct because it works) and the equation of fluid mechanical continuity (which is based on the notion that fluid mass is conserved, and not lost; since the experiment didn’t “leak” air this was a valid assumption). However, the thermodynamic equation of state – which relates acoustic pressure and density – that I used to derive the model had an implicit assumption that I could only articulate a year later, and it was the crux of the matter. I later understood that the experiment was actually looking for when and how a certain property embodied in this equation of state changed with frequency; it was *testing an assumption of the model*. As an undergraduate I was looking at it from two levels of abstraction higher. And I was enamored with all the nifty equations.

We’re all busy; we all like what we like; I for one am fascinated by, well, everything. In my periodic science and engineering news trolling across the Interwebs I stumble upon headlines touting remarkable achievements in medicine, engineering, astrophysics, and the like. Yet when I read an interesting article a little voice starts whispering in my ear, “Where is the source article that this is based on?” And when I finally locate the source article I first search for disclosures about the underlying assumptions: test population size, measurement limitations, whether experiments were performed on humans or fruit flies, etc., in order to separate the wheat from the chaff. In understanding the assumptions, you can quickly bound the range of applicability and thus the utility of any result in science, engineering, mathematics, medicine, etc.

One might argue that this process of examination and discernment sucks all of the fun out of life – or that I don’t have a life if this is what I do for fun! For me, searching for and understanding underlying assumptions and limitations lets me know when to discount a headline or claimed result, and when to really get excited about it. That way I can put my energies into following science and engineering that will have, in the words of my friend and thesis committee member Dr. Dan Hegg, “Depth, breadth, and permanence.” That’s where long-term satisfaction lies. It also helps me decide when I can put a particular result into practice, hopefully to make my life easier, more fun, or best of all, provide more insight.

© 2019, Shawn Burke. All rights reserved.

v1.0