What Do We Not Know, and When Did We Stop Knowing It?

A month or so ago Turkish president Recep Tayyip Erdogan gave an interview where he said, “My biggest battle is against interest. My biggest enemy is interest. We lowered the interest rate to 12%. Is that enough? It is not enough. This needs to come down further.”

Given the country’s soaring inflation, Erdogan’s anti-interest adventure has been met with much … confusion, especially outside Turkey. After all, the country’s inflation has been persistently high, over 80% in recent data, and yet his central bank continues to cut rates, contrary to monetary policy orthodoxy.1 Such orthodoxy says, in its simplest form, that in the face of high and/or rising inflation a country’s central bank should raise rates, slowing the supply of credit, contracting the money supply, and generally making people and companies less likely to spend.

So, Erdogan is doing the opposite of orthodoxy, and doing it proudly. It’s easy to laugh, or shake one’s head, or worry about the effect on Turkish citizens’ net worth, but let’s engage with it a little further. Why is he doing it? And why aren’t his citizens losing their minds over it?2,3
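As a minimal sketch, the Taylor rule from footnote 2 can be put in code. This is a toy calculator under the rule's usual assumptions of a 2% inflation target and a 2% equilibrium real rate:

```python
# Toy Taylor rule calculator: r = p + 0.5y + 0.5(p - 2) + 2,
# where p is inflation and y is the output gap, both in percent.
# The two 2s are the assumed 2% inflation target and 2% equilibrium
# real rate; these are the textbook defaults, not universal constants.

def taylor_rate(inflation: float, output_gap: float) -> float:
    """Suggested nominal policy rate, all inputs and output in percent."""
    return inflation + 0.5 * output_gap + 0.5 * (inflation - 2) + 2

# At 4% inflation and a zero output gap, the rule suggests 7%:
print(taylor_rate(4, 0))  # 4 + 0 + 1 + 2 = 7.0
```

Note that the suggested rate rises more than one-for-one with inflation, which is exactly the kind of arithmetic Erdogan declines to countenance.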

Erdogan seems, in part, to be infected with at least two bad ideas. The first bad idea is that charging interest is evil, what Islam calls “riba”, the charging of interest on loans. It is explicitly forbidden, which requires entertaining contortions to get around. Leaving those contortions aside, condemning interest rates as evil is popular in Turkey, where something like 99.8% of the population is registered by the state as Muslim. Condemning interest rates is populist religious politics, even at these rates of inflation, and even when the evidence suggests you’re fairly badly wrong.

Erdogan’s second bad idea is that higher interest rates cause inflation. While this might sound daft (monetary policy orthodoxy says the relationship runs the other way around), he has argued in public that he has it right, and others don’t. Granted, the evidence, again, isn’t in his favor, but he is bravely fighting monetary policy orthodoxies. This idea, that higher rates cause inflation, and vice-versa, isn’t something Erdogan cooked up on a raki bender. In economic circles, it’s sometimes called Neo-Fisherism, and it provokes … debates.4
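The Neo-Fisherian arithmetic in footnote 4 can be made concrete with a toy illustration (the 2% long-run real rate here is an assumption for the example, not a claim about any actual economy):

```python
# The Fisher identity: real rate = nominal rate - inflation, so,
# rearranged, inflation = nominal - real. The Neo-Fisherian move is to
# treat the long-run real rate as fixed and read causation backwards:
# peg the nominal rate low, and inflation must eventually fall to match.

def implied_inflation(nominal_rate: float, real_rate: float) -> float:
    """Inflation consistent with a pegged nominal rate, in percent."""
    return nominal_rate - real_rate

# With an assumed 2% long-run real rate, pegging nominal rates at 3%
# "implies" inflation settling at 1%:
print(implied_inflation(3, 2))  # 1
```

The identity itself is uncontroversial; the contested part is the direction of causation, which the function's name deliberately takes at Neo-Fisherian face value.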

****************

So, here we have two bad ideas, held onto firmly in the face of credible, contradictory evidence. What causes people to hold onto bad ideas in the face of strong evidence against them? In his classic book When Prophecy Fails, psychologist Leon Festinger explained it like this:

A man with a conviction is a hard man to change. Tell him you disagree and he turns away. Show him facts or figures and he questions your sources. Appeal to logic and he fails to see your point.

Leon Festinger, When Prophecy Fails

We often think we know a thing, but we don’t actually know it. Most obviously, we may be wrong, even if we could have been right. Now, there are many ways to be wrong this way—being stupid, incurious, lazy, biased, or politics-loving and so on all help—but not all of them reveal themselves easily. You can go a long time being wrong about something inconsequential before being wrong becomes consequential, if ever.

Let’s say I’m wrong about my conviction that life on earth was seeded by teenage sentient frog aliens having a laugh. How does that belief become consequential, assuming it’s an error?5 In what way does it cause me to make poor decisions today? What feedback do I get that I might interpret as cautionary? Not much, unless I share my beliefs publicly, thus attracting derision, so I don’t do that.

On the other hand, you can get direct and cautionary feedback when you hold incorrect views, depending on the lag times and the consequences. For example, thinking that gravity doesn’t apply to you near the surface of Earth usually reveals itself as a misapprehension when you jump out of a tree to your death. But if I jumped out of a tree and didn’t fall to the ground for weeks, or even months, I would be wrong, but it wouldn’t be obvious to me for some time. I’ll come back to this in a bit.

Years ago Kathryn Schulz wrote in her book Being Wrong about what it feels like to be wrong: It generally feels just like being right. The feeling is the same, and it only changes when you are confronted with contradictory information, and then, in the best case, you go Ooooooh, I’ve been wrong all this time and didn’t know it! And you change your mind. And even that’s often not enough, as philosophers of science have written about in exhaustive detail. People, even when confronted with credible contradictory information, are very good at hanging onto their beliefs.

There are more insidious ways of being wrong. We may have an idea that was once true but is no longer. How many moons does Jupiter have? Do you remember? You were almost certainly once taught a number, or maybe even looked it up. I have a vague sense that it’s around a dozen or so. But that isn’t true. That was true back in the 1990s, and all the way back to the 1960s, but it has changed dramatically in recent years. We now know of 90 moons around Jupiter, admittedly not all of them named, after moon-finding bonanzas in the 2000s (46!) and 2010s (34!).6

Another way you can be wrong is that your belief is plausible but not well tested, sometimes because of inadequate studies, but often because of the belief that it simply must be true. The gold standard for testing here is an RCT, short for “randomized controlled trial”. This is where you randomly assign people to different groups, generally a treatment group and a control group, pick a relevant dependent variable, and then see what happens. This is an important way of advancing knowledge, arguably one of our greatest scientific tools.
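As a toy sketch of those mechanics (all data invented, and the 1.5-point treatment effect is baked in purely for illustration):

```python
import random
import statistics

# Toy RCT: randomly assign subjects to treatment or control, then
# compare group means on the dependent variable. All numbers invented.
random.seed(42)

subjects = list(range(100))
random.shuffle(subjects)                      # the randomization step
treatment, control = subjects[:50], subjects[50:]

def outcome(subject: int, treated: bool) -> float:
    # Invented response: baseline noise plus a built-in 1.5-point effect.
    return random.gauss(10, 2) + (1.5 if treated else 0.0)

treated_scores = [outcome(s, True) for s in treatment]
control_scores = [outcome(s, False) for s in control]

effect = statistics.mean(treated_scores) - statistics.mean(control_scores)
print(f"estimated effect: {effect:.2f}")      # should hover near 1.5
```

Because assignment is random, any systematic difference between the groups is, up to sampling noise, attributable to the treatment; that is the whole trick.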

Because no good deed can go unpunished, RCTs aren’t loved. They are costly and time-consuming, and there can be ethical issues with assigning people to groups when one is reasonably confident bad things will happen to one group, or that known good things will be withheld from it. And then there are things that just seem … obvious. Of course replacing estrogen in menopausal women will improve health. Why do we need an RCT to prove that something so obvious is a good idea? It’s obvious.7

But does everything really require an RCT? Some things genuinely are obvious. I don’t need an RCT to prove that ceasing to hit myself in the head with a rock is a good idea. There is, by the way, a famous paper about this question of whether we need RCTs for everything.8

This is hugely appealing, obviously. Some things are obvious, after all. But standards change, our ability to test things changes, and even effect sizes can diminish over time. I got to thinking about this recently when I saw a comment in the Reddit group /r/medicine about things that are true about a medical specialty but that no one ever says out loud. (And in the following comment from this doctor, “OB” is “obstetrics”.)

These are usually called “medical reversals”,9 and they are surprisingly common in medicine, especially in cardiovascular medicine, which may or may not unnerve you.10 We can take some solace in knowing that medicine does reverse itself when it gets things wrong, or, more to the point, when it is shown incontrovertible proof that it got something wrong, but that is a rarity. Many fields outside medicine hold onto bad ideas for a very long time.

Of course, there is more to it than emotion and stubbornness. There are also statistical arcana worth remembering, like effect size, type I and type II errors, significance, power, confidence intervals, and so on. And in a perfect world, I would explain all of those to you in a way that wouldn’t madden statisticians and bore normals, but that’s just not possible.11
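For a taste of just one of those arcana, here is a toy simulation (all numbers invented) of the type I error rate: even when there is no effect at all, a test at the conventional 5% significance level will still “find” one in roughly 5% of experiments.

```python
import random
import statistics

# Toy type I error simulation: the null hypothesis is true (the mean
# really is 0, sigma known to be 1), yet a 5% significance test still
# rejects it about 5% of the time. All parameters invented.
random.seed(0)

def experiment(n: int = 50) -> bool:
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = statistics.mean(sample) * (n ** 0.5)   # z-score, known sigma = 1
    return abs(z) > 1.96                       # reject at p < .05

trials = 2000
false_positives = sum(experiment() for _ in range(trials))
print(false_positives / trials)                # roughly 0.05
```

Run enough studies of true nothings and you will reliably publish some somethings, which is one small window onto the reproducibility crisis.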

All of these problems exist in economics, where we started. There are immense confounds, not least of which is that much of what goes on there, especially in monetary policy, must, in practice, be based on fairly unrealistic assumptions about how humans think and behave. Central banks must be seen, for example, to be committed to keeping rates high in the face of inflation, it is argued, because to do otherwise would suggest you will give up the fight before inflation is vanquished. But that implies you will keep rates high even after it’s obvious that inflation is declining, which means you will keep rates high for too long, which will likely cause a recession, all because humans, being humans, are trying to predict what you will do next, even if you’d rather not do it. Whew. This circularity is part of rational expectations modeling, where economists assume people think coherently about the future.12

So, to return to where we started, where does this leave our budding monetary policy theorist, Turkey’s Recep Erdogan, and his two bad ideas? Inflation is still high, and rates are still low, even if his central bank has indicated it may not cut rates much more (which may or may not cost him his job). And Erdogan’s ideas are still bad ideas. But holding them, like holding many bad ideas, feels, to him, like holding good ideas, because they’re not costing him anything, and may even be gaining him supporters. Knowledge expires or is wrong, even knowledge from the best-intended research, but it doesn’t matter unless there are consequences.13 And there is no level of interest rates, or inflation, that can make him stop holding his bad ideas, as long as they keep getting him what he wants.

  • 1
    Aside: And it really is his central bank: Erdogan has gone through a central banking leader every year or two for the last decade. You do what President Erdogan wants, or you don’t get to pretend to run the country’s central bank anymore. Even if you don’t really run it.
  • 2
    I keep talking about orthodoxy, and so it’s worth a reminder of what that is, to a first approximation anyway. One version of Here’s What You Should Do With Rates in Response to Inflation is the Taylor rule, which works like this:
    r = p + .5y + .5(p – 2) + 2
    where
    r = the federal funds rate
    p = the rate of inflation
    y = the percent deviation of real GDP from a target

    In short, central bank rates should rise with inflation, and, given the coefficients, somewhat faster than inflation at that. There may or may not be a quiz.
  • 3
    If you’re super interested in monetary policy orthodoxy, or how this whole thing works, Google it, or check Greg Mankiw’s Principles of Economics.
  • 4
    The idea, in short, is that more things affect inflation than interest rates, like demographics, supply chains, growth, immigration, and so on. Sure, to this argument, you can lower inflation in the short-run by raising rates, but there is a kind of natural rate of inflation in the economy, given the above factors (and more), and it will eventually revert, no matter what you do.
    Further, and this is even sneakier, advocates argue that the real interest rate is the nominal (central bank) interest rate minus the inflation rate, which is true, but they turn it around in a kind of mathematical jiu-jitsu. They argue that nominal rates are caused by inflation plus the real rate. So if the central bank pushes nominal interest rates down to a low level, then inflation responds by going lower still. Yes, this makes precious little sense, which is why it’s a bad idea, if a weirdly widespread one.
  • 5
    Note: I’m not saying I’m wrong. I mean, who is to say? After all, frog DNA is freakishly unusual, with a third of it being transposons, and three-quarters of that being DNA transposons, which can move genes around more or less on a lark. It’s suspicious.
  • 6
    In his terrific book, The Half-Life of Facts, Sam Arbesman wrote about this phenomenon, and once apprised of it you find yourself looking continually over your cognitive shoulder, wondering which facts have snuck up and changed when you weren’t paying attention. Or at least I do.
  • 7
    While estrogen therapy came into vogue in the 1960s, there was a sharp decline in the 1970s, after reports of a 4–14 times higher risk of endometrial cancers, all linked to estrogen therapy. The FDA required a warning on all estrogen products that indicated a risk for blood clots and cancer.
  • 8
    Yeh, Robert W., et al. “Parachute use to prevent death and major trauma when jumping from aircraft: a randomized controlled trial.” BMJ 363 (2018). The paper described an attempt to test the efficacy of parachutes in jumping out of airplanes by randomly assigning people to groups with and without parachutes, and seeing which group had more deaths. It found there was no difference in death rates, but added a cautionary note: “[T]he trial was only able to enroll participants on small stationary aircraft on the ground, suggesting cautious extrapolation to high altitude jumps.”
  • 9
    Wikipedia: When a newer and methodologically superior clinical trial produces results that contradict existing clinical practice and the older trials on which it is based.
  • 10
    A 2013 study (Prasad V, Vandross A, Toomey C, Cheung M, Rho J, Quinn S, Chacko SJ, Borkar D, Gall V, Selvaraj S, Ho N, Cifu A (August 2013). “A decade of reversal: an analysis of 146 contradicted medical practices”. Mayo Clin Proc. 88 (8): 790–8. doi:10.1016/j.mayocp.2013.05.012) of a decade of medical journal articles found that of the 363 articles focused on standard of care practices, 146, or about 40%, led to reversals of the practice. That is a very big number.
  • 11
    Fine, check out some of the visualizations here, which are terrific, especially on the widely misunderstood topics of significance testing and effect sizes. I’ll just say that both apply broadly to the credibility of social sciences research, like economics, but also to medicine. And ignorance of them leads directly to problems, like the ongoing reproducibility crisis, where claimed effects disappear once new researchers study them.
  • 12
    and plan their spending and savings accordingly. They kinda don’t, of course, but that’s not something models cope with very well.
  • 13
    And most published research findings are wrong. See: Ioannidis JPA (2022) Correction: Why Most Published Research Findings Are False. PLOS Medicine 19(8): e1004085. https://doi.org/10.1371/journal.pmed.1004085