Readings: Wildfires, Loot Boxes, Fruit flies, Lego, etc.

I got out of the car and stamped on the cigarette. “You don’t do that in the California hills,” I told her. “Not even out of season.”
—Philip Marlowe, in Raymond Chandler, “Playback” (1958)

Sometime over the next day or so more people in California are likely to have their electricity turned off than lost power during any US hurricane in 2019. Around 2m people — 800,000 customers — will lose power as utilities try to prevent their lines from sparking wildfires in the current Santa Ana event. 

And everyone knows who to blame—Pacific Gas & Electric! Climate change! Hedge funds!—and they’re mostly wrong.

First, however, some history. Wildfires in California predate human habitation. There is charcoal evidence in river beds of massive fires long before humans—aboriginal or otherwise—settled in the state. The state has what ecologists call a fire-adapted ecology, one that is not only prone to regular, massive fires, but even has many species of plants adapted for that event.

The typical timing of these fires is high winds and low humidity during what are called Santa Ana events, usually in the fall, and the historical trigger was lightning. (Many literary types know about the relationship between California fires and Santa Anas because of writer Joan Didion, whose essay mentioning them is an annual citation ritual. On a point of un-clichéd pride, I will not mention Joan Didion again.) Millions of acres burned every decade that way in what eventually became California, mostly during Santa Anas.

That, of course, changed once humans came to the state. While Santa Anas continued, fires no longer needed solely lightning triggers. The first native populations were active fire-setters, burning tens of thousands of acres every year to ready the land for planting, transforming the California landscape through the power of fire. The earliest Spanish explorers noticed this, with some reporting, as they sailed up the coast, that this was a land of smoke, with a pall often covering the landscape.

But newer triggers yet arrived when the state became settled by non-natives. Humans are profligate sources of ignition, from campfires, to gas-powered equipment, to pyromania, to, yes, power lines falling into dry landscapes. We have taken a state prone to massive fires and brought it what it didn’t need: many new ways to be set on fire. At the same time, we have spread our ignition sources throughout the state. Where lightning was once limited to the mountains, and natives (largely) to the state’s coastal plains, humans are now everywhere in California, so wildfires can now start everywhere.

You can see this pattern in the following figure, where California fire frequency soared with the state’s population. Note that the growth in fire frequency slowed in recent decades as settlement slowed, replaced by fewer, larger fires.

But let’s get back to causes. To give you some more historical context, here is a table of the largest California wildfires over the last hundred years—and their causes.

To summarize the causes:

    • Humans (accidentally or on purpose): 9
    • Lightning: 7
    • Powerlines: 4

The takeaways are two-fold. First, most California fires aren’t caused by powerlines. Second, most fires are caused by humans—5 or 6 of the ten largest, depending on how you want to allocate things. Take away humans and you take away most of the ignition sources. You take away most of the consequences too, as the following figure shows, where the number of California properties exposed to wildfire risk is larger than in the rest of the country combined.

But California is inhabited, and that has consequences, like properties and powerlines. So, we need to answer a few questions here about why powerlines cause fires, whether that’s increasing, whether it’s negligence, and whether there is anything that can be done about it. 

Powerlines cause fires when they fall into dry landscapes, spark, and cause stuff to start burning. It’s that simple. Of course, they don’t fall into landscapes and start fires every day. It requires a bunch of pre-conditions, like low humidity, dry fuels, and (usually) winds. Without these elements, California fires either don’t start or stay small. 

Can this powerline-falls-into-dry-stuff problem be prevented? Is it somehow negligence? There isn’t anything that can be done about dry fuels, low humidity, or winds, so the question becomes, Can utilities prevent power lines from falling into landscapes during wind events?

Sure, bury them. Buried powerlines can’t fall into dry stuff and cause it to catch fire. But that is not a realistic solution. Pacific Gas & Electric alone has 134,000 miles of overhead power lines in the state, and burying them would cost something like $100 billion, according to one estimate. Burying even a fraction of the lines would still cost billions, leaving aside the environmental damage, or the unintended consequences of having power crews working in fire-prone landscapes to bury the lines, thus almost certainly starting fires in the pursuit of preventing fires. (Some have argued that paying homeowners to trim back brush would be a cheaper and better solution. While maintaining a “safe space” around homes in California is a good idea, and this could save some homes, it does not address the underlying trigger issue of what causes wildfires in the first place. Further, it ignores the “loot box” problem that I expand on below.)

Of course, that won’t stop many from making the “negligence” argument. Having made out nicely by turning PG&E into the bad guy for recent fires in the state—its lines having caused fires and billions in property losses—some hedge funds have turned PG&E into a gaming-style loot box, something that, with a modest investment of legal fees, they can freely pillage for its cash contents. To, in effect, close the loot box, PG&E has now been forced to turn off a huge slice of California’s power when stronger Santa Anas blow. (There is another solution, of course, but I’ve been reluctant to mention it. The California state legislature could inoculate utilities against negligence lawsuits brought by property owners if a fire starts after power is left on during a wildfire event. I have issues with this solution, not least the unanticipated consequences of further privileging a regulated industry, but it would begin to address the loot box problem if the state simply said that utilities cannot be sued for leaving the power on during wildfire conditions. Residents cannot have it both ways.)

So, are we simply screwed? The state is going to have fires, and this may be getting worse as a result of climate change, and utilities are going to cause some of these fires, and people always want someone else to blame. This isn’t a great combination.

We may not be screwed, but, as should now be obvious, blaming utilities is pointless. No-one wants to ask the right questions, like why, in such a tightly-coupled system, wildfire-prone landscapes are inhabited at all, and why those properties don’t see insurance prices reflecting the real systemic risks created by their existence.

Houses in the California wildland-urban interface can be thought of as barbecue starters in a butane landscape—cheap sources of ignition with systemic consequences for the rest of the state, as fires started there blow west during Santa Anas into more heavily populated areas. Risk simply isn’t priced properly in California. Population growth in previously rural counties comes with consequences for the rest of the state, as historical data shows, and this ignition growth re-accelerated during the go-go, we’ll-fund-homes-built-anywhere years before the financial crisis. Entire landscapes were transformed by tract homes, many built directly in the path of previous wildfires. All these thousands of homes, new and old, come with powerlines, people, and lawn-trimming appliances—ignition sources—virtually none of which are paying insurance prices commensurate with either their own risk, or with the systemic risk they create for others in the state.

But meanwhile, rather than talking about the real issue, let’s go right on blaming utilities, or blaming climate change, or blaming anything but our own lack of a grounded sense of what it means to live in a fire-prone landscape. That’s much more fun than talking about how the combination of mispriced risk, the housing bubble, and loot boxes embedded in a tightly-coupled system of urbanism and wildfire is a really, really bad idea, one that will only get more costly over time.

<><><><><><><><><><>

Finally, here are some articles and papers worth reading:

Science & Technology

Life Sciences

Finance & Economics

Readings: Red Meat Therapy, Brexit, Parking, etc.

One way to think about the recent meta-analysis paper on the health consequences of eating red meat is to think of red meat as a medicine. Let’s call it Red Meat Therapy, RMT for short, and we can imagine administering RMT to patients.

This will seem weird. First, red meat isn’t usually thought of as a medicine, any more than, say, panang curry is thought of as a medicine, but let’s put that aside for a moment. Second, and in case you didn’t read the paper, it showed (based on meta-analysis of a host of other papers) that eating red meat is likely bad for you. We don’t usually administer non-medicines that are likely bad for you to people and call them medicines, or at least we don’t usually do that without calling it quackery.

But bear with me. The paper argued that the evidence against eating red meat is so weak that it’s hard to make a strong recommendation against eating it. That’s not the same thing as saying red meat is not bad for you, let alone that red meat is good for you. We just aren’t sure how bad it is, but it seems at least a little bad. Admittedly, it’s not clear what to do with that information. Lots of things are a little bad — sometimes it seems like most things in life (like life itself, really) are at least a little bad.

What are we to do with things that are a little bad? One thing we can do is ignore them. We do that a lot. We can also put them in practical and quantitative terms, which seems like a non-awful idea.

We have ways of doing that kind of calculation. One way is to use a measure called “number needed to treat” (NNT). It tells you how many patients need to be treated with a particular medication before we expect to see an effect, like, say, a saved life. (It’s a fun calculation. NNT is the inverse of the absolute risk reduction (ARR), which is, in turn, the difference between the event rate under a control treatment (CER) and that under an experimental treatment (EER), or ARR = CER – EER. To be specific, if a drug reduces the risk of a bad thing happening from 50% to 40%, the ARR is 0.5 – 0.4 = 0.1, which gives us an NNT of 1/ARR = 10. You would, in other words, need to treat ten people to expect one to benefit.) In the best case the NNT is 1, where everyone who is treated benefits. That mostly doesn’t happen, other than in fake medicines for, like, hair loss.
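
For the quantitatively inclined, here is that arithmetic as a minimal Python sketch. The function and the numbers are just the worked example above, nothing more:

```python
# Number needed to treat (NNT), per the worked example above.
def nnt(cer: float, eer: float) -> float:
    """NNT = 1 / ARR, where ARR = CER - EER (control minus experimental event rate)."""
    arr = cer - eer
    if arr <= 0:
        raise ValueError("no absolute risk reduction; NNT is undefined")
    return 1 / arr

# The example above: a drug cuts the risk of a bad outcome from 50% to 40%.
print(nnt(0.50, 0.40))  # 10.0 -- treat ten people to expect one to benefit
```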

But you can turn NNT around and calculate the number needed to harm (NNH). That, as the name suggests, is how many people need to have a particular treatment before it’s likely we hurt someone. Granted, that isn’t the way we usually think about therapies, for obvious reasons, but it can cover some interventions. 

You can apply that method to our RMT. Pretending red meat is medicine, taking the base incidence (around 4.6%) of one of the main projected bad consequences of excessive red meat consumption (colorectal cancer), and then comparing that to the study-based projected increase in colorectal cancer (this is obviously controversial — hence the meta-analysis we are writing about here — but a mid-point is roughly a 20% increase, putting the incidence at 5.4% or so), we can say, at least approximately, how many people would have to take Red Meat Therapy before we expected an additional case of colorectal cancer.

So, how effective is RMT? Not so good, at least under these assumptions. If we were trying to give one more person colorectal cancer by stuffing them regularly with red meat, we would need to treat around 100 people with RMT. If this were a drug, we would likely call it a failure — it doesn’t do much for most of the people who take it. 
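
Flipping the earlier sketch around gives NNH. This is, again, just illustrative arithmetic on the incidence figures quoted above (roughly 4.6% baseline colorectal cancer incidence, 5.4% under heavy red meat consumption), not epidemiology:

```python
# Number needed to harm (NNH) for "Red Meat Therapy" (RMT).
def nnh(control_rate: float, treated_rate: float) -> float:
    """NNH = 1 / ARI, where ARI = treated rate - control rate."""
    ari = treated_rate - control_rate
    if ari <= 0:
        raise ValueError("no absolute risk increase; NNH is undefined")
    return 1 / ari

# Baseline incidence ~4.6%, rising to ~5.4% under RMT:
print(round(nnh(0.046, 0.054)))  # 125 -- in the ballpark of the "around 100" above
```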

To return to the original study, this does help us think more coherently about the paper. It isn’t that red meat isn’t bad for you, as some of the resulting media articles (“Red meat is back on the menu!”) implied. It’s just that the effect size is so small, and the confounds so large, that we can’t say much about red meat’s actual effect on most people, most of the time, in most real-world situations, especially given all the stuff we do to ourselves that isn’t good for us. (There is a separate issue here, which is how seriously to take meta-analyses of other papers that themselves have weak conclusions based on their data. I sometimes argue that such meta-analysis papers are the research equivalent of collateralized debt obligations (CDOs): bundles of individually squirrelly things that magically become credible when wrapped together in a neat quantitative package. This, of course, should feel unsettling, and it is no more defensible in meta-analysis papers than in mortgage securities, but that’s a topic for another day.) Humans are complex, mischievous systems, and we shouldn’t be surprised when our bodies conspire to make even the best-intentioned nutrition researchers look silly.

<><><><><><><><><><><><><><><><><><>

Here are some papers worth reading:

Life Sciences

Science & Technology

Economics & Finance

Readings: The Money Machine, Comorbidities, Consumption, etc.

It’s not clear to me what makes people who encounter financial markets mostly through Vanguard target-date retirement funds and drive-by encounters with the Wall Street Journal or CNBC think they should opine so freely. Then again, that sort of hey-look-at-me opining doesn’t stop anyone in other domains — just listen to recent retirees offer each other healthcare advice some morning in a Starbucks — so perhaps I shouldn’t be surprised.

As I write here too often, people don’t understand the systemic consequences of borked (or successful, for that matter) initial public offerings.

  • It isn’t just about the shareholders. But that matters.
  • It isn’t about the private investors in that company. But that matters.
  • It isn’t about the prospective public investors in that company. But that matters.
  • It isn’t about the underwriters. But that matters.
  • It isn’t about the valuation of similar public or private companies. But that matters.

It’s about a system, a Great Money Machine that grinds up money being used for one thing and funnels it to other things. When an offering fails, money goes other places. Public institutions were selling Thing X to generate cash for buying the new Thing Y, so they can now buy back X, or buy a Z, or whatever. And to the extent they depressed prices in some things, that will stop. At the same time, companies that had planned on buckets of post-IPO money — underwriters, the company itself, etc. — now don’t have that money, so they don’t get to do whatever they had planned. The money stays elsewhere in the system, and almost certainly won’t be spent for similar purposes.

The list is very long, and grows longer and more material the larger the IPO that didn’t happen. Thinking of it narrowly — all other companies using software and heavy capital expenditures to try to brand themselves as tech companies with tech valuations will suffer, but, hey, there aren’t any others of those, so we’re good — is silly, and a gross misunderstanding of how the Great Money Machine of markets works.

<><><><><><><><><><><><><><>

Here are some papers worth reading:

Life Sciences

Science & Technology

Economics & Finance

Weekend Readings: Nutrition, Fossil Fuels, Bad Odds, Bonds, etc.

“Life is a gamble, at terrible odds. If it were a bet you wouldn’t take it.”
― Tom Stoppard, Rosencrantz and Guildenstern are Dead

Some papers that I’ve been looking at during the week. I’ve sorted these by broad topic area for your weekend reading.

Life Science, Exercise, & Nutrition

Science & Technology

Finance & Economics

Readings: Bernoulli, Repo Markets, Hunger, etc.

Among the more useful things I know about complex systems is this: When you fix a complex system under pressure, it either blows up again right away, or it doesn’t for a while.

I learned this when working in golf course maintenance, but was reminded of it again today when looking at what’s been going on in repo markets. Turning first to the golf course, the plastic irrigation pipes there would regularly break, sometimes shooting geysers impressively high in the air. When pipes broke you had to quickly turn the water off, dig out enough pipe to do a repair, and then wait a little for the glue to set. And a lot of leaning idly on shovels was involved, which was teenage-me-pleasing.

And then, the fun started: We would turn the watering system back on, and do a quick circuit of the golf course, looking for new breaks. While breaks could happen anywhere in the system, the most impressive ones were always in the same area: the 10th and 18th fairways, which were side by side.

Why there? A little Bernoulli will help here. The total energy in a pipe with flowing, incompressible fluid can be approximated as follows:

TE = z + v²/2g + p/ρg

TE: total energy (head)
z: elevation
v: velocity
p: pressure
ρ: fluid density
g: gravitational acceleration

In short, the energy in a pipe is a function of the pressure and the velocity of the fluid flowing through it. If a larger diameter pipe flows into a smaller diameter one, the flow speeds up, but the pressure drops. You see that happen when you put your thumb on the end of a garden hose: You can now sneakily shoot water over your car and wet your kids, but the hose doesn’t blow up. You’re increasing the velocity, but keeping the total energy the same.

(This is Bernoulli’s principle, named after Daniel Bernoulli, one of a family of maddeningly brilliant but pleasingly self-destructive Bernoullis, who discovered the inverse relationship between fluid pressure and velocity. And yes, it bothers me too that pressure doesn’t rise with velocity when you reduce the diameter of a pipe. I know it doesn’t, and I have always known it doesn’t, but it has never felt right, if you know what I mean. It feels almost sneaky — all that pressure just sitting there in big pipes — and I blame Daniel Bernoulli. Maybe even his whole damn family of polymaths.)

Anyway, a picture might help here. In the following screen grab from a simulation I’ve been messing with, you can see how a changing pipe diameter leads, all else equal, to lower pressure, but higher velocity. Of course, this neglects friction, which causes velocity to be lower than it would otherwise be, thus raising pressure in the smaller-diameter pipe somewhat, but let’s not get into that.
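
If you would rather poke at numbers than screen grabs, here is a minimal, frictionless version of the same effect in Python. The pipe sizes and flow numbers are invented for illustration; they are not taken from the simulation:

```python
# Continuity plus Bernoulli for a horizontal pipe that narrows:
# the water speeds up and the pressure drops, total energy unchanged.
RHO = 1000.0  # density of water, kg/m^3

def downstream_state(v1: float, p1: float, d1: float, d2: float):
    """Velocity (m/s) and pressure (Pa) after a diameter change, same elevation."""
    v2 = v1 * (d1 / d2) ** 2                # continuity: A1*v1 = A2*v2
    p2 = p1 + 0.5 * RHO * (v1**2 - v2**2)   # Bernoulli with z1 = z2
    return v2, p2

# 2 m/s at 300 kPa in a 100 mm pipe, narrowing to 50 mm:
v2, p2 = downstream_state(2.0, 300e3, 0.100, 0.050)
print(f"v2 = {v2:.1f} m/s, p2 = {p2 / 1e3:.0f} kPa")  # v2 = 8.0 m/s, p2 = 270 kPa
```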

This is all well and good, you’re thinking, but what does it have to do with golf courses, let alone, you know, repo? Well, the trouble with this irrigation system was that it pumped water from a lake west of the course, up to the 10th/18th fairways, and from there up to the rest of the golf course.

This is a problem. Because the key word in the preceding sentence is “up”, and it occurs twice. There was about a 15m height difference between the pond and the two valley fairways, and another 15m height difference to get up to the holes above the valley. That’s a 30m difference, which turns out to be hugely important when it comes to pumping water. Why? Because water is heavy and doesn’t want to go anywhere, especially if water is sitting on water in a big column, like in a sloping pipe. Dealing with friction losses requires higher pump pressure, but dealing with sizable elevation differences requires much more effort, increasing the pressure in the valley pipes much more than would be the case if, say, the pond was up on the ridgeline and the golf course was down below. (To be fair, this would introduce other problems, but no-one said golf course irrigation system design was easy.)
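
To put rough numbers on “up”: the pressure needed just to hold up a column of water is ρgh. A quick Python back-of-envelope for the two lifts described above, using standard constants and ignoring everything else:

```python
# Static head: extra pump pressure needed per meter of elevation, before friction.
RHO = 1000.0  # density of water, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def static_head_pressure(height_m: float) -> float:
    """Pressure (Pa) needed to support a water column of the given height."""
    return RHO * G * height_m

for h in (15, 30):  # pond -> valley fairways, then pond -> holes above the valley
    p = static_head_pressure(h)
    print(f"{h} m of lift: {p / 1e3:.0f} kPa ({p / 6895:.0f} psi)")
# 15 m: ~147 kPa (~21 psi); 30 m: ~294 kPa (~43 psi), before any friction losses
```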

The upshot is that the most catastrophic blowups happened where they did because the design was poor, hitting the system with high pressure in a way that wasn’t obvious to onlookers. It also introduced unexpected sensitivities, like making it twitchy about where sprinklers were turned on: if you didn’t bleed off some pressure in the valley, the lower pipes were likely to burst; put too many sprinklers in the valley, and the ridges suffered.

Most complex systems are like this, of course. They are piecemeal, with pressure points in unexpected places that could be reduced or even eliminated by better design. In the case of repo markets, when everyone rushes into markets not designed for that sort of rush, weird things happen, like last night, when normally stable rates spiked in a way that, in essence, should never happen. (I analogized it to waking up in Los Angeles and briefly seeing New York in Pasadena, before it went back to being Pasadena again when everyone shouted at it.)

Maybe it was a technical issue; maybe it was a sudden surge of large financial services companies wanting to park assets in fear of a new Middle East conflict. Whatever. The effect was a massive pressure spike in repo markets, as you can see below.

Like I learned a long time ago on golf courses, the main thing worth knowing about complex systems under pressure they were never built for is that they either break right away, or they break later. But later is never never.

<><><><><><><><><><><><><>

Here are a few articles and papers worth reading:

Readings: Collusion, Coupling, Crime, etc.

Everyone’s a capital markets expert until they invest in something. Or, more accurately, until they invest in something and it doesn’t do what they thought it would do.

WeWork has been a Rorschach test for the punderati, letting people infer whatever they want to from capital markets. A bubble! Excess! Poor corporate governance! Not important! A one-off! Not a tech company! And so on. An expensive real estate company with an unusual structure and a seemingly self-serving CEO has become something onto which everyone gets to project their hopes, fears, and delusions of capital markets omniscience.

The following weekend tweet from one market researcher is instructive. The argument here is that WeWork’s performance will be irrelevant if it comes public and tanks. Recall: The company has been threatening to go public for weeks now, and it keeps cutting the valuation — $45b! $40b! $30b! $10b! — in a kind of weird one-man auction with itself. At the same time, it keeps introducing new management wrinkles to make it less like a private company pretending to be public, and more like a company almost doing its best to pretend it almost wants the things that go along with being public, like oversight, independent boards, limited self-dealing, and nuisances like that.

Will a bad WeWork IPO have zero impact? Well, there are at least three ways it could have an impact, so let’s see if we can strike those off, one at a time:

  1. Investors see other companies just like WeWork in public or private markets, so they use its problems to revalue those companies, which makes their investors sad and less rich.
  2. WeWork’s backers need the money from this investment going public, so a lower valuation makes them cut back on some things or sell other things, which hurts public or private markets.
  3. WeWork’s backers’ backers decide not to back WeWork’s backers in future, which means less money flowing into private and public markets.

The first argument is eminently plausible, as annoying as it might seem. But that should come as no surprise. After all, the whole reason the WeWork silliness is at least mildly entertaining is how anomalous and weird it is. If there were lots of WeWork-alikes it wouldn’t be nearly as much fun to talk about WeWork, so saying it isn’t like other companies is the whole point. So, yes, it’s unlikely a bombed WeWork IPO hurts other companies’ IPO prospects — unless, of course, they are expensive, money-losing, venture-backed real estate companies run by CEOs not entirely convinced they want to be fully and completely public.

More interesting arguments flow from 2) and 3) above. And these are what most people who haven’t had the crap beaten out of them by capital markets doing ridiculous things miss. Like I like to say here, everything causes everything. This is true in tightly coupled systems, like markets.

Here’s a “for instance”. Many professional investors leverage up: they borrow money to use for one purpose, using as collateral money they expect to receive from prior investing. If that collateral turns out to be less than expected, or later in coming than expected, unhappy things happen. Said investor might have to sell other holdings to raise money, or they may have to slow their pace of investing in other things. None of these are obvious and headline-making, like market crashes, IPO windows slamming shut, and all the things that make for nice headlines. It’s more mysterious, a sort of, in physics terms, spooky action at a distance that turns out to be neither spooky nor at much of a distance.

So, are there signs of WeWork-related spooky action at a distance, given how much money the company’s major investors have tied up in it, and what’s happening to its valuation? Of course, as the following snippet shows. This is just one example, but SoftBank is seeing less appetite from future investors, which will trickle through to its own investing activities, potentially driving down valuations in non-WeWork companies, and its pace and size of investing.

Everything causes everything, especially in capital markets, even — and perhaps especially — if it doesn’t look like it.

<><><><><><><><><><><><><><><><><><><><><><><><>

Here are some articles and papers worth reading:

Readings: Free Email, Oligopolies, Cascades, Perfumes, etc.

[This is a long post, for which I’m sort of but not really sorry. I started trying to figure something out for my own purposes, and that turned into this, as happens.]

Why isn’t email free-er? To most observers it might seem free, but it isn’t. The fees you pay at some of the largest bulk delivery companies to send out emails to a paltry 5,000 people a day are hefty, pushing $1,000 a year. And they soar from there into many (many) thousands of dollars for larger numbers of contacts and messages.

How is it that something so technically straightforward, with so little marginal cost, and so well understood, continues to cost so much? Why hasn’t competition driven the price of this (seeming) commodity business to zero-ish?

There are at least three reasons one might posit:

1. It isn’t as cheap as it looks

Maybe providing email services isn’t actually all that cheap. Maybe there are hidden costs that people don’t realize, and those keep costs high. I think of it kind of like the following graph. 

The trouble is that it’s not obvious what the causes of that Mysterious Price Difference might be. The main costs in running an email server — or a cluster of servers — are the software, the hardware, the storage, and the bandwidth. All of these have declined for decades, and continue to decline faster than the year-over-year growth in email traffic or email accounts. It’s hard to imagine another cost not captured in the above four factors, but that’s about all you’re left with on the cost front.

2. It isn’t as easy as it looks 

While having email transit from A to B, even via a host of other mail servers, might seem technically straightforward, it isn’t. Specifically, the real complexity in getting email from place A to place B isn’t the A to B part. No, it’s making sure that email that should go from A to B does do that, and that email that shouldn’t — we often call this sort of email “spam” — doesn’t. Differentiating spam from ham is usually thought of as non-trivial, and that creates an advantage for companies that are good at it. And getting good at it requires you to have billions of emails to work with — a corpus, in tech speak — so it’s hard to enter the market, etc.
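
To make the corpus point concrete, here is a toy naive Bayes spam filter in Python. It is a sketch of the general technique, nothing more: the six-message “corpus” below is made up, and real providers train on billions of messages, which is precisely the moat:

```python
# A toy naive Bayes spam/ham classifier trained on a made-up corpus.
from collections import Counter
import math

ham = ["meeting notes attached", "lunch on friday", "draft of the paper"]
spam = ["free money now", "claim your free prize now", "money prize inside"]

def word_counts(docs):
    counts = Counter()
    for d in docs:
        counts.update(d.split())
    return counts

ham_counts, spam_counts = word_counts(ham), word_counts(spam)
vocab = set(ham_counts) | set(spam_counts)

def score(message, counts, n_docs):
    """Unnormalized log-probability of the message under one class."""
    total = sum(counts.values())
    s = math.log(n_docs)  # class prior, unnormalized
    for w in message.split():
        # Laplace smoothing so unseen words don't zero out the product
        s += math.log((counts[w] + 1) / (total + len(vocab)))
    return s

def is_spam(message):
    return score(message, spam_counts, len(spam)) > score(message, ham_counts, len(ham))

print(is_spam("free prize money"))       # True
print(is_spam("notes for the meeting"))  # False
```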

Email isn’t as easy as it looks, dude.

If you don’t believe me, just tell your favorite techie that you are thinking of setting up your own email server. You will get the look of pity, disgust, and horror that orthopedic surgeons normally reserve for people thinking of, say, resurfacing their own hip in the bathroom.

Granted, setting up an email server isn’t easy, but it also isn’t hard. Or at least it’s not hard in the usual sense of hard, like some NP-hard problem or the halting problem, or why the square root of -9 is equal to 3i, rather than not being equal to anything at all. There are many perfectly straightforward guides to putting an email server in the cloud somewhere, or on a Raspberry Pi, and there are even appliances you can purchase with pre-built email servers on them. The tricky part about running an email server has mostly to do with maintaining it, filtering spam, and convincing other email hosting companies (and other email servers) that you’re not such a bad guy, so they accept (or transfer) your email.

Putting maintenance aside, which is too often treated as harder than it is, the first problem, spam identification, isn’t as hard as it once was. Open-source software like SpamAssassin (when properly configured) does so well at this — something like 97% effectiveness in one study I saw — that you’d be hard-pressed to call detecting spam from textual or email-envelope cues a good example of “not as easy as it looks”. The second problem shouldn’t be that hard, but turns out to be trickier, so I return to it in the next section.

3. It’s a natural monopoly/oligopoly

While no-one argues this, it’s worth putting it out there: Maybe email provision is, despite its commodity appearance, a natural monopoly. After all, the big three email hosting companies (Google/Yahoo/Hotmail) have something like 70% of the market; the big three bulk email providers (SendGrid/MailChimp/Amazon SES) probably account for only a little less of the total message market (the data is bad, so it’s hard to know, but SendGrid alone sends more than 40 billion emails a month).

Why might this be a natural oligopoly? In large part because of the way the industry uses longevity as a proxy for reputation, which in turn drives your credibility score as an email service provider. Most new email servers are, well, new, and they don’t start off with a neutral reputation, as you might expect, but with a negative one. It takes very little to make that worse, but an active effort from recipients to make it better: people must seek out your messages in spam and tell email hosts that your message is actually not spam. Most people don’t do that, so new email servers start with a crappy default reputation and go nowhere good from there. The odds are stacked even worse against new providers the more “successful” they are, as it becomes more likely their reputation becomes that of a bulk provider. (This, in part, explains the endless lawsuits involving Spamhaus, an email host rating outfit, and various bulk email providers, who think the former mostly protects incumbents.)
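
You can see a slice of that reputation plumbing yourself with a DNS blocklist lookup. Here is a sketch against Spamhaus’s public zen zone; the IP below is from a documentation range, not a known spammer, and note that Spamhaus may not answer queries relayed through the big public resolvers:

```python
# Ask a DNS blocklist whether a sending IP is listed. The convention:
# query <reversed-octets>.<zone>; an answer (usually 127.0.0.x) means
# "listed", while NXDOMAIN means "not listed".
import socket

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)
        return True
    except socket.gaierror:  # NXDOMAIN: not listed
        return False

print(is_listed("192.0.2.1"))  # 192.0.2.0/24 is reserved for documentation
```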

So, to answer my original question, why isn’t email cheaper, the answer is that it should be, and perhaps could be, but the industry has gone down a path that rewards incumbents for their incumbency, and makes it very difficult for new scale providers to enter the market, especially if they want to compete on price and volume. 

<><><><><><><><><><><><><><><><><><><><>

Here are some more articles and papers worth reading:

Readings: Narratives, Aging, Football Manager, etc.

The quality of ideas seems to play a minor role in mass movement leadership. What counts is the arrogant gesture, the complete disregard of the opinion of others, the singlehanded defiance of the world.

Eric Hoffer, The True Believer: Thoughts on the Nature of Mass Movements

Yale economist Bob Shiller’s new book “Narrative Economics” is weird. While he is a wide-ranging fellow, having wandered across real estate, markets, stocks, the good society, subprime, bubbles, and so on, he has mostly been the guy telling the (economic) truth about new things that people are excited about. In his new book, however, Shiller is the guy getting excited about an old thing people used to be excited about, but now mostly don’t talk about in nice company.

To be fair, being “viral” — the topic of Shiller’s new book — was ruined for me a long time ago. Specifically, I blame Hotmail, whose email tagline, added to all its users’ emails without their consent, was, venture-famously, “Get your free Email at Hotmail”, with the word “Hotmail” linked to the hotmail domain for easy signup. This was creepy, of course. Turning private emails into marketing messages is dodgy, but no-one cared, because it worked, and we were young and naive — and so it made Hotmail viral, got it bought by Microsoft, thus turning venture firm Draper Fisher Jurvetson into a well-known venture capital fund, and making Tim Draper and Steve Jurvetson into household names, at least in certain West Coast circles. Even if only for those last two reasons, being “viral” has a lot to answer for.

Shiller doesn’t talk about Hotmail in his new book — and for that, I thank him. I would buy more books if I knew for sure they didn’t talk about Hotmail. What he is interested in, however, is how things become viral, and what impact that has on the broader economy. Why, for example, did bitcoin become such a phenomenon? Shiller argues that one of the causal factors was the underlying story. Mysterious founder! Replacing old currencies!

I’m troubled by this. I like the book, and I take Shiller’s point — the story makes a big difference — but an economics over-reliant on narrative is the economics of what statistician Andrew Gelman calls “story time”. Too often analysis falls down this rabbit hole, like a newspaper story that opens with, say, two examples of something horrible but compelling that has happened, and then marches off into a connected series of what-ifs and what-abouts, none of which is made plausible by the opening anecdotes. Narratives are, in a word, dangerous.

Does that mean we shouldn’t care why people believe the things they do? Not at all. Much more important, however, is how people convince others to believe things, often through implicit and explicit networks of information and misinformation. Much has been written on this topic, including the seminal work of Eric Hoffer on why masses come to believe things, and what the consequences are.

It is also instructive to model beliefs, especially how people come to believe things that stand only a passing chance of being true. I like to do this using modeling apps, like NetLogo, which is oodles of fun.

You can see how I modeled viral theorizing in the above settings captured from NetLogo. I won’t go into it in depth, but the gist is that viral theorizing — narratives — in my model is tied to education level, how dissatisfied people are, how credulous people are, and how easy it is for them to communicate. Increase their ability to communicate, lower their education, make them more credulous, etc., and they fall for the first narrative that comes along. Do the opposite, and narratives have a harder time.
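
If you want to play with the same idea without NetLogo, here is a minimal Python analogue of that dynamic. To be clear, this is not the NetLogo model itself, and every parameter value below is invented for illustration:

```python
# A toy belief-contagion model: each round, every believer tries to
# convert a few randomly reached agents; conversion is more likely for
# credulous, less-educated agents.
import random

random.seed(1)
N, ROUNDS, REACH = 500, 30, 4  # population, time steps, contacts per believer per round

agents = [{"education": random.random(),  # higher -> more resistant
           "credulity": random.random(),  # higher -> easier to convert
           "believer": False} for _ in range(N)]
agents[0]["believer"] = True  # patient zero for the narrative

for _ in range(ROUNDS):
    for a in agents:
        if not a["believer"]:
            continue
        for b in random.sample(agents, REACH):  # communication reach
            if random.random() < b["credulity"] * (1 - b["education"]):
                b["believer"] = True

share = sum(a["believer"] for a in agents) / N
print(f"{share:.0%} of agents now believe the narrative")
```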

The above is a screenshot of how the model finished. You can watch the full video here. In short, while people were resilient, and relatively highly educated, the amount of communication made it inevitable that many people fell for the narrative, even if only because so many people around them — literally or virtually — had already fallen for it. Narratives have eaten almost half the population.

<><><><><><><><><><><><><><><><>

Here are some articles and papers worth reading:

Readings: Causality, Stock Tweets, Nutrition, etc.

Something did happen to me somewhere that robbed me of confidence and courage and left me with a fear of discovery and change and a positive dread of everything unknown that may occur.

Joseph Heller, Something Happened

Tzara: Causality is no longer fashionable owing to the war.
Carr: How illogical, since the war itself had causes. I forget what they were, but it was all in the papers at the time. Something about brave little Belgium, wasn’t it?
Tzara: Was it? I thought it was Serbia…
Carr: Brave little Serbia…? No, I don’t think so. The newspapers would never have risked calling the British public to arms without a proper regard for succinct alliteration.

Tom Stoppard, Travesties

I’m reasonably sure I have no idea why most things happen. I don’t let that prevent me from doing things, of course, but as time goes by I mostly assume that I’m traveling in a bubble of useful and convenient coincidences, and I hope that it doesn’t end right away, or at least that I don’t notice when it ends. This is why I’m terrified of words like “because”: I have little idea why anything happens, even the things whose causes I think I know.

Financial markets are good teachers of this sort of thing. Having spent decades around markets, I learned that every time I thought I had a pretty good idea why a few things happened, that coincided neatly with my discovery that I had no idea why those things happened. This has been true across debt markets, equity markets, macroeconomics, microeconomics, and so on. (I have had a similar experience in healthcare, where my main discovery as I go ever-deeper is how little we know, how often what we think we know is wrong, how often medical reversals happen, and how often better tests lead to more incidentaloma-induced unnecessary treatments, not better solutions.)

I reached the point, not long ago, where, to a first approximation, I began assuming that everything causes everything. This helpful shorthand allows me to nod sagely any time, for example, some new food is shown to be carcinogenic, or some piece of news causes an obscure economic indicator to fluctuate. Everything causes everything, I unhelpfully remind myself.

Like most people, however, I still get excited when I discover someone confidently making causal claims in domains that I think I once knew something about but discovered that I didn’t. MAYBE THEY KNOW SOMETHING, I think. It’s one of the reasons people watch financial television, read financial papers, subscribe to investment letters, and so on. They know that these people don’t know — hey, if they did, they’d be doing something else that paid better, amirite? — but MAYBE THEY KNOW SOMETHING.

Way back in my equity analyst days I first ran into Barron’s magazine. It was this weird newspaper-cum-magazine that littered the trading desk, and that I had never seen before. And it was full of people who knew things. There were stock picks, economic forecasts, and roundtables of investor-y people making predictions in front of irreverent but respectful Barron’s staffers. Sadly, I fairly quickly discovered that, while well-intentioned, these were not MAYBE THEY KNOW SOMETHING. Picks often went south, trends ended as they were written about, and so on.

To Barron’s credit, this never really stopped their compulsive causality detection. They still do that sort of thing, as if nothing has changed, as if markets aren’t quasi-efficient, etc. I was reminded of that recently when I noticed the word “because” in a Barron’s tweet, and got hives. And then in another Barron’s tweet. And another. Data-happy fellow that I am — the word “because” frightens me, as I said above — I wanted to know: What happened? How did Barron’s become so sure of causality’s arrow? Could we all benefit? MAYBE THEY KNOW SOMETHING.

Here is a scraped list of the last two dozen or so Barron’s tweets containing the word “because” and the word “Dow”, one of the favorite objects of their because-ing.

Sadly, my initial optimism was crushed. Judging by these tweets, everything causes the Dow to fluctuate: trade talks, raised hopes, dashed hopes, tariffs, speaking, not speaking, waiting, not waiting — even threes, for reasons that I prefer not to know. I am back to EVERYTHING CAUSES EVERYTHING, but I reserve the right to now and then guiltily think that MAYBE THEY KNOW SOMETHING. Sure, not these people, but maybe … someone else.

<><><><><><><><><><><><><><>

Here are a few articles and papers worth reading:

Economics & Finance

Life Sciences

Readings: Doctors, DNA, Democracy, etc.

How many patients do doctors accidentally kill every year? This turns out to be a more difficult question to answer than you might expect. 

It might seem easy to answer this question. You need just three things, more or less ordered in time:

  1. A live patient
  2. A medical error
  3. A dead patient 

The problems start here, however. How do we know that the error killed the patient? Sure, sometimes it’s obvious — doctor gives patient wrong medication, patient dies, etc. — but most of the time it’s less clear than that. And even in the obvious case, if there was no way to know the medication was going to have that effect, is that a doctor-caused death? If we ran this experiment, ahem, on patients, say, 50,000 times to get statistical significance, would they all die? Inquiring minds want to know, even if it’s not an experiment we’re likely to run.

And how do we know the error was a preventable error anyway? Many times patients die as a result of cascades. Here is one case: “aspiration led to respiratory failure, acute renal failure, shock, and cardiac arrest”. The allegation is that the aspiration was preventable, so the resulting series of unfortunate events leading to a dead patient was also therefore preventable. But was the aspiration preventable? I suppose it depends on what they aspirated, when, and how quickly everything went badly afterward. If they ingested a sponge left in their mouth by a doctor, that’s bad; if they somehow choked on saliva during the night, that’s trickier.

Critics go on and on about this sort of thing, about poor data; about headline-hungry researchers; about devious doctors hiding errors; about the difficulties in post-death preventability assessment; about whether sick people (hey, they were in a hospital after all) would have died anyway; about our inability to run proper randomized experiments; and so on. The result is wide variation in estimates of how many patients are killed by medical errors every year, from 25,000 to 400,000, and pretty much every number in-between.

There can be no doubt that hospitals are nothing like the slaughterhouses they were in the 19th century and earlier. Back then the safest thing you could do, if sick, was stay as far away as possible and maybe die of something else. With no viable theory of infection, for example, or at least none that had anything to do with how infections actually happened, hospitals were petri dishes for post-operative bacteria, with instruments shared across patients, sterilization non-existent, etc.

But not all medical progress is toward safety. As Lindsey Fitzharris described in her wonderful “The Butchering Art”, the arrival, for example, of general anesthesia, while welcomed by patients who previously had to be strapped down so they could suffer through grisly procedures, didn’t initially have the desired effect. Instead, doctors, no longer under time pressure, began attempting riskier and more complex procedures, or being more exploratory during what should have been less risky procedures, causing death rates to initially increase somewhat.

As weird as this will seem, we have no idea how many patients doctors kill per year. We only know that the number is highly non-zero, that it’s higher than it should be, and that it’s unlikely ever to fall as far as it could, given the nature of risk, of uncertainty, and of causality.

<><><><><><><><><><>

Here are some articles and papers worth reading: