Readings: Free Email, Oligopolies, Cascades, Perfumes, etc.

[This is a long post, for which I’m sort of but not really sorry. I started trying to figure something out for my own purposes, and that turned into this, as happens.]

Why isn’t email free-er? To most observers it might seem free, but it isn’t. The monthly fees you pay at some of the largest bulk delivery companies to send out emails to a paltry 5,000 people a day are hefty, pushing $1,000 a year. And they soar from there into many (many) thousands of dollars for larger numbers of contacts and messages.

How is it that something so technically straightforward, with so little marginal cost, and so well understood, continues to cost so much? Why hasn’t competition driven the price of this (seeming) commodity business to zero-ish?

There are at least three reasons one might posit:

1. It isn’t as cheap as it looks

Maybe providing email services isn’t actually all that cheap. Maybe there are hidden costs people don’t realize they’re paying, and those keep prices high. I think of it kind of like the following graph.

The trouble is that it’s not obvious what the causes of that Mysterious Price Difference might be. The main costs in running an email server — or a cluster of servers — are the software, the hardware, the storage, and the bandwidth. All of these have declined for decades, and continue to decline faster than the year-over-year growth in email traffic or email accounts. It’s hard to imagine another cost not captured in the above four factors, but that’s about all you’re left with on the cost front.

2. It isn’t as easy as it looks 

While having email transit from A to B, even via a host of other mail servers, might seem technically straightforward, it isn’t. Specifically, the real complexity in getting email from place A to place B isn’t the A to B part. No, it’s making sure that email that should go from A to B does do that, and that email that shouldn’t — we often call this sort of email “spam” — doesn’t. Differentiating spam from ham is usually thought of as non-trivial, and that creates an advantage for companies that are good at it. And getting good at it requires you to have billions of emails to work with — a corpus, in tech speak — so it’s hard to enter the market, etc.

Email isn’t as easy as it looks, dude.

If you don’t believe me, just tell your favorite techie that you are thinking of setting up your own email server. You will get the look of pity, disgust, and horror that orthopedic surgeons normally reserve for people thinking of, say, resurfacing their own hip in the bathroom.

Granted, setting up an email server isn’t easy, but it also isn’t hard. Or at least it’s not hard in the usual sense of hard, like NP-hard problems or the halting problem, or why the square root of -9 is equal to 3i, rather than not being equal to anything at all, etc. There are many perfectly straightforward guides to putting an email server in the cloud somewhere, or on a Raspberry Pi, and there are even appliances you can purchase with pre-built email servers on them. The tricky part about running an email server has mostly to do with maintaining it, filtering spam, and convincing other email hosting companies (and other email servers) that you’re not such a bad guy, so they accept (or transfer) your email.

Putting maintenance aside, which is too often treated as harder than it is, the first problem, spam identification, isn’t as hard as it once was. Open source software like SpamAssassin (when properly configured) does so well at this — something like 97% effectiveness in one study I saw — that you’d be hard-pressed to call detecting spam from textual or email envelope cues a good example of “not as easy as it looks”. The second problem shouldn’t be that hard, but turns out to be trickier, so I return to it in the next section.
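For a sense of how tractable the first problem has become, here is a minimal sketch of the kind of text-based classification modern filters do, assuming scikit-learn and a made-up toy corpus; real filters train on millions of labeled messages and use envelope features too:

```python
# Minimal naive Bayes spam classifier sketch (assumes scikit-learn).
# The tiny corpus is illustrative; real filters train on millions of
# labeled messages plus envelope features, not just body text.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "Win a free iPhone now, click here",
    "Lowest prices on meds, limited time offer",
    "Lunch tomorrow? Also see attached notes",
    "Quarterly report draft for your review",
]
labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()              # bag-of-words features
X = vectorizer.fit_transform(messages)
clf = MultinomialNB().fit(X, labels)

test = vectorizer.transform(["Click here for a free offer"])
print(clf.predict(test))                    # -> ['spam']
```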

3. It’s a natural monopoly/oligopoly

While no-one argues this, it’s worth putting it out there: Maybe email provision is, despite its commodity appearance, a natural monopoly. After all, the big three email hosting companies (Google/Yahoo/Hotmail) have something like 70% of the market; the big three bulk email providers (SendGrid/MailChimp/Amazon SES) probably account for only a little less of the total message market (data is bad, so it’s hard to know, but SendGrid alone sends more than 40 billion emails a month).

Why might this be a natural oligopoly? In large part because of the way the industry uses longevity as a proxy for reputation, which in turn drives your credibility score as an email service provider. Most new email servers are, well, new, and they don’t start off with a neutral reputation, as you might expect, but with a negative one. It takes very little to make that worse, but an active effort from recipients to make it better. People must seek out your messages in spam and tell email hosts that your message is actually not spam. Most people don’t do that, so email servers start with a crappy default reputation and go nowhere good from there. The odds are stacked even worse against new providers, and the more “successful” you are, the worse the odds get, as it gets more likely your reputation becomes that of a bulk provider. (This, in part, explains the endless lawsuits involving Spamhaus, an email host rating outfit, and various bulk email providers, who think the former mostly protects incumbents.)
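To see how lopsided those reputation dynamics are, consider a toy simulation of the mechanism just described; every parameter here is invented for illustration, not drawn from any real reputation system:

```python
# Toy model of new-sender reputation (all parameters are invented).
# Complaints are passive and relatively common; "not spam" rescues
# require active recipient effort and are rare, so a new sender's
# score starts below neutral and drifts further down.
import random

random.seed(42)
reputation = -1.0                 # new senders start below neutral (0.0)
for day in range(365):
    sends = 1000
    complaints = sum(random.random() < 0.003 for _ in range(sends))
    rescues = sum(random.random() < 0.0003 for _ in range(sends))
    reputation += 0.1 * rescues - 0.1 * complaints

print(f"Reputation after a year: {reputation:.1f}")   # well below zero
```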

So, to answer my original question, why isn’t email cheaper, the answer is that it should be, and perhaps could be, but the industry has gone down a path that rewards incumbents for their incumbency, and makes it very difficult for new scale providers to enter the market, especially if they want to compete on price and volume. 

<><><><><><><><><><><><><><><><><><><><>

Here are some more articles and papers worth reading:

Readings: Narratives, Aging, Football Manager, etc.

The quality of ideas seems to play a minor role in mass movement leadership. What counts is the arrogant gesture, the complete disregard of the opinion of others, the singlehanded defiance of the world.

Eric Hoffer, The True Believer: Thoughts on the Nature of Mass Movements

Yale economist Bob Shiller’s new book “Narrative Economics” is weird. While he is a wide-ranging fellow, having wandered across real estate, markets, stocks, the good society, subprime, bubbles, and so on, he has mostly been the guy telling the (economic) truth about new things that people are excited about. In his new book, however, Shiller is the guy getting excited about an old thing people used to be excited about, but now mostly don’t talk about in nice company.

To be fair, being “viral” — the topic of Shiller’s new book — was ruined for me a long time ago. Specifically, I blame Hotmail, whose email tagline, added to all its users’ emails without their consent, was, venture-famously, “Get your free Email at Hotmail”, with the word “Hotmail” linked to the hotmail domain for easy signup. This was creepy, of course. Turning private emails into marketing messages is dodgy, but no-one cared, because it worked, and we were young and naive — and so it made Hotmail viral, got it bought by Microsoft, thus turning venture firm Draper Fisher Jurvetson into a well-known venture capital fund, and making Tim Draper and Steve Jurvetson into household names, at least in certain West Coast circles. Even if only for those last two reasons, being “viral” has a lot to answer for.

Shiller doesn’t talk about Hotmail in his new book — and for that, I thank him. I would buy more books if I knew for sure they didn’t talk about Hotmail. What he is interested in, however, is how things become viral, and what impact that has on the broader economy. Why, for example, did bitcoin become such a phenomenon? Shiller argues that one of the causal factors was the underlying story. Mysterious founder! Replacing old currencies!

I’m troubled by this. I like the book, and I take Shiller’s point — the story makes a big difference — but an economics over-reliant on narrative is the economics of what statistician Andrew Gelman calls “story time”. Too often analysis falls down this rabbit hole, like a newspaper story that opens with, say, two examples of something horrible but compelling that has happened, and then marches off into a connected series of what-ifs and what-abouts, none of which is made plausible by the opening anecdotes. Narratives are, in a word, dangerous.

Does that mean we shouldn’t care why people believe the things they do? Not at all. Much more important, however, is how people convince others to believe things, often through implicit and explicit networks of information and misinformation. Much has been written on this topic, including the seminal work of Eric Hoffer on why masses come to believe things, and what the consequences are.

It is also instructive to model beliefs, especially how people come to believe things that only stand a passing chance of being true. I like to do this using modeling apps, like NetLogo, which is oodles of fun.

You can see how I modeled viral theorizing in the NetLogo settings captured above. I won’t go into it in-depth, but the gist is that viral theorizing — narratives — in my model are tied to education level, how dissatisfied people are, how credulous people are, and how easy it is for them to communicate. Increase their ability to communicate, lower their education, make them more credulous, etc., and they fall for the first narrative that comes along. Do the opposite, and narratives have a harder time.

The above is a screenshot of how the model finished. You can watch the full video here. In short, while people were resilient, and relatively highly educated, the amount of communication made it inevitable that many people fell for the narrative, even if only because so many people around them — literally or virtually — had already fallen for it. Narratives have eaten almost half the population.
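NetLogo is its own language, but the gist of the model translates easily enough. Here is a stripped-down Python sketch of the same dynamic; the parameters and thresholds are illustrative stand-ins, not the actual NetLogo settings:

```python
# Stripped-down sketch of a narrative-contagion model. Education,
# credulity, dissatisfaction, and communication values are illustrative.
import random

random.seed(1)
N, STEPS, COMMUNICATION = 1000, 100, 0.3   # contact chance per step

agents = [{
    "education": random.random(),       # 0 = none, 1 = highly educated
    "credulity": random.random(),
    "dissatisfaction": random.random(),
    "believer": False,
} for _ in range(N)]
agents[0]["believer"] = True            # seed the narrative

for _ in range(STEPS):
    for a in agents:
        if a["believer"]:
            continue
        contact = random.choice(agents)
        if contact["believer"] and random.random() < COMMUNICATION:
            # Susceptibility rises with credulity and dissatisfaction,
            # and falls with education.
            s = ((a["credulity"] + a["dissatisfaction"]) / 2
                 - 0.5 * a["education"])
            if random.random() < s:
                a["believer"] = True

print(f"Believers: {sum(a['believer'] for a in agents)} / {N}")
```

Raise COMMUNICATION or lower education and the narrative sweeps the population; do the reverse and it fizzles.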

<><><><><><><><><><><><><><><><>

Here are some articles and papers worth reading:

Readings: Causality, Stock Tweets, Nutrition, etc.

Something did happen to me somewhere that robbed me of confidence and courage and left me with a fear of discovery and change and a positive dread of everything unknown that may occur.

Joseph Heller, Something Happened

Tzara: Causality is no longer fashionable owing to the war.
Carr: How illogical, since the war itself had causes. I forget what they were, but it was all in the papers at the time. Something about brave little Belgium, wasn’t it?
Tzara: Was it? I thought it was Serbia…
Carr: Brave little Serbia…? No, I don’t think so. The newspapers would never have risked calling the British public to arms without a proper regard for succinct alliteration.

Tom Stoppard, Travesties

I’m reasonably sure I have no idea why most things happen. I don’t let that prevent me from doing things, of course, but as time goes by I mostly assume that I’m traveling in a bubble of useful and convenient coincidences and I hope that it doesn’t end right away, or at least that I don’t notice when it ends. This is why I’m terrified of words like “because”: I have little idea why anything happens, even the things whose causes I think I know.

Financial markets are good teachers of this sort of thing. Having spent decades around markets, I learned that once I had a pretty good idea why a few things happened, that coincided neatly with my discovery that I had no idea why those things happened. This has been true across debt markets, equity markets, macroeconomics, microeconomics, and so on. (I have had a similar experience in healthcare, where my main discovery as I go ever-deeper is how little we know, how often what we think we know is wrong, how often medical reversals happen, and how often better tests lead to more incidentaloma-induced unnecessary treatments, not better solutions.)

I reached the point, not long ago, where, to a first approximation, I began assuming that everything causes everything. This helpful shorthand allows me to nod sagely any time, for example, some new food is shown to be carcinogenic, or some piece of news causes an obscure economic indicator to fluctuate. Everything causes everything, I unhelpfully remind myself.

Like most people, however, I still get excited when I discover someone confidently making causal claims in domains that I think I once knew something about but discovered that I didn’t. MAYBE THEY KNOW SOMETHING, I think. It’s one of the reasons people watch financial television, read financial papers, subscribe to investment letters, and so on. They know that these people don’t know — hey, if they did, they’d be doing something else that paid better, amirite? — but MAYBE THEY KNOW SOMETHING.

Way back in my equity analyst days I first ran into Barron’s magazine. It was this weird newspaper-cum-magazine that littered the trading desk, and that I had never seen before. And it was full of people who knew things. There were stock picks, economic forecasts, and roundtables of investor-y people making predictions in front of irreverent but respectful Barron’s staffers. Sadly, I fairly quickly discovered that, while well-intentioned, these were not MAYBE THEY KNOW SOMETHING. Picks often went south, trends ended as they were written about, and so on.

To Barron’s credit, this never really stopped their compulsive causality detection. They still do that sort of thing, as if nothing has changed, as if markets aren’t quasi-efficient, etc. I was reminded of that recently when I noticed the word “because” in a Barron’s tweet, and got hives. And then in another Barron’s tweet. And another. Data-happy fellow that I am — the word “because” frightens me, as I said above — I wanted to know: What happened? How did Barron’s become so sure of causality’s arrow? Could we all benefit? MAYBE THEY KNOW SOMETHING.

Here is a scraped list of the last two dozen or so Barron’s tweets containing the word “because” and the word “Dow”, one of the favorite objects of their because-ing.
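The scraping itself was nothing fancy. Once you have the tweets pulled down, the filter is a one-liner; here is a sketch, with invented sample rows standing in for the scraped data:

```python
# Filter already-scraped tweets for Barron's-style because-ing.
# The sample tweets below are invented placeholders; a real run would
# load the scraped file instead.
tweets = [
    "Dow gains 150 points because trade talks raised hopes",
    "Why the Dow fell: tariffs, again",
    "Dow slides because hopes were dashed",
]

def is_because_ing(text: str) -> bool:
    t = text.lower()
    return "because" in t and "dow" in t

for tweet in filter(is_because_ing, tweets):
    print(tweet)        # prints the first and third samples
```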

Sadly, my initial optimism was crushed. Judging by these tweets, everything causes the Dow to fluctuate: trade talks, raised hopes, dashed hopes, tariffs, speaking, not speaking, waiting, not waiting — even threes, for reasons that I prefer not to know. I am back to EVERYTHING CAUSES EVERYTHING, but I reserve the right to now and then guiltily think that MAYBE THEY KNOW SOMETHING. Sure, not these people, but maybe … someone else.

<><><><><><><><><><><><><><>

Here are a few articles and papers worth reading:

Economics & Finance

Life Sciences

Readings: Doctors, DNA, Democracy, etc.

How many patients do doctors accidentally kill every year? This turns out to be a more difficult question to answer than you might expect. 

It might seem easy to answer this question. You need just three things, more or less ordered in time:

  1. A live patient
  2. A medical error
  3. A dead patient 

The problems start here, however. How do we know that the error killed the patient? Sure, sometimes it’s obvious — doctor gives patient wrong medication, patient dies, etc. — but most of the time it’s less clear than that. But even in that case, if there was no way to know the medication was going to have that effect, is that a doctor-caused death? If we did this experiment, ahem, on patients, say, 50,000 times to get statistical significance, would they all die? Inquiring minds want to know, even if it’s not an experiment we’re likely to run.

And how do we know the error was a preventable error anyway? Many times patients die as a result of cascades. Here is one case: “aspiration led to respiratory failure, acute renal failure, shock, and cardiac arrest”. The allegation is that the aspiration was preventable, so the resulting series of unfortunate events leading to a dead patient was also therefore preventable. But was the aspiration preventable? I suppose it depends on what they aspirated, when, and how quickly everything went badly afterward. If they ingested a sponge left in their mouth by a doctor, that’s bad; if they somehow choked on saliva during the night, then that’s trickier.

Critics go on and on about this sort of thing, about poor data; about headline-hungry researchers; about devious doctors hiding errors; about the difficulties in post-death preventability assessment; about whether sick people (hey, they were in a hospital after all) would have died anyway; about our inability to run proper randomized experiments; and so on. The result is wide variation in estimates of how many patients are killed by medical errors every year, from 25,000 to 400,000, and pretty much every number in-between.

There can be no doubt that hospitals are nothing like the slaughterhouses they were in the 19th century and earlier. Back then the safest thing you could do, if sick, was stay as far away as possible and maybe die of something else. With no viable theory of infection, for example, or at least none that had anything to do with how infections actually happened, hospitals were petri dishes for post-operative bacteria, with instruments shared across patients, sterilization non-existent, etc.

But not all medical progress is toward safety. As Lindsey Fitzharris described in her wonderful “The Butchering Art”, the arrival, for example, of general anesthesia, while welcomed by patients who previously had to be strapped down so they could suffer through grisly procedures, didn’t initially have the desired effect. Instead, doctors, no longer under time pressure, began attempting riskier and more complex procedures, or being more exploratory during what should have been less risky procedures, causing death rates to initially increase somewhat.

As weird as this will seem, we have no idea how many patients doctors kill per year. We only know it’s highly non-zero, that it’s higher than it should be, and that it’s unlikely to ever fall as far as it could, given the nature of risk, of uncertainty, and of causality.

<><><><><><><><><><>

Here are some articles and papers worth reading:

Readings: Paradoxes, Barbers, Brexit, Groupon, etc.

How wonderful that we have met with a paradox. Now we have some hope of making progress.

Niels Bohr

The UK Brexit debates are a limitless source of the strange. Among my favorites this week was the claim that the Tory party might, in its ardor for a general election, turn against itself in a vote of no-confidence. Such a vote would take down the government, thus causing the government to then have to call an election, given its absence of confidence in itself.

Leaving aside whether this will happen — by which I mean, Please, please, please let it happen — the thing I appreciate most about this is its embrace of logical illogic. Governments don’t, however confidently, have votes of no-confidence in themselves. First, it means you’re no longer the government, which seems a bad idea, given that the main reason to be in government is to be in government. Second, and more fun, is that not having confidence in yourself forces an election that, one assumes, you think you will win, suggesting you have confidence in your lack of confidence about your confidence to confidently govern. Or something.

The crazy thing, practically speaking, is that this is mostly logical-ish. If, as a minority party, you can’t convince the other parties to vote against you, thus forcing a general election, you are forced to vote against yourself. It makes perfect sense, even if it might seem mad.

This is, of course, a paradox. And I love paradoxes, arguments that, despite a sensible premise and logical steps, produce contradictory or illogical conclusions. Much of scientific progress can be connected to paradoxes — which is, in large part, why “Well, that’s weird” is such a powerful observation when doing scientific research.

Much of the best comedy springs from paradoxes as well, which isn’t a coincidence.

But there are paradoxes and there are paradoxes, and analytic philosopher W. V. Quine argued that there are three kinds. The first kind Quine identified was a result that might seem nuts, but can be shown to be true anyway. There are many such paradoxes, but among the best known is the Monty Hall Problem, where a decision that seems a coin flip isn’t a coin flip. Quine called this a veridical paradox, where an absurd conclusion turns out to be true.
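If the Monty Hall result still feels like a coin flip, simulation is the quickest cure. Here is a minimal sketch:

```python
# Monte Carlo check of the Monty Hall problem: switching wins about
# 2/3 of the time, staying about 1/3 -- not the 50/50 intuition expects.
import random

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the pick nor the prize.
    opened = random.choice([d for d in doors if d not in (pick, prize)])
    if switch:
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == prize

trials = 100_000
for switch in (False, True):
    wins = sum(play(switch) for _ in range(trials))
    print(f"{'switch' if switch else 'stay'}: {wins / trials:.3f}")
# stay ~ 0.333, switch ~ 0.667
```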

A second kind of paradox Quine identified was where something that appears false turns out to actually be false. Granted, this might not seem like much of a paradox — more like something better described as “stupid” — but it can be. An example: There are many fairly compelling mathematical proofs that 1=2. These can be very convincing, even if they seem false, and we know they must be false, but it’s sometimes difficult to pin down why, exactly, they are false.
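The canonical version runs as follows; every step looks legal until you notice that the factor being cancelled, a - b, is zero, so the division step is forbidden:

```latex
\begin{aligned}
\text{Let } a &= b. \\
a^2 &= ab \\
a^2 - b^2 &= ab - b^2 \\
(a+b)(a-b) &= b(a-b) \\
a + b &= b \qquad \text{(invalid: we divided by } a - b = 0\text{)} \\
2b &= b \\
2 &= 1
\end{aligned}
```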

The third kind of paradox Quine identified was where you reach an internally contradictory result when applying proper reasoning. This usually involves statements that reference themselves, producing bizarre conclusions. The Barber Paradox is an example: The barber is the one who shaves all those, and those only, who do not shave themselves, so, does the barber shave himself? (There are entire books of this sort of thing, like those by Raymond Smullyan.)

So, which kind of paradox is the British no-confidence vote? To my way of thinking, it is one of those statements that circles back and eats itself, a logical paradox created by language that implies the sentence is saying something that it can’t, which makes it — drumroll — an antinomy. I’m doubtful, however, that this means we are making any progress. Sorry, Niels Bohr.

<><><><><><><><><><><><><><><>

A few articles and papers worth reading, most with a paradox theme:

Readings: Games, Brexit, qMRI, Fund Managers, etc.

While game theory is (at least within the academic community) largely considered a failure, that doesn’t mean it is without its uses. I was thinking of it this morning in reading a recent FT piece about misunderstandings between the European Union and Brexiteers in Britain.

Here is Simon Kuper:

Both the EU and the British government keep making the same mistake about each other, notes Douglas Webber of Insead business school, author of European Disintegration? (2016): each side thinks the other will cave to avoid an economically damaging no-deal Brexit.

In fact, says Webber, both sides regard short-term economics as secondary. Johnson’s government prioritises achieving Brexit. Europeans prioritise preserving the rules of the single market and standing by Ireland.

Source: How Europe views the Brexit endgame, Simon Kuper

This is, of course, a kind of prisoner’s dilemma, where both parties think that the other side is so worried about its counterpart defecting — embracing a no-deal Brexit — that the other side will “see sense” and negotiate in good faith toward a new post-Brexit arrangement.

Source: theeconomicsofgaming.wordpress.com/2017/04/09/the-prisoners-virtual-dilemma/

The trouble with this sort of symmetric game is not the symmetry, however. Among the useful things we have learned from game theory, in general, and the prisoner’s dilemma, in particular, especially from competitions based on game theory, is that it matters whether the game (rightly or wrongly) is seen as repeated or one-off. If a game is a one-off, then vengeful and retributive strategies can do surprisingly well. After all, who cares how much you maximize your own gains at another party’s expense if you never have to deal with them again? Defect away!

Despite appearances, however, Brexit is not a one-off game. While Britain admittedly won’t be Brexit-ing in future on a yearly basis, it will still have to deal with the EU constantly. Its trade, immigration, and political relationships are too strong and important. If anything, post-Brexit, it would be talking with the EU almost as much, albeit with a different focus and status. This is more like a repeated game, in prisoner’s dilemma terms, which might make one optimistic that the two sides will sort things out.

Surprisingly, perhaps, even in repeated games fairly vicious strategies can outperform. Famously, a strategy called “tit-for-tat” did very well in many competitions, where it followed the simple rule of doing whatever its counterpart last did. This, of course, could lead to catastrophic spirals if two such strategies played one another, a kind of mutual escalation where both sides “defect” into oblivion. (This has led to more nuanced versions, like where a strategy periodically tries to cooperate, but then goes back to defecting if the other side doesn’t switch strategies.)
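The mechanics are easy to reproduce. Here is a minimal sketch of tit-for-tat against an always-defect strategy, using the conventional payoff values (3/3 for mutual cooperation, 1/1 for mutual defection, 0 vs. 5 when one side is suckered); the tournament details varied, but the shape is the same:

```python
# Minimal iterated prisoner's dilemma: tit-for-tat vs. always-defect.
# PAYOFF maps a (row, column) move pair to each player's points.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def match(p1, p2, rounds=100):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(h2), p2(h1)      # each sees the other's history
        r1, r2 = PAYOFF[(m1, m2)]
        s1, s2 = s1 + r1, s2 + r2
        h1.append(m1); h2.append(m2)
    return s1, s2

print(match(tit_for_tat, always_defect))   # (99, 104): suckered once,
                                           # then mutual defection
```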

Where does this leave us? Both sides seem stuck on the idea of convincing the other of its respective commitment to the idea that the other doesn’t want a no-deal Brexit. But it also seems clear both think a no-deal Brexit wouldn’t be that bad, politically or economically. Further, both see some political capital in acrimonious discussion, as weird as that might seem. It leaves me, at least, with the nagging sense that both sides will defect — a no-deal Brexit — and things will get very messy indeed.

<><><><><><><><><><><><><><><><><><><><>

A few papers worth reading:

Advances in qMRI (quantitative MRI) are fascinating, but they have been held back somewhat by MRI’s sensitivity to water content, which, while regulated fairly closely, changes enough to make comparative tissue characterization difficult. A new approach does away with some of these problems, allowing doctors, for the first time, to track molecular changes in the brain over time, thus permitting a deeper understanding of aging-related changes. Important work.

While colorectal cancer incidence is no longer rising among older cohorts, it is, according to a new study, rising sharply among the young, which is a surprise. Some of the increase may have to do with more sensitive detection methods, but there is seemingly more going on than that, with an increase in young adults twice as rapid as that in older adults. Researchers speculate about a new source of large intestine carcinogenesis, but it’s cautionary and early work.

Collapse remains, historically speaking, an important source of flux in the global environment. A new study shows that the collapse of the Soviet Union resulted in a net emissions reduction of 7.61 gigatons of carbon dioxide equivalents from 1992 to 2011. By way of comparison, this is about one-quarter of the CO2 emissions from deforestation in Latin America over the same period.

Most of the attention paid to poor performance among active fund managers has been directed to how damn hard it is, not how the industry is structured. A new study takes a structural view, arguing, somewhat paradoxically, that the industry, despite declining in terms of managers and assets, still isn’t concentrated enough, that having too many fund managers reduces the incentive for outperformance.

Decades from now we will look back on this period and likely realize we were kids playing with information matches when it comes to the onslaught of often siloed info-flow overwhelming people, especially in politics. A new paper argues that this has created both tacit and explicit opportunities for what it calls “information gerrymandering”, essentially a way, based on connections and zealots, to prevent people from seeing information that might provide them with a different perspective. This is intriguing work with considerable explanatory power.

Finally, some quick hits:

Readings: Innovation, Robots, Inflammation, Inbreeding, Queuing, etc.

I think a lot about lines, and most of life is lines (queues, if you’re from the UK). They are everywhere: merging onto a freeway, on-hold phone queues, an overheating laptop, chatbots at online retailers, weekend dim sum, waiting for a ski lift to open on a powder morning, and so on. These are all lines, with many similar and important properties.

It is less well understood than it should be, but one of the most useful bodies of research of the last century is queuing theory, which allows us to engineer systems to deal with stuff showing up, stuff waiting for service, and then said stuff leaving. The “stuff”, of course, can be human, can be cars, can be packets on a network — it can be many things — but it is all often in queues. And like so many things at which we can throw algorithms, the cost of engineering a queue has collapsed in recent decades: we know more, and we can manage queues less expensively. To a first approximation, queues are cheap and everywhere.

An example will help. Most queues can be characterized via two parameters, arrival times and service times: the rate at which things show up in the queue, and the speed with which the queue is processed. Both of these parameters have distributions. The simplest version comes when people (let’s use that example) show up at a constant, predictable rate, and when service rates are also fixed. Most real-life scenarios aren’t like that, of course, and both arrival rates and service rates vary wildly, but they can be assigned distributions, like the exponential, that make the problem tractable.
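For the simplest random case, the M/M/1 queue of the textbooks (Poisson arrivals at rate λ, exponential service at rate μ, one server), the standard results already hint at the trouble below:

```latex
\rho = \frac{\lambda}{\mu}, \qquad
L_q = \frac{\rho^2}{1-\rho}, \qquad
W = \frac{1}{\mu - \lambda}
```

Here ρ is utilization, L_q the average queue length, and W the average time in the system. As λ approaches μ, ρ approaches 1 and both the queue and the wait blow up, which is exactly what the example below shows.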

So, here is a simple example. Both arrival and service times are exponentials, and I’ve set them equal. What’s interesting in this simple example is how queues build and dissipate, even though the arrival and service rates are equal. All it takes is a few complex cases, or a few extra people showing up — both of which are predictable given the underlying distribution — and suddenly people are waiting longer than expected.
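If you want to play along, here is a minimal sketch of that kind of single-server simulation, with both rates set to 1 as in the example:

```python
# Single-server queue with exponential interarrival and service times,
# arrival and service rates set equal (utilization = 1).
import random

random.seed(7)
RATE = 1.0                       # arrivals and service both at rate 1
arrival, server_free = 0.0, 0.0
waits = []
for _ in range(10_000):
    arrival += random.expovariate(RATE)       # next arrival time
    start = max(arrival, server_free)         # wait if server is busy
    waits.append(start - arrival)
    server_free = start + random.expovariate(RATE)

print(f"mean wait: {sum(waits) / len(waits):.1f}")
print(f"max wait:  {max(waits):.1f}")         # occasional huge spikes
```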

Things get much messier with no underlying parameter changes. In the following case, lines ballooned, even though the arrival rate and the service rate are still the same.

What’s interesting about queuing theory is how useful it is, and how easily you can model real-world situations in ways that make important problems kinda go away. But the problems haven’t really gone away. All it takes is a temporary change in the parameters to make everything go bananas — an accident on a freeway, a stuck process on a computer, a powder day, a market crash, etc. — and the queues run wild. There is a kind of hidden wildness in systems that disappears, until it doesn’t.

Look around you for queues. You will find them everywhere, and we are obsessed with making them more efficient. This has consequences: it takes only subtle changes to throw us into entirely new regimes.

<><><><><><><><><><><><><><><><><><><><>

A few papers worth thinking about:

The Geography of Unconventional Innovation

I have long argued that the primary benefit of doing startups in dense cities (to a point) is that it increases the likelihood of “collisions”, of people, by chance, running into other people. That can lead to an exchange of ideas, which provokes new ways of thinking, and sometimes turns into innovations. I am pleased to see this being borne out in a new study, which shows that overall innovation is flatter than is often modeled, but “atypical innovations” are more closely associated with dense urban areas where collisions foment them.

Psychological reactions to human versus robotic job replacement

People are funny about how they feel about robots and jobs. For example, they prefer that other people’s jobs aren’t replaced by robots, but they would prefer that their own job, if it gets eliminated, be replaced by robots rather than by humans. Why? The authors argue that “being replaced by machines, robots or software (versus other humans) is associated with reduced self-threat”. This is intriguing, and not at all what we usually think happens.

Association of Blood Marker of Inflammation in Late Adolescence With Premature Mortality

We are becoming increasingly aware of health problems tied to inflammation, and the problems apparently start even earlier than previously known. According to a new paper, erythrocyte sedimentation rates (how quickly red blood cells fall to the bottom of a test tube, a proxy for inflammation) measured in adolescence are highly predictive of death due to cancer and cardiovascular disease.

Emergences

A provocative, rich, and fascinating talk by Danny Hillis on how complexity emerges from simplicity, whether we are talking about life, technologies, or almost any other common system.

Other reading:

Live Through This: Courtney Love at 55
Financial analysis of transfer values in the top 5 football leagues
Extreme inbreeding in a European ancestry sample from the contemporary UK population

Readings: Cancer, statins, and CFOs

“Once for all, he accepts the stock of commonplaces, prejudices, fag-ends of ideas or simply empty words which chance has piled up within his mind, and with a boldness only explicable by his ingenuousness, is prepared to impose them everywhere”
― José Ortega y Gasset, The Revolt of the Masses

We live in strange times. As I write this, the UK is debating yet another Brexit-related motion while, in a kind of streaming seppuku, one of its own members crosses the floor and makes the current UK government largely a non-government. All the while, UK politicians continue to genuflect in the general direction of The Voter, promising that they are doing what The Voter wants, but no-one has any idea what The Voter wants, unless it’s what said politician thought in the first place. Sometimes we hear about Ordinary Constituents who send Letters too, but it’s not clear whether Ordinary Constituents and The Voter are fungible on a 1:1 basis. It is a remarkable moment, combining sophistry, arrogance, confusion, and populism.

Of course, it’s not just Brexit. There is a sense of fragility in much of current events, from politics, to economics, to the environment. Society feels as if it’s going through a Galloping Gertie phase, where under sustained pressure systems are finding harmonics they didn’t previously realize existed, and oscillating in increasingly destructive amplitudes.

<><><><><><><><><><><><><><><><><><><><><><><>

A few papers and articles worth reading: 

Variations in common diseases, hospital admissions, and deaths in middle-aged adults in 21 countries from five continents (PURE): a prospective cohort study

While cardiovascular disease retains its top spot worldwide with respect to killing humans, it is important to note (as the above study does) that it is no longer the top cause of death in higher-income countries. No, that position has been taken by cancer, which recently passed cardiovascular disease in those countries. This is an artifact of an aging society as much as anything else, but it is noteworthy.

Seeing is believing? Executives’ facial trustworthiness, auditor tenure, and audit fees

Some truly fascinating and bizarre research is coming out of the financial community lately as it exploits new sources of data. This paper is a case in point: researchers show that firms whose CFOs have more “trustworthy” faces — as judged by a machine learning algorithm — are charged 5.6% lower audit fees. Perhaps unsurprisingly, facial trustworthiness is shown to have no association with either financial reporting quality or litigation risk.

Cross-national evidence of a negativity bias in psychophysiological reactions to news

I am generally of the view that you should ignore news, perhaps checking in once a year or so, and even then maybe only quickly scanning news from twelve months before that. Too much recent news is irrelevant, inflammatory, unimportant, etc. This new study reminds us of that, showing how skewed coverage is toward negative news — despite there being a large and mostly untapped audience for positive news.

Do statins really work? Who benefits? Who has the power to cover up the side effects?

This is a punchy and spot-on critique of statins, their role in health, and their unanticipated consequences. It seems increasingly clear that such drugs are overprescribed, and that our cholesterol-lowering fixation is killing people, whether directly or indirectly.

Fully automated snow depth measurements from time-lapse images applying a convolutional neural network

I’ve long thought that computer vision is a much more important technology than it’s usually given credit for being. It is an example of a general-purpose technology, one that can absorb many other modes via which we learn about and understand the world in which we live. Combine computer vision, machine learning, and, okay, snow, and you’ve really got my attention.

Readings: Four papers, back from hiatus, etc.

Summer’s ending, or at least the kids-out-of-school part is, so this newsletter will get going again. Sorry about that — either the hiatus or the recommencement: your call — but things will recommence next week.

To give you a taste, here are a few papers I enjoyed reading this weekend. The next edition will go back to the usual preamble followed by a few interesting papers.

Hope you had a great summer.

<><><><><><><><><><><><><>

While a sheet of pluripotent stem cells engineered to become corneal cells hasn’t returned this Japanese woman’s vision to normal, it has arrested decline. It is only a month post-surgery, but this is fascinating and important work.

Fear-mongering about food has moved on to pet food, with expensive consequences. Pets, to a first approximation, are scavengers — this is how they survived and formed human attachments. Pretending otherwise is expensive silliness.

A seminal paper on tissue properties from a pioneer of biomedical engineering, YC Fung, who turned 100 this week. A lovely man, with whom I have had the good fortune of spending a little time.

Feedback from skin on the foot is highly specific to the region of the foot, turning feet and limbs into sensors, with important implications for walking and running.

A new algorithm seems able to extract tradable sentiment data from news feeds. While this is not new, most prior such models either found weak signals, no signal, or were so temporally unstable as to be worthless.

Readings: Cheating CEOs, Inflammation, Seedy hedge funds, etc.

PREAMBLE

Here are a few graphs that caught my eye this week.

HEALTHCARE & SPORTS

SCIENCE & TECHNOLOGY

FINANCE & ECONOMICS

BOOKS

The Best Books on Modern German History

MOVING PICTURES

The Bob Emergency: a study of athletes named Bob

THINGS I LIKED