
13 May 2020


Scott O

Just talked to my neighbor (fireman/EMT) about his CV19 test results. The test was negative, but his doctor told him he didn't think the tests had better than a 40% accuracy rate.
I have no idea where the doc got his percentage, but it sure doesn't inspire much confidence in public policy formulated on the basis of testing.
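For what it's worth, a single "accuracy" number conflates two very different error rates – how often the test misses real infections (sensitivity) and how often it false-alarms on the healthy (specificity). A back-of-the-envelope Bayes calculation shows why; every figure below is made up for illustration, not the doc's actual numbers:

```python
# Illustrative only: the sensitivity/specificity/prevalence figures below are
# assumptions for the sake of the arithmetic, not anyone's actual numbers.
def negative_predictive_value(sensitivity, specificity, prevalence):
    """P(truly uninfected | negative test), via Bayes' rule."""
    p_neg_given_infected = 1.0 - sensitivity      # false-negative rate
    p_neg_given_healthy = specificity             # true-negative rate
    p_neg = (p_neg_given_infected * prevalence
             + p_neg_given_healthy * (1.0 - prevalence))
    return p_neg_given_healthy * (1.0 - prevalence) / p_neg

# Suppose the swab catches only 70% of true infections (sensitivity 0.70),
# rarely false-alarms (specificity 0.95), and 5% of those tested are infected.
npv = negative_predictive_value(0.70, 0.95, 0.05)
print(f"Chance a negative result is truly negative: {npv:.1%}")
```

Even a fairly leaky test yields a negative predictive value above 98% at that low prevalence – so "40% accurate" by itself doesn't tell you whether a negative result can be trusted.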
Our small town is pretty much open for business except eat-in restaurants and bars. Very few wear masks although everyone seems to be very good about keeping a 6' distance from strangers. Vehicle traffic is about normal.


I like this conclusion:

"As described here, the data supporting the cost-effectiveness of respiratory virus testing are suggestive but far from conclusive. Additional studies are critically important to inform the decision-making of microbiology and virology laboratory medical directors, clinicians, and hospital administrators as they work together to implement respiratory virus testing algorithms that ensure quality, cost-effective, clinical care of patients with suspected respiratory virus infections."



On a related note, these guys did a nice job.


But I honestly can't see real world numbers reflected in the estimates. Are the coefficients inaccurate or are there other factors not being addressed?

George Rebane

Scenes 738am – Good find Mr scenes; they did indeed do a good job. But I think they missed a couple of points and crossed a bridge too far. As you point out, they should have, but didn't, put any real-world numbers on their graphs, which would have made them more credible – they just showed the general functional responses to variations in their spate of input parameters. And they also totally blew the definition of ‘herd immunity’, claiming that there is a specific relationship between the S, E, and R cohorts at which it kicks in, which allows them to draw a herd immunity threshold line. (S = susceptible, E = exposed, I = infected, R = recovered.) There is no such specific level; their definition should have included the critical factor – a probability continuum of the contagious encountering, and then infecting, the E cohort diffused within the R cohort.

The bridge too far involves their gratuitous inclusion of sliders representing some arbitrary levels of ‘social distancing’, ‘washing hands’, ‘wearing masks’, … that then affect the reproduction rate of the undefined contagious I cohort fraction that is still out and about. To quantify such effects in an epidemic spread model ranges between impossible (very, very difficult) and humorous. (Reminds me of the same games played by purveyors of general circulation models selling preventable manmade climate change.) But overall, they tried to convey the quantitative direction the SEIR cohorts take as you vary these factors. And that’s perhaps why they didn’t dare put units on the time and number axes.
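For those who want to play with the squigglies themselves, here is a minimal SEIR sketch (forward-Euler integration, made-up parameters – these are not Epidyne's actual equations, just the generic family under discussion). The ‘social distancing’ sliders in such demos amount to nothing more than scaling the contact rate beta:

```python
# A minimal SEIR sketch (forward-Euler), with made-up parameters -- not any
# particular published model, just the generic family discussed above.
def seir(beta, sigma=1/5.2, gamma=1/10, n=1.0, i0=1e-4, days=300, dt=0.1):
    """Integrate SEIR cohorts; returns (S, E, I, R, peak I) as fractions."""
    s, e, i, r = n - i0, 0.0, i0, 0.0
    peak_i = i
    for _ in range(int(days / dt)):
        new_exposed   = beta * s * i / n * dt    # S -> E (contacts)
        new_infected  = sigma * e * dt           # E -> I (incubation)
        new_recovered = gamma * i * dt           # I -> R (recovery)
        s -= new_exposed
        e += new_exposed - new_infected
        i += new_infected - new_recovered
        r += new_recovered
        peak_i = max(peak_i, i)
    return s, e, i, r, peak_i

# The 'sliders' amount to scaling beta (effective contacts per day):
for beta in (0.30, 0.20, 0.12):                  # less and less mixing
    s, e, i, r, peak = seir(beta)
    print(f"beta={beta:.2f}  R0~{beta/0.1:.1f}  "
          f"peak infected={peak:.1%}  attack rate so far={r:.1%}")
```

Note that, as with the demo in question, only the qualitative direction of the curves is meaningful here – the parameter values carry no real-world units.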

BTW, you may already have picked it up, but my Epidyne belongs to the SEIR family of epidemic models. This came about by happenstance: I had finished the Epidyne structure design and was already writing the detailed equations when I thought it might be good to look at some existing literature on such models. I was concerned that I might be overlooking some important approach or factor that would render Epidyne useless. To my joy, I found that I had the correct/accepted architecture – great minds and all that 😉 – and my squigglies were at least as good as ‘theirs’. Some of the established models from academe also incorporated effects that were questionable and/or literally impossible to quantify from real-world data, which then demanded liberal use of brown numbers to create the curves that needed ‘flattening’.

Bill Tozer

RE Update: Dr. Jay Bhattacharya

If you've got the time....




re: GeorgeR@10:09AM

Perhaps we should pay epidemiologists based on their prediction accuracy. They can hang out with the guy on the radio who 'explains' why the stock market went up by 100 points.

I've poked through a lot of papers in the last couple of months and there's a remarkable lack of real-world data and, worse, a near-total lack of backtesting. It's almost like you can't quite trust these guys.
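A backtest doesn't have to be fancy, either: score the model's old forecasts against what actually happened. A minimal sketch, with both series invented purely for illustration:

```python
# A minimal backtest: score a model's past forecasts against observations.
# Both series below are made-up numbers, purely for illustration.
def mean_abs_pct_error(predicted, actual):
    """Average of |predicted - actual| / actual over the series."""
    return sum(abs(p - a) / a for p, a in zip(predicted, actual)) / len(actual)

forecast_deaths = [120, 240, 480, 900, 1500]   # hypothetical model output
actual_deaths   = [100, 180, 260, 330, 380]    # hypothetical observations

mape = mean_abs_pct_error(forecast_deaths, actual_deaths)
print(f"mean absolute % error: {mape:.0%}")
```

An exponential-looking forecast run against flattening actuals scores over 100% error here – exactly the kind of report card the papers never seem to include.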

Oh well, people also think Paul Krugman has something worthwhile to say. It's a mystery of life.

Bill Tozer

We could open up again and forget the whole thing.


Bill Tozer

Dr. Rebane, you got data all wrong. Wrong, wrong, wrong. Faceplant. Now go back to the drawing board and come back....this time with feeeling.


“Today, data science is a form of power. It has been used to expose injustice, improve health outcomes, and topple governments. But it has also been used to discriminate, police, and surveil. This potential for good, on the one hand, and harm, on the other, makes it essential to ask: Data science by whom? Data science for whom? Data science with whose interests in mind? The narratives around big data and data science are overwhelmingly white, male, and techno-heroic. In Data Feminism, Catherine D’Ignazio and Lauren Klein present a new way of thinking about data science and data ethics—one that is informed by intersectional feminist thought.

Illustrating data feminism in action, D’Ignazio and Klein show how challenges to the male/female binary can help challenge other hierarchical (and empirically wrong) classification systems. They explain how, for example, an understanding of emotion can expand our ideas about effective data visualization, and how the concept of invisible labor can expose the significant human efforts required by our automated systems. And they show why the data never, ever “speak for themselves.”

Is there anything intersectionality can’t do? I’m wondering why the Marvel folks haven’t come up with a film superhero called “Intersectional Person.”

More seriously, the issue of selection bias in social science research is entirely legitimate, but from what I can tell data feminism mostly reduces to the complaint that there’s not enough social science rooted in the standard left-social justice narrative. And then there are the parts that arguably reinforce the old gender stereotypes that lead to things like math-phobic Barbies. In addition to critiques of data manipulation, there are some curious complaints about how data looks – even the colors used in charts and such.




I used to worry my pretty little head over whether mathematics was over-expressed in anthropomorphisms, but decided it was over my pay grade and that it'll be a while before I find a dolphin to argue the problem with.

In terms of general bullshitism in 'intersectional math', it's chock-a-block full, of course. Plenty o' texts, theses, and articles for unread magazines. A hodge-podge of honest attempts to understand teaching methods mixed with outright nonsense. I assume it simply reflects a huge oversupply of undertalented people valiantly keeping their iron rice bowls full. I'm afraid our social science practitioners will simply keep it up until we are all forced to find something serious to do or they simply get hit by a bus.

An article I ran into:

George Rebane

Scenes 746am - Useful article, but hopefully RR readers have been educated in all this by a lot less verbiage and more rigorous math and stark graphics (above included). A couple of examples below -



re: GR@9:55AM

Mostly intended as a pointer to Wittgenstein’s Ruler: Unless you have confidence in the ruler’s reliability, if you use a ruler to measure a table, you may also be using the table to measure the ruler.

"Unless the source of a statement has extremely high qualifications, the statement will be more revealing of the author than the information intended by him. This applies to matters of judgment. According to Wittgenstein’s ruler: Unless you have confidence in the ruler’s reliability, if you use a ruler to measure a table you may also be using the table to measure the ruler. The less you trust the ruler’s reliability, the more information you are getting about the ruler and the less about the table." -NNT

Both aphorisms and expressions are where you find them.

George Rebane

scenes 1022am - Agreed. And that's what all my blather and squigglies concerning error propagation are about.
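For the record, the simplest of those squigglies is the first-order 'delta method': for a ratio of independent quantities, the relative variances of the inputs add. The numbers below are illustrative only – not the actual Epidyne error model:

```python
# First-order ("delta method") error propagation: the relative variance of a
# ratio is roughly the sum of the inputs' relative variances. Illustrative
# numbers only -- not any particular model's actual error budget.
import math

def ratio_with_error(x, sx, y, sy):
    """Return q = x/y and its 1-sigma error, assuming independent x and y."""
    q = x / y
    sq = abs(q) * math.sqrt((sx / x) ** 2 + (sy / y) ** 2)
    return q, sq

# Example: R0 = beta/gamma with 20% uncertainty in beta, 30% in gamma.
r0, sr0 = ratio_with_error(0.30, 0.06, 0.10, 0.03)
print(f"R0 = {r0:.1f} +/- {sr0:.2f}")
```

Even modest input uncertainty fans out fast – 20% and 30% on the inputs yields better than a third of R0's value as its error band, which is why unitless epidemic curves deserve wide ribbons, not thin lines.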

Bill Tozer

Since I read so many “journalists” from the upper Midwest, I decided to check out how they see the world from their perch.

1) “Minnesota reached a new daily high of 32 deaths attributed to COVID-19 yesterday, bringing the total to 832. (The previous daily high was 30.) Twenty-eight of the 32 new deaths — 87.5 percent — occurred among residents of long-term care facilities, bringing that total to 663. Eighty-two percent of all deaths attributed to the disease in Minnesota to date have taken place in nursing homes and other assisted-living facilities.

The age breakdown of the new decedents followed the usual pattern. Twelve were in their 90’s, 12 were in their 80’s, six were in their 70’s, one was in his 60’s and one in his 50’s.

At yesterday’s daily press briefing, Health Commissioner Jan Malcolm declared that progress was being made in the Walz administration’s “5-point battle plan” (that’s what they call it) to address the nursing home crisis, as I’ve been calling it since early in this series. Nevertheless, we haven’t yet seen progress in the numbers or even asked what took the authorities so long to notice this particular problem.”


“The bottom line, in any event, is that there is no basis for concluding that the nature and extent of the shutdown orders in these states have had any impact on the states’ COVID fatality rates.

The other obvious fact about these numbers is that the death rates are astonishingly low. Our lives have been turned upside down by the Wuhan virus and governments’ reactions to it, and yet you need to go to the fourth or fifth decimal place to find fatalities. The Black Death, it isn’t.”

