
Review Drift

Guest post by C. Thi Nguyen

Here are three stories about one thing. The first story is about social media and donuts.

Before COVID destroyed travel, I kept having this same experience. I’d be in some new city. I’d do a little online research and hear about some new donut shop that everybody was raving about. I’d go, wait in the enormous line, see all the stickers about winning awards, and admire the gorgeous donuts in joyous anticipation. And then I’d eat the donut — and it would turn out to be some horrible waxy cardboard thing. Each bite was, like, some kind of pasty mouth-death. And then I’d sit there on the curb, with my sad half-eaten donut, watching the line of people out the door, all chattering about their excitement at finally getting one of these very famous donuts, everybody carefully taking donut pics the whole while.

And weirdly, totally different cities would give me the same kind of bad donut. These donuts all had a similar kind of visual flair: they were vividly colored; and they were big, impressively structural affairs — like little sculptures in the medium of donut. But they all had that weird, tasteless, over-waxy chaw. My theory: these donuts were being optimized, not for deliciousness, but for Instagram pop. And that optimization can involve certain trade-offs. You need a dough that’s optimized for structural stability, and a frosting that’s optimized for intense color.

Right now, Instagram is where food goes viral. And I’m not saying that the visual quality is unimportant. Appearance is part of the aesthetics of food. But what makes food unique, in the aesthetic realm, is the eating part: the taste, the smell, and the texture. And I’m not saying it’s impossible to make a beautiful yet delicious donut. I’ve had some, rarely. But Instagram seems to be enabling the rise of donuts made primarily for the eye. When Instagram becomes a primary medium for recommending food, you get this weird kind of aesthetic capture. Instagram will reward those restaurateurs who are willing to trade away taste and texture in exchange for more visual pop.

The second story is about clothes. I’ve been buying my clothes online these last few years, and I keep having the same experience. I buy something from a relatively new company with a lot of Internet presence and very good reviews. The clothes arrive. They look awesome; on the first wear, they’re incredibly comfortable. Then, pretty quickly, they start falling apart. They stretch out of shape, they start pilling, they fall apart at the seams.

Some of it is surely the current economics of fast fashion. And some of it is that a lot of these companies are spending more on advertising than on clothing quality. But that doesn’t explain the barrage of good reviews on uncurated sites. What I’m starting to suspect is that, in the online shopping world, a lot of companies are starting to specifically target the moment of review.

The vast majority of online reviews are submitted close to the moment of purchase, after only a few wears. So online reviews mostly capture short-term, and not long-term, data. So new companies are heavily incentivized to optimize for short-term satisfaction. A stiff piece of clothing that slowly breaks in to become comfortable and lasts forever won’t get great reviews. A piece of clothing that has been acid-washed to the peak of softness will review quite well — and then fall apart a few months later. (You can find a similar effect on Twitter. Twitter Likes are usually recorded at the moment of first reading — so simple ideas we already agree with are more likely to get Likes, but long-burn difficult ideas that change our minds, eventually, get lost.)

Call this phenomenon review drift. Review drift happens whenever the context of review differs from the context of use. In the current online shopping environment, good online reviews drive sales. So companies are incentivized to make products for the context of review. If the context of review is typically short-term, then companies are incentivized to optimize for short-term satisfaction, even at the cost of long-term quality. (A related phenomenon is purchase drift: when the context of purchase differs from the context of use.)
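To see the mechanism in miniature, here is a toy simulation of review drift. It is only an illustrative sketch: the two imaginary garments, their satisfaction curves, and every number in it are invented for this post, not drawn from any real data.

    # Toy sketch of review drift: reviews happen early, use happens late.
    # All products, curves, and numbers here are invented for illustration.

    def satisfaction(product, wear):
        """Satisfaction on the nth wear, on a 0-to-5 scale."""
        if product == "acid-washed":
            # Soft out of the box, then degrades with every wear.
            return max(0.0, 5.0 - 0.15 * wear)
        else:
            # "Stiff": starts mediocre, breaks in, then lasts.
            return min(5.0, 2.5 + 0.1 * wear)

    review_wears = [1, 2, 3]    # most reviews are written here...
    use_wears = range(1, 101)   # ...but the garment actually lives here

    for product in ("acid-washed", "stiff"):
        review_score = sum(satisfaction(product, w) for w in review_wears) / len(review_wears)
        use_score = sum(satisfaction(product, w) for w in use_wears) / len(use_wears)
        print(product, round(review_score, 1), round(use_score, 1))

    # acid-washed: ~4.7 in the review context, ~0.8 over actual use.
    # stiff:       ~2.7 in the review context, ~4.7 over actual use.

The review context doesn’t just exaggerate small differences; it inverts the ranking, which is all it takes for the market to reward the worse garment.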

A third story: Seventeen years ago, I was backpacking and camping almost every weekend. In quick succession, I had three horrifying moments with some cheap folding knives. One of those left me cut to the bone. So I had an “As God is my witness, I’ll never use crappy knives again!” moment. I decided to ask some park rangers for recommendations. The next three park rangers I met all turned out to be carrying variations on the same pocket-knife, from the same company. And I read some reviews online praising these same knives to the stars, as lifelong companions. So I bought one.

Here is a picture of my own personal Spyderco Delica 4, which has served me incredibly well for 17 years.

It is basically indestructible. I dropped it off a 100-foot cliff once and it was fine. It also has a thousand subtle design features that took me years to really appreciate. One of the interesting things about Spyderco knives: they look fucking weird. I think we have a particular Platonic image of a knife — military, stabby, tough — and Spydercos don’t look like that. (A common complaint among bro-type dudes who want to look all tactical tough: “Spyderco looks like wounded pelicans.”) But all those weird organic design swoops are amazing in the hand. Spyderco’s ergonomic design genius is well-known in the online knife appreciation community. The classic Spyderco designs just meld into your hand; they become fully intuitive, natural extensions of you. But it took me years to fully appreciate it. When you first see and hold one of these knives, especially the lightest and grippiest plastic-handled ones, they just feel cheap and weird.

A couple months ago, somebody stole my other favorite pocket-knife out of my car. It was mid-pandemic, and my brain was starved for sensation, so I had no other choice but to go looking at updated knife reviews. And what I found was that, between my last knife-buying venture, 17 years ago, and the current one, a vast sprawling network of knife reviewers had arisen, mostly clustered around certain YouTube channels. An entire online community had sprung up, dedicated to constantly reviewing and collecting knives. And this community had developed an obsession with a feature called “fidget-quality”: how fun it is just to sit and open and close the knife, over and over again.

A folding knife has a quality called “action”. The way that it opens and closes — the speed, the feel of the flick, the satisfying hefty click of the locking mechanism — can all be aestheticized. There are even love-odes to which knives sound good — which ring like some kind of hyper-masculinized bell when they snap open and closed. And I’ll give it to you: good action is sweet. I’m totally up for aestheticizing anything and everything. But — and some of the Internet Knife Community[1] have started to notice this — some of these very expensive, wonderfully fidgety knives don’t actually cut that well. Or some of them have handles with really clean, pretty metals — which photograph beautifully for Instagram, but which also turn out to be really, really slippery.

Here is a theory: knife sales right now are driven by the Internet Knife Community. The Internet Knife Community is driven by Instagram, but most heavily by YouTube knife reviewers — like the knife-review superstar Nick Shabazz. Nick is a great, fun, lively reviewer. But, to get popular, a reviewer has to put out a lot of regular content — like multiple knife reviews a week. But somebody who is sitting in their room, making multiple knife-review videos a week, isn’t out in the woods for years with the same knife. So what they’re doing, to review the knife, is cutting the few cardboard boxes they might have around, and then fidgeting with it — and paying lots of attention to the fidget-quality. The context of review exaggerates the importance of fidget-quality, compared to the importance of, you know, cutting stuff.

A similar thing seems to be happening in the boardgame community. Boardgames are, one might hope, made for hundreds and thousands of plays. One of the reasons boardgames are such a good value proposition is that you can slowly discover the depths of the game over years of repeat play. But the community is now getting driven by popular reviewers, often on YouTube, and getting popular requires putting out frequent and regular content — multiple reviews a week. Which means the most dominant voices, which drive the market, are playing each game a couple of times and then reviewing. And that drives the market in a particular direction. It drives it away from deep rich games that take a few plays to wrap your mind around. The current landscape of popular reviewers seems to be driving the market towards games which are immediately comprehensible, fun for a handful of plays, and then collapse into boring sameness.

So: the structure of the online environment right now seems to demand that superstar reviewers put up frequent updates. Which means reviewing lots of products in rapid succession. But if you’re reviewing the kind of thing that is subtle, that takes a long time to really get to know, then the context of review has drifted really far from the context of use. So we’re evolving this perverse ecosystem centered around influential reviewers — where, to become influential, their review-context must be really far from the standard use-context.

Review drift isn’t new. Every age has its own mediums for review, and every review medium has its strengths and weaknesses. An earlier era was dominated by written reviews, which have their own limitations. (A lot of the time, I suspect that much art that’s been critically revered in the past has gotten that status, in part, because it’s the kind of thing that’s easy to write about. Like, clever symbolic intellectual stuff is easier for academics and clever art critics to write about than subtle, spare, moody stuff.)

The new wrinkle, I think, is the degree to which many modern contexts concentrate review drift and homogenize it. This is starting to become apparent in all kinds of technological circumstances. A lot of modern technologies create concentrated gateways, which channel the majority of the public’s attention through a single portal. So much of our collective attention is set by how, exactly, Google’s search engine algorithms work, and how they rank the results. So much of our collective purchasing is set by how exactly Amazon’s algorithm works. And one thing we know is: the more a single system becomes dominant, and the more legible its internal mechanics are, the easier it is for interested parties to game that system and to hyper-optimize. There are whole industries that exist around optimizing your Google search ranking and your Amazon product ranking.

So: there’s always going to be review drift; reviews can never be perfect. But if, at least, review drift happens for different reasons, and in a plurality of directions, then it’ll be a hard target for a big company to optimize for. But if there is some kind of systematic, structural feature that encourages the same kind of review drift across a whole reviewing community, then we create a clearer system for companies to target. And this can happen when a whole body of reviews gets filtered through a particular portal — like Instagram or YouTube — which homogenizes the patterns by which reviewers get famous, or strongly filters the kinds of reviews that get recorded. The more uniform the review drift, the more legible the target for the optimizers.

This is part of a larger pattern we’re starting to see more and more. We can call it the phenomenon of squashed evaluations. When an entire rich form of activity gets evaluated through one tiny window, then the importance of whatever’s in-frame gets over-exaggerated — and whatever’s outside of that frame gets swamped. So the same general kind of pressure that’s giving us high schools laser-focused on standardized tests, pre-meds obsessed with their GPAs, and journalists obsessed with click counts, is also giving us beautiful tasteless donuts and sexy flickable knives that aren’t good at cutting.

-------------------------------

[1] This is their actual name for themselves.


The Lab-Leak Trap


After months of getting very little coverage, the lab-leak theory for the origins of COVID-19—which holds that the virus emerged from a research setting—is now a source of endless chatter. Vanity Fair has a new, 12,000-word investigative feature on the subject, while lab-leak op-eds continue their exponential spread across the pages of The Washington Post, The Wall Street Journal, and The New York Times.

A careful look at all the ways that the pandemic might have started matters for the future: It should help us figure out the safest regulations, and the most important goals, for research on emerging pathogens. But the sudden rush of coverage hasn’t always made the lab-leak theory or its implications any easier to grasp. Much has done the opposite, in fact, ensnaring readers in semantic quibbles, side points, and distractions.

To focus better on what really matters, watch out for these traps:

The No-Evidence Trap

It would be confusing—merely confusing—if no one could agree on the strength of the evidence for a laboratory accident. But certain pundits have suggested that we’re still completely in the dark. Is there really any evidence at all, they ask, of anything?


“What’s missing from all this reexamination and soul-searching is a fundamental fact,” wrote Michael Hiltzik in the Los Angeles Times last week. “There is no evidence—not a smidgen—for the claim that COVID-19 originated in a laboratory.” The Columbia University microbiologist Vincent Racaniello said the same on his podcast, This Week in Virology: “It’s just crazy, because there’s no evidence for a lab leak; there’s plenty of evidence for the natural origin.” Others claim the exact opposite. There is “still zero evidence to support the theory that the virus emerged from nature,” Marc Thiessen announced in The Washington Post, and “mounting signs that it did not.”

This stance isn’t just confusing; it’s absurd: The “absence of evidence” here is not actually absence of evidence. Although more careful commentators have pointed to a lack of “direct evidence” in favor of either COVID-origins scenario, or of “hard,” “credible,” or “slam dunk” evidence, thick dossiers of indirect, soft, bank-shot evidence do exist on both sides, and merit close consideration.

Even circumstantial facts have value, after all. How many of us worried that the COVID-19 vaccine trials delivered “not a smidgen” of evidence, just because their conclusions—inferred from the fact that fewer people who received the shots got very sick—were indirect? Not to be a legal pedant, but even the “smoking gun” often said to be missing from the lab-leak debates would, if taken literally, count as circumstantial evidence.

A somewhat less tendentious claim, also a dime a dozen these past few weeks, holds that nothing more, or nothing new, has emerged about COVID-19’s origins since the start of the pandemic (and so any recent shift in attitude is probably unfounded). “The evidence hasn’t changed since spring of 2020,” Adam Rogers wrote in Wired. “Scientists don’t want to ignore the ‘lab leak’ theory, despite no new evidence,” read a headline in The New York Times. I made this point myself in The Atlantic, noting that “the lab-leak hypothesis is gaining currency even as the facts remain the same.” But this is false, and I was wrong—just another victim of the trap.

The evidence for a laboratory origin, like the evidence for a natural origin, may be circumstantial, and it may be weak in each specific. But it’s growing. The science journalist Rowan Jacobsen lays out the timeline in a recent piece for Newsweek. Last May, a resolute group of internet randos turned up new details about the mine in Mojiang where researchers from Wuhan had found the closest-known relative of the SARS-CoV-2 coronavirus. In June, they showed that this virus had been looked at in recent years; in August, they found that more than half a dozen other viral relatives had been sampled from the same mine (but their details were never published); and last month, they showed that the Wuhan lab’s prior descriptions of what went down at that mine were misleading. We also saw new claims, earlier this year, that three workers at the Wuhan Institute of Virology had been hospitalized for respiratory ailments in November 2019; then, just last week, Vanity Fair’s Katherine Eban added that these workers had, in fact, been running experiments on coronavirus samples. Could all of this be bullshit? Sure—but at the very least, it’s bullshit freshly dropped into the pasture.

Keep in mind that circumstantial evidence for a natural origin has been growing too. A paper published just this week provided evidence that live wild mammals—and potential viral hosts—were sold by the thousands across Wuhan’s wet markets in the months leading up to the pandemic.

The mere existence of evidence for the lab-leak hypothesis, or of new evidence since last spring, is not up for debate. When the experts tell you otherwise, take it as hyperbole: This evidence is so lame, it might as well not exist. Then try to understand why they think it’s lame.

The Mad-Scientist Trap

The lab-leak theory isn’t singular; rather, it’s a catchall for a continuum of possible scenarios, ranging from the mundane to the diabolical. At one end, a researcher from the Wuhan Institute of Virology might have gone out to sample bat guano, become infected with a novel pathogen while in the field, and then seeded it back home in a crowded city. Or maybe researchers brought a specimen of a wild-bat virus back into the lab without becoming infected, only to set it free via someone’s clothes or through a leaky sewage pipe.

The microbiologists Michael Imperiale and David Relman, both former members of the National Science Advisory Board for Biosecurity, told me several weeks ago that lab-leak scenarios of this rather more innocent variety—involving the collection and accidental release of a naturally occurring pathogen—were the most probable of all the non-natural possibilities. Yet the most prominent opinionating on this topic has clustered at the other end of the continuum, at first around the dark-side theory of a bioweapon gone awry, and then around the idea that a harmless virus had been deliberately transformed into SARS-CoV-2 (and released by accident) after a reckless series of tabletop experiments.

That’s another pitfall in this debate: a tendency to focus only on the most disturbing and improbable versions of the lab-leak hypothesis, and to downplay the rest. The mad-scientist trap sprays a mist across the facts by presuming scientific motivations; it posits that researchers could have caused the pandemic only if they’d been trying to create infectious pathogens.

Efforts to enhance a virus in a lab, usually described as “gain of function” studies, have engendered hyperbolic speculation. On May 11, Senator Rand Paul pressed Anthony Fauci on the creation of “man-made super-virus[es]” in Wuhan; the Fox News host Tucker Carlson has spent recent weeks harping on Fauci’s alleged support for “the grotesque and dangerous experiments that appear to have made COVID possible”; and The Washington Post’s Josh Rogin has dubbed the Italian American scientist and administrator the “godfather of gain-of-function research.”

The implied threat of this research hangs over all the most serious reporting too. Last week’s feature in Vanity Fair alleged, ominously, that the three Wuhan Institute of Virology staffers who are said to have fallen ill in November 2019 were “connected with gain-of-function research” in particular. The article also describes, at length, how some within the Trump administration came to believe last year that efforts to investigate the pandemic’s origins were being stymied by federal experts who “had either received or approved funding for gain-of-function research.” As one former State Department official told the magazine, “There is a huge gain-of-function bureaucracy” operating inside the government.


The problem is, depending on how one chooses to define gain-of-function research, it could well include most virological research, some forms of vaccine development, and a healthy portion of biology writ large. Anytime a scientist tries to probe or tweak the function of a gene, she could be working in this vein. In that sense, yes, the National Institutes of Health is a “huge gain-of-function bureaucracy.” So what?

One might assume that the single-minded fear of gain-of-function research is peculiar to conservatives—sitting, as it does, at the shadowy convergence of Big Government and Critical Frankenstein Studies. But the urge to blame scientific hubris for scientific problems, as opposed to farcical incompetence, seems to have long-standing, bipartisan support.

This trap was last sprung seven years ago. In March 2014, a CDC lab accidentally shipped the highly virulent H5N1 bird flu to a poultry lab at the Department of Agriculture. Then in June, another CDC lab sent off samples of the bacteria that cause anthrax without properly inactivating them—and 75 government employees were potentially exposed. A few weeks after that, scientists at NIH stumbled across six vials of smallpox in a forgotten cardboard box. Regulators had every reason to believe that accidental laboratory leaks of naturally occurring pathogens were more common (and more likely) than genetic-engineering studies gone awry. But when confronted with all this evidence that scientists were slipping on banana peels, the government looked at other risks instead: It announced a pause on gain-of-function research.

We’re in the process of defaulting to the same idea—that better biosafety might be achieved, and the next pandemic headed off, if we prevent or slow the development of genetically engineered bananas. That might only help ensure that no one thinks too hard about the odds of slapstick-fueled catastrophe. We may yet find, with more investigation, that the Wuhan Institute of Virology, and other places like it around the world, is positively strewn with banana peels. If that’s the case, our first and most important goal should be to clean them up. In the meantime, don’t be fooled by false antonyms. The opposite of nature isn’t hubris, and if SARS-CoV-2 turns out not to have a “natural” origin, that doesn’t have to mean someone made it in a lab.

The Culture-War Trap

Another befogging tendency shifts attention from the source of the pandemic, and how to prevent future outbreaks, to what we have or haven’t said about this topic in the past. “I don’t care, and I’ve never cared” whether the lab-leak hypothesis is true, Jonathan Chait wrote on May 26. Rather, he’s concerned about “the vulnerabilities in the mainstream- and liberal-media ecosystem” that were revealed by its dismissal. The physician Vinay Prasad made a similar argument about the standards for discourse within the research community. “Liberalism in science—the ability to hold and discuss a broad range of views—is a newborn bird,” he wrote last week on MedPage Today. “Lab leak is just a salient reminder of how vulnerable that bird is—and that’s the real lesson we need to learn.”

Watch out for this distracting shift in tense, from the questions that remain unanswered to the questions that were never asked. The latter has become another proxy for the solipsistic fight over “cancel culture,” with its fearful and familiar symmetry of grievances. In this toxic framing, everyone can be a brave purveyor of dangerous truths. The virologist and lab-leak skeptic Angela Rasmussen has been threatened with violence and sexual assault; the former CDC director and lab-leak believer Robert Redfield has received death threats from fellow scientists. Robert Garry says that he and other virus experts who believe in a natural origin for COVID-19 are like “that one man in 12 Angry Men”; David Relman says that scientists like him, who are worried about biosafety, need “a safe space” to express themselves.

We can stipulate that some or many scientists made knee-jerk assumptions about the pandemic’s origins, and that some or many journalists did the same. We can also stipulate that plausible theories were written off last year as claptrap. But recent reporting on the disease-origin investigations within the Trump administration shows that this reflex might have been adaptive. According to Eban’s article in Vanity Fair, a “small group” within the State Department who initially suspected a laboratory origin were warned off by other bureaucrats, and told not to open a “Pandora’s Box.” They continued anyway, operating on a quasi-freelance basis, again sifting through old intelligence reports. Anyone with a decent memory will hear echoes of the search for weapons of mass destruction in Iraq, which was also pursued by a small group within the government, outside normal channels.


Eban reports that on January 7, this team set up a three-hour meeting to present the strongest-possible case for the lab-leak theory, and then to probe it for holes. Among the experts they brought in that day was Steven Quay, a doctor and an entrepreneur who had self-published a book called Stay Safe: A Physician’s Guide to Survive Coronavirus. Among the recommendations given in that book: Brush your teeth regularly and wear a “salt-enhanced” mask. Quay wasn’t there to talk about oral hygiene or protective condiments, but to present the results of his Bayesian analysis of the pandemic’s origins, which landed on a 99.8 percent probability that the outbreak had begun in a laboratory. The problem was, Quay had never done this kind of analysis before. (There’s “a first time for everything,” he told the group.)

On Sunday, he doubled down with an op-ed in The Wall Street Journal, which made no mention of the foregoing study but did assert, on highly suspect grounds, that the evidence for a lab leak is “firmly based in science,” and more compelling than the alternative. Other lab-leak experts are far more credible, and the State Department’s investigation may well have been onto something. But ample circumstantial evidence—already present in 2020, and no less pertinent today—suggests that the Trump administration was full of it, and the scientific-journalistic establishment was reasonable to doubt it.

Regardless, any vow of silence from the mainstream press or public figures, justified or not, has since been lifted. Liberal discourse around the lab-leak hypothesis has been restored, at least for now, so let’s stay focused on the task ahead, of putting off the next pandemic for as long as we can.


The Aducanumab Approval


As the world knows, the FDA approved Biogen’s anti-amyloid antibody today, surely the first marketed drug whose Phase III trial was stopped for futility. I think this is one of the worst FDA decisions I have ever seen, because – like the advisory committee that reviewed the application, and like the FDA’s own statisticians – I don’t believe that Biogen really demonstrated efficacy. No problem apparently. The agency seems to have approved it based on its demonstrated ability to clear beta-amyloid, and is asking Biogen to run a confirmatory trial to show efficacy.

They will be absolutely overjoyed to do that, of course, because the whole time that’s going on, they will be selling the first drug that (in theory) targets the etiology of Alzheimer’s. The backed-up demand is going to be gigantic, and Biogen is going to make enormous amounts of money. They have nine years, as it turns out, to get this trial done, and I feel safe in predicting that it’s going to take alllll niiiiine loooong sloooow years to get this done. Why shouldn’t it? The company certainly showed no interest whatsoever, not even a twitch, in running a confirmatory trial before this, so why should they hop to running one while the drug is selling? I continue to think that the odds are quite good (and certainly unacceptably high) that the drug will turn out in the end to have no real effect on Alzheimer’s patients at all. I’ve been dreading a decision like this for a long time.

So the FDA has, for expediency’s sake, bought into the amyloid hypothesis although every single attempt to translate that into a beneficial clinical effect has failed. I really, really don’t like the precedent that this sets: what doesn’t get approved, now? I suppose only things that definitely cause harm, because otherwise why not just ask for the same deal that Biogen got, and go out and prove efficacy while you turn a profit? I know that this is just the libertarian turn that many people have wished for, but I’m still not sanguine about it. I’m going to quote myself, because my opinion hasn’t changed a bit:

What would be so bad about moving to an “efficacy not required” regulatory regime? I think that it’s a flight from scientific evidence, which is the only thing we’ve got. Otherwise, everything starts to look like the “dietary supplement” industry, and what a mess that is. Here, you drop the efficacy requirement and I’ll develop grape juice for Dread Disease X. Not just plain grape juice – grape juice concentrate capsules. Mechanism? It’s the bioflavonoids. Probably. I think that they’re antioxidants, among other things. Lots of things. I can show safety in the clinic, too, so you have to approve my grape juice gel caps: I have a mechanism by which they might work (you can’t prove I’m wrong, can you?), I can show they’re safe, and you’ve eliminated the requirement that I prove that they actually do anything useful. Off to market! The patients who unfortunately suffer from Dread Disease X will, I’m sure, pay a lot for something that might help them. Don’t they have a right to try my antioxidants? There’s nothing else like them on the market, you know.

Go ahead and laugh – I mean, yeah, I’m pretty amusing, but I don’t keep grinning as long as I might when this topic comes up. The aducanumab approval, to me, is just a tiny step off of the radio-ad “Not intended to treat, cure, or modify any disease” memory supplements. It’s true that those ads always have to work in the line about how “These statements have not been evaluated by the FDA“, and at least Biogen doesn’t have to say that. Their statements have indeed been evaluated by the FDA, and the FDA has decided to punt on all the important ones and just let Alzheimer’s patients, their families, their insurance companies, and the taxpayers (through Medicare and the VA) pay for it all while we figure it out somewhere around the year 2030 or so.

Here’s Matthew Herper at Stat, talking about that exact FDA rule-change problem, and here’s Damian Garde and Adam Feuerstein trying (perhaps in vain) to estimate just how much money Biogen could be reaping from all this. That’s partly because, as Andrew Joseph notes, the agency applied no restriction at all to what patients can get the drug. Steve Usdin is gloomy about this at BioCentury, and wonders if this is going to open the door to many more such approvals in neuroscience and beyond. Zach Brennan’s not happy at Endpoints, either, and neither is their Bioregnum column. In general, the more you know about drug development and drug approvals, the more likely it seems that this decision came as an unwelcome surprise. That probably goes for plenty of people inside the FDA, come to think of it.

This was a disgraceful decision, and we’re all going to be dealing with the consequences of it for years to come.


ProPublica: We Should Tax Wealth, Not Income


ProPublica reports today that someone at the IRS has leaked a "vast trove" of private data on the tax returns of thousands of the nation’s wealthiest people. This is disturbing, but what's done is done. ProPublica acknowledges that publishing this information is a violation of privacy but says they eventually concluded that "the public interest in knowing this information at this pivotal moment outweighs that legitimate concern."

It must be a bombshell, then. Let's take a look:

To capture the financial reality of the richest Americans, ProPublica undertook an analysis that has never been done before. We compared how much in taxes the 25 richest Americans paid each year to how much Forbes estimated their wealth grew in that same time period. We’re going to call this their true tax rate.

....In 2007, Jeff Bezos, then a multibillionaire and now the world’s richest man, did not pay a penny in federal income taxes....His tax avoidance is even more striking if you examine 2006 to 2018, a period for which ProPublica has complete data. Bezos’ wealth increased by $127 billion, according to Forbes, but he reported a total of $6.5 billion in income. The $1.4 billion he paid in personal federal taxes is a massive number — yet it amounts to a 1.1% true tax rate on the rise in his fortune.

This is the gist of the entire piece. ProPublica simply asserts that taxes as a share of wealth represent the "true" tax rate even though the United States taxes income, not wealth. Bezos, for example, paid 22% of his income in federal income taxes. This is close to average for the top 1%. The same is true to varying degrees for the other billionaires ProPublica profiles. So what exactly have we learned?

  • Wealth is not income.
  • The US taxes income, not wealth.
  • The income of the super rich varies greatly from year to year.
  • The super rich make use of tax avoidance strategies unavailable to wage slave schlubs like you and me.

I don't know about you, but I already knew all this. The ProPublica piece is, essentially, not a revelation but merely a long op-ed arguing that we should tax wealth, not income. That's fine, but does it justify publishing private tax returns? I'm skeptical. If you aren't, keep in mind that using the IRS for political purposes has an ugly history that by no means always benefits progressives. It's not something we should take lightly.
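For concreteness, the two competing “rates” are just different denominators under the same tax bill. Here is the arithmetic, using the Bezos figures quoted above (the 22% cited earlier presumably comes from a fuller calculation; this back-of-the-envelope version lands close to it):

    # Same tax bill, two denominators (Bezos figures as quoted above, in billions).
    taxes_paid  = 1.4    # federal personal taxes paid, 2006-2018
    income      = 6.5    # total income reported over the same period
    wealth_gain = 127.0  # growth in Forbes-estimated wealth

    print(f"conventional income-tax rate: {taxes_paid / income:.1%}")       # 21.5%
    print(f"ProPublica 'true tax rate':   {taxes_paid / wealth_gain:.1%}")  # 1.1%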

istoner:
Either I've misunderstood something important or Kevin is being obtuse in the manner of a curmudgeon. Annual change in wealth is a commonsense definition of annual income, whereas "income" as a technical tax term excludes most of the income streams of the megarich. If Scrooge McDuck's vault of gold coins increases by 100 billion gold coins in a decade, it seems fair to me to say his income was 100 billion gold coins in that decade, regardless of the route those coins took to get there.

Totally agreed, though, that this leak is a serious breach of professional and ethical duty, and my initial impression is that the info leaked is nowhere near important or scandalous enough to justify leaking (or publishing) data that was entrusted to the government on the promise of privacy.
fwaltman:
If I bought a house 20 years ago for 100K and today it is worth 1M, is that "income" -- should I be taxed on that? Most would say not until I sold the house.

Anatomy of a Hoax


Photo courtesy of Penguin Young Readers.

Eric Carle, the author and illustrator of more than seventy books that captivated, amused, and educated generations of children, died last month at ninety-one. Carle’s work, and his seemingly effortless connection to young readers, was motivated by the privations of his own childhood. Raised in Nazi Germany, he was forced to dig trenches on the Siegfried line; his father, whom he adored, had become a prisoner of war in Russia. Carle’s later proclivity for vivid, exuberant colors was a reaction against the “grays, browns and dirty greens” of buildings camouflaged to protect against bombing. After the war, in America, he worked as a commercial artist, developing meticulous collages of tissue paper and acrylics that soon launched his career as an illustrator and children’s writer. His most famous book, The Very Hungry Caterpillar, came in 1969, and has sold more than 55 million copies worldwide. “I think it is a book of hope,” he said on its fiftieth anniversary, in 2019. “Children need hope. You—little insignificant caterpillar—can grow up into a beautiful butterfly and fly into the world with your talent.”

If you looked at Twitter after Carle’s death, you may not have seen that quotation. It was lost in the din surrounding another remark:

My publisher and I fought bitterly over the stomachache scene in The Very Hungry Caterpillar. The caterpillar, you’ll recall, feasts on cake, ice cream, salami, pie, cheese, sausage, and so on. After this banquet I intended for him to proceed immediately to his metamorphosis, but my publisher insisted that he suffer an episode of nausea first—that some punishment follow his supposed overeating. This disgusted me. It ran entirely contrary to the message of the book. The caterpillar is, after all, very hungry, as sometimes we all are. He has recognized an immense appetite within him and has indulged it, and the experience transforms him, betters him. Including the punitive stomachache ruined the effect. It compromised the book.

This story was drawn from Carle’s interview with The Paris Review for Young Readers, and tens of thousands of people shared it in praise and remembrance. “What a good man,” one wrote. Another posted, “Eric Carle said fuck the system eat cake and be unapologetically hungry.” A third was inspired to go big for lunch: “a chicken Parm and a whole ass order of garlic knots.” Nigella Lawson retweeted the story, Smithsonian Magazine included it in their obituary, and the parenting site Motherly noted that it had “a profound impact … Eric Carle recognized the harm in implying shame should be something a living creature feels simply for eating food they need to eat in order to grow.” On WQED, during a live broadcast, the radio host asked Carle’s son, Rolf, for more details about the stomachache quarrel. “That’s one of the stories I haven’t heard,” Rolf said, “and when you get an answer, please get it to me.”

He hadn’t heard the story because it never happened. Debunkers, including Snopes, soon pointed out that The Paris Review for Young Readers had originated in 2015 as an April Fools’ Day joke. There had been only a single issue of the “magazine,” which included a rewrite of American Psycho focused on haute couture lunchboxes, a word hunt that featured terms like chiaroscuro and post hoc ergo propter hoc, and a photo of the editor reading to an avid crowd of children, a cigarette hanging from the corner of his mouth. Though few had fallen for the Carle interview at first sight, the passage about the stomachache dispute had been republished in a 2019 book, Fierce Bad Rabbits, and its appearance in print, out of context, gave it a legitimacy that was hard to shake. Clare Pollard, the book’s author, had cleared the citation with the Review and a prominent literary agency. But institutional memory lapses quickly, and neither party knew to inform her that it was a hoax.

The Review issued an apology and attached a disclaimer to the article. Meanwhile, reactions to the ruse were divided. Some, wedded to the story’s message, would only reluctantly concede that it was fabricated. “It clearly resonated with many for a reason, though we do regret the error,” Motherly wrote in a retraction. Others were so delighted by the quotation that they chose to go on believing it anyway: “This is the reality I will be moving forward with, thanks!” But still more felt sorely deceived. The Very Hungry Caterpillar was wrapped up in intimate memories of reading to their children, or being read to, and those memories had been disturbed. Because, after Carle’s death, this fiction was crowding out the facts of his remarkable life, it risked tainting his legacy and should be expunged. An indignant reader felt that “whenever misinformation like this goes viral”—a phrase that may call for retirement, after a global pandemic—“the people who are MOST key to spreading it … are often so extremely reluctant to admit and correct it!” With these criticisms came others: the interview was too believable to pass as parody; it was fatphobic, and churlish in its implication that children’s literature is unworthy of deep discussion. “Satire needs a clear target and clarity of purpose,” Literary Hub warned. “If the point is unclear, your joke might be misconstrued as reality.”

I followed the story with great interest, because I’m the author of The Paris Review for Young Readers, including the notorious Carle interview. I was surprised to learn that a paragraph I wrote six years ago has, in all likelihood, found more readers than anything I’ve published under my own name. As the chaos unfolded, I experienced a combination of pride and dread—what I imagine it’s like to spend a counterfeit bill so old that you’ve forgotten it’s fake. Six years is not such a long time, but the world that bought into my Carle interview is in some ways unrecognizable from the one in which I wrote it: before alternative facts, before widespread concerns about information literacy, before 15 percent of Americans believed in adrenochrome-guzzling satanist pedophiles. A hoax is designed to be misconstrued as reality—a fact that seems to have eluded some people—and though mine has succeeded beyond my wildest Obama-era fantasies, it stirred up fragments of the past, broken links, and undigested, polarizing half-truths. It has, in short, given me a stomachache.

 

The cover of the first and only issue of The Paris Review for Young Readers, our 2015 April Fools’ Day prank.

 

I began working as the online editor of The Paris Review in 2014. By then, April Fools’ was already gauche: mainly an occasion for tech companies to launder their reputations through elaborate put-ons. People ran for cover as, every April 1, the internet was carpet-bombed with ill-begotten corporate pranks. But I liked it. The most effective hoaxes lived in the space between joking and not joking, and this could be a playful, thoughtful space, where people negotiated their beliefs and desires in sublimated ways. It’s fascinating to seek out the limits of credulousness.

Plus, the Review had a precedent to uphold. George Plimpton, who edited the magazine for fifty years, and whose plaster bust looked on our every affair, had once pulled off a truly unassailable April Fools’ hoax. He and I had this in common: we enjoyed not only fooling but being fooled, a pleasure that, as I was about to learn the hard way, was far from universal. In 1981, Plimpton took the bait when the Daily Mail reported that a Japanese runner had been spotted jogging across the English countryside long after the London marathon had ended; because of a translation error, the story alleged, the runner believed its duration was twenty-six days, not twenty-six miles. Overjoyed by his own gullibility, Plimpton decided to try his hand at hoaxing. For Sports Illustrated, in 1985, he wrote a story about Sidd Finch, a reclusive yoga-guru pitcher with a 168-mile-an-hour fastball. The Mets had discovered the orphaned Sidd (short for Siddhartha) and wanted him to play baseball, if they could talk him out of becoming a monk or learning the French horn. Many of the major news networks fell for it, and soon beleaguered reporters were combing the locker rooms in search of the errant Finch, who never did turn up, though some claimed to have met him. Plimpton had lavished attention on the prank, flying to Florida to memorize the floor plans of the Mets’ spring training camp.

It was incumbent on the new generation to further Plimpton’s work, even without a travel budget. With the support of my colleagues, I began an illiberal campaign against truth. The results paled beside an Adonis like Sidd Finch, and they did little to raise the Review’s esteem. First came the Easter issue, from April 2014, with a cover by Thomas Kinkade, the Painter of Light™. It contained a “portfolio” of selfies by Salman Rushdie, borrowed with affection from his Instagram. A prominent literary agency, the same aforementioned, promptly requested (demanded) its removal. I’d also written a fake interview with Cormac McCarthy, mostly about barbecue. This has occasionally passed as real—rumor has it that Michiko Kakutani once tweeted it—but later, in New Mexico, when I met the actual McCarthy and told him what I’d done, he dismissed my labor of love with a single sentence. “I would sooner stick my finger in a light-bulb socket,” he said, “than be interviewed by The Partisan Review.” Such was his adamant uninterest that he didn’t even name the right magazine.

In 2015, I resolved to do better. Children are precious; they were an obvious target. Children’s literature, at its worst, bottles and ferments that preciousness with adult insecurity, which is exactly what I hoped to do. The magazine had recently published an interview with the psychotherapist and essayist Adam Phillips, parts of which I’d committed to memory, I liked them so much. “One of the things that is interesting about children is how much appetite they have,” Phillips said. “How much appetite they have—but also how conflicted they can be about their appetites”:

Children are incredibly picky about their food. They can go through periods where they will only have an orange peeled in a certain way. Or milk in a certain cup … There’s something very frightening about one’s appetite. So that one is trying to contain a voraciousness in a very specific, limiting, narrowed way. It’s as though, were the child not to have the milk in that cup, it would be a catastrophe. And the child is right. It would be a catastrophe, because that specific way, that habit, contains what is felt to be a very fearful appetite. An appetite is fearful because it connects you with the world in very unpredictable ways.

These insights came back to me whenever I had a bowl of cereal—so, multiple times a day. As a preschooler, in the back of the family Honda, I’d once fallen into a tantrum and demanded a box of Lucky Charms. I was so relentless that my parents, usually strict, had given in, stopping at a supermarket to produce that manna, frosted toasted oats with marshmallows. And then I had eaten hardly any. This became an embarrassing chapter in the family lore. I’d attributed my breakdown, apart from my being a little shit, to the power of advertising: Lucky Charms were “magically delicious,” a slogan that generated a want that I confused for a need. Here was an alternate theory, an apoplexy of containment; all food was magic, all hunger dangerous.

I thought I could bring some of this into a spoof of The Very Hungry Caterpillar, another story of unbridled appetite. This is the gist of the book: the caterpillar eats. In the end he turns into a butterfly, which is nice, but the main attraction is his boundless craving. “But he was still hungry”: a refrain familiar to all who’ve lingered in the light of the refrigerator. Whatever Eric Carle’s feelings about psychoanalysis, the man was a student of appetites. Feeling clever, I dressed him up in the language of that student—ludic, jouissance, and other favorites of the ivory tower—and had him argue vociferously for the merits of overeating. I thought this was a hilarious stance for a writer who’d risen to success on a wave of salami and cherry pie, and it got at something unique about Caterpillar, which flirts with the insatiable in a curious way. My imaginary Carle venerated children past the point of reason. He favored the abolition of adulthood. He mainlined Christmas music and spouted off like a drunken Lacanian. Yes, I thought, this is my masterpiece, so plausibly implausible. I cracked myself up imagining a magazine like Highlights for Children shot through with the pretensions of The Paris Review, collecting dust in some pediatrician’s waiting room until it caught the eye of a status-conscious parent. “Look, honey, Timmy can read the pull-out section on the objective correlative”—that sort of thing. When we launched the “magazine” on April 1, some were amused, but few were fooled. Their loss, I thought. Pearls before swine. As Phillips had said, “We are children for a very long time.”

It takes a certain blundering confidence to perpetrate a hoax. You have to believe that you control the narrative—that you will remain in control. Even Plimpton, who never lacked for bravado, approached the task with trepidation. Jonathan Dee, his assistant in the Sidd Finch days, recalled that Plimpton was “a wreck” then: “I still remember my naïve astonishment at the sight of a world-famous, successful writer actually agonizing over whether something he’d written was good enough, funny enough, believable enough, or whether the whole thing would wind up making him seem like a national jackass.” As one presently regarded by more than a few people as a national jackass, I feel his pain, though it says a lot that, in 2015, I felt none of it; I couldn’t wait to pull one over on the unsuspecting masses.

I had to wait six years, it turns out—for a time when I seldom feel in control of anything. This is a nonsensical thing to say about a hoax, but I think I could celebrate it without compunction if I’d come by it more honestly. It succeeded not by my sparkling wit, but by lying dormant through a half-decade of rampant confusion and public deceit, reemerging only when people grieved its subject, a cherished writer. Not exactly the reception a con artist dreams of. Be that as it may, the interview still makes me smile, and I hope I’ve dissuaded those who insist on reading it, or any parody, as mockery.

As for the stomachache: go ahead and elide that part of The Very Hungry Caterpillar, if you want. It may be that a vaxxed and waxed America, looking toward summer, is eager for a message of permissiveness, especially from a trusted voice of childhood. If that’s how you feel, you’re in luck. There’s a quote from Carle—a real one, I promise—that could encourage your indulgence. “Right after the Wall fell, I was signing books in the former East Germany and was invited by a group of young librarians to have lunch with them,” he told the Guardian in 2009. “One said the caterpillar is capitalist, he eats into every food one little bit and then the food rots away. Wasteful capitalist.” Draw the logical conclusion: a healthy appetite is soundly anticapitalist, and the caterpillar was, if anything, not hungry enough. Eat, then, what the profligate rich have left behind. Eat for the good of the worker, the good of the world. Eat.

 

Dan Piepenbring is an advisory editor of The Paris Review. He is the editor of The Beautiful Ones, Prince’s posthumous memoir, and the coauthor, with Tom O’Neill, of Chaos: Charles Manson, the CIA, and the Secret History of the Sixties.

istoner: delightful

Starship bloopers


(I need to blog more often, so here's one of hopefully a series of shorter, more frequent, opinions ...)

Anent the "Heinlein was a fascist" accusations that are a hardy perennial on the internet, especially in discussions of Starship Troopers (the book, not the movie, which I have not seen because it's a movie): I'd like to offer a nuanced opinion.

In the 1930s, Heinlein was a soft socialist—he was considered sufficiently left wing and "unreliable" that he was not recalled for active duty in the US Navy during the second world war. After he married Virginia Gerstenfeld, his third and last wife, his views gradually shifted to the right—however he tended towards the libertarian right rather than the religious/paleoconservative right. (These distinctions do not mean in 2021 what they might have meant in 1971; today's libertarian/neo-nazi nexus has mostly emerged in the 21st century, and Heinlein was a vehement opponent of Nazism.) So the surface picture is your stereotype of a socially liberal centrist/soft leftist who moved to the right as he grew older.

But to muddy the waters, Heinlein was always happy to pick up a bonkers ideological shibboleth and run with it in his fiction. He was sufficiently flexible to write from the first person viewpoint of unreliable/misguided narrators, to juxtapose their beliefs against a background that highlighted their weaknesses, and even to end the story with the narrator—but not the reader—unaware of this.

In Starship Troopers Heinlein was again playing unreliable narrator games. On the surface, ST appears to be a war novel loosely based on WW2 ("bugs" are Nazis; "skinnies" are either Italian or Japanese Axis forces), but each element of the subtext relates to the ideological awakening of his protagonist, everyman Johnny Rico (note: not many white American SF writers would have picked a Filipino hero for a novel in the 1950s). And the moral impetus is a discussion of how to exist in a universe populated by existential threats with which peaceful coexistence is impossible. The political framework Heinlein dreamed up for his human population—voting rights as a quid pro quo for military (or civilian public) service—isn't that far from the early Roman Republic, although in Rico's eyes it's presented as something new, a post-war settlement. Heinlein, as opposed to his protagonist, is demonstrating it as a solution to how to run a polity in a state of total war without losing democratic accountability. (Even his presentation of corporal and capital punishment is consistent with the early Roman Republic as a model.) The totalizing nature of the war in ST isn't at odds with the Roman interpretation: Carthago delenda est, anyone?

It seems to me that using the Roman Republic as a model is exactly the sort of cheat that Heinlein would employ. But then Starship Troopers became the type specimen for an entire subgenre of SF, namely Military-SF. It's not that MilSF wasn't written prior to Starship Troopers: merely that ST was compellingly written by the standards of SF circa 1959. And it was published against the creeping onset of the US involvement in the Vietnam War, and the early days of the New Wave in SF, so it was wildly influential beyond its author's expectations.

The annoying right wing Heinlein Mil-SF stans that came along in later decades—mostly from the 1970s onwards—embraced Starship Troopers as an idealized fascist utopia with the permanent war of All against All that is fundamental to fascist thought. In doing so they missed the point completely. It's no accident that fascist movements from Mussolini onwards appropriated Roman iconography (such as the Fasces): insecure imperialists often claim legitimacy by claiming they're restoring an imagined golden age of empire. Indeed, this was the common design language of the British Empire's architecture, and just about every other European imperialist program of the past millennium. By picking the Roman Republic as a model for a beleaguered polity, Heinlein plugged into the underlying mythos of western imperialism. But by doing so he inadvertently obscured the moral lesson he was trying to deliver.

... And then Verhoeven came along and produced a movie that riffs off the wank fantasies of the Mil-SF stans and their barely-concealed fascist misinterpretation: famously, he claimed to have never read the book. I pass no judgement on whether Starship Troopers the movie is good or bad: as I said, I haven't seen it. But movies have a cultural reach far greater than any book can hope to achieve, so that's the image of Starship Troopers that became indelibly embedded in the zeitgeist.

PS: I just want to leave you wondering: what would Starship Troopers have looked like if it had been directed by Fritz Lang, with Leni Riefenstahl in charge of the cameras?

PPS: I don't agree with Heinlein's moral framework, although I think I can see what he was getting at.

istoner
7 days ago
I love reading this kind of thing from Stross.

I'm not convinced ST can be so easily rehabilitated. As I recall (and I might be wrong; it's been decades), my main discomfort with the book was its worshipful portrayal of war, not its quick sketch of homeworld politics. Am I misremembering that the clear message of the text is that war makes men of boys and provides a source of meaning in life that is unavailable to people who haven't killed enemies on the battlefield?

I remember liking the satirical aspects of the Verhoeven film because I thought he was trying to make the case that it is only a tiny step from "military service is the best way to build good men" to full-blown fascism. (If it's true that he never read the book, then this is probably too charitable a read of the film.)
duerig
6 days ago
Given that ST is a 'coming of age' story that happens in the military, I think that 'war making men of boys' is going to come through if you read it in isolation. But a large number of Heinlein books are 'coming of age' stories, and each one is set in a different context (farming on Ganymede, survival training on an unknown planet, pilot on a civilian starship, fighting Nazis on the moon, etc.). So I don't think the message should be understood as 'this is THE glorified true path of manhood' but rather as 'here is another example in another universe'.
zwol
6 days ago
It's been a very long time since I read the book, but my memory says that what Heinlein was *going for* was the more defensible assertion that no one should be empowered to *start* a war who has not experienced the front line firsthand, because you can only truly know the cost that way. The trouble is that this is a subtle thread in a novel whose A-narrative is about Johnny Rico finding a source of meaning in life through military service, so your reading is much easier to take away from the text.
duerig
6 days ago
Thinking about it a bit more, the threads zwol mentions (veteran voting as a form of restraint, the franchise being awarded for service to the community) are further muted because the main character's path is over-justified. The MC joins the military for childish reasons, and then many of the narrative beats show his choice being not just right but absurdly so. The enemies aren't just bad people; they are single-minded conquerors who cannot be negotiated with. The MC's branch of the military has all kinds of fun gadgets and 'badass' weapons which they can use with impunity. The MC's hometown is later destroyed by the enemy, which makes joining up both a path to safety and a personal emotional reason for fighting. There are long stretches of the book where the character sits in class while the teacher philosophically justifies both the society and the choice of being in the military. Eventually the MC's father, who initially disapproved, is reconciled to him and even joins the military too. A lot of MCs in Heinlein's books end up hyper-justified like this, but in this particular setting it reads more like propaganda, and the subtler threads about restraint, and about Johnny Rico not being yet another white male protagonist, are easy to dismiss.
zwol
6 days ago
Agreed. Other authors are much better at this kind of subtlety, and it can still go right past some readers -- Zelazny's _Lord of Light_ comes to mind, where I've seen people describe it as an accurate portrayal of Hinduism even though it was crystal clear to _me_ (even at age 17) that it was meant to be a story about superhumans who _identify themselves_ as Hindu gods to justify their rule over the ordinary populace.
WorldMaker
3 days ago
I went into my Heinlein reading phase after Vonnegut, so Heinlein's unreliable narrators (as cstross points out here) always stood out to me, and I read much of Heinlein's work as, if not outright satire, then intentional "clickbait" (as we'd say today) designed to force a response. Starship Troopers is *so* didactic (and, as cstross points out, so deeply Roman in its didacticism, especially in its use of the Socratic method, though I didn't have that vocabulary at the age I read it), and so much of it is Johnny Rico's point of view, which is absolutely clueless and hard to take seriously at any point in the novel. At least that was my reading, given where I was coming from.

Starship Troopers is, in that way, Plato's "The Republic" with an even dumber main character. Just as "The Republic" was never a real blueprint but an extended exercise in "is this a good way to go about this?", Starship Troopers was useful in the sense of "I don't agree with the didactic things this book is telling me, but it is fascinating to think about, especially in trying to figure out why I don't agree." Especially in questioning "is this a slippery slope to fascism?" (Again, as cstross points out: it mostly sort of worked for the Romans, but given what we know of the Romans today, and what we've tried to do with civilization since, maybe the Romans are further down the fascist slippery slope than we'd like to be.)

Which brings us back to Verhoeven: he stated that he never *finished* the book, not that he never read it, and I think his conclusion is exactly one the book supports (in the way "The Republic" asks hard questions through worked examples). As someone else said, what Verhoeven did acts as an immediate shortcut to one interpretation of the book, one that I naturally shared, given my peculiar priming to read Heinlein as SF's greatest clickbait shitposter. It's also a great shortcut in that it is very Verhoeven, and it makes the film the only sequel to Robocop worth watching. (The two of them make an extremely powerful and depressing double feature.)

Also, in general I fear we've passed a Poe's Law singularity where satire is dead, because too many people take satire at face value without engaging its depths: both the extremely subtle kind, like Heinlein writing a Plato's "Republic" about military service, and the broad kind, like Verhoeven satirizing fascism, with Lord of Light as another example of people getting the wrong answer to what the thing was talking about.
WorldMaker
3 days ago
Also, Starship Troopers was one of the first places I can recall encountering a long didactic discussion of what constitutes a Just War, and most of the book goes out of its way to establish that everything to do with the fight against the bugs is a Just War. To me that's the clearest sign of all that Heinlein was shitposting. Heinlein's favorite curse word across his books was "tanj" ("There Ain't No Justice"). If there's a more obvious sign that Heinlein was telling his character one thing ("this is a Just War; Just Wars are possible") while believing something entirely different ("there ain't no justice; there is no such thing as a Just War"), I don't know what it would look like. The book is a worshipful portrayal of war by an idiot main character that is covertly doing nothing but trying to explain why war is awful and unjust.
mkalus
4 days ago
I always thought the movie was a shortcut to the book. It takes some of the book's themes, but really only concentrates on the final chapter(s).

I can see why many wannabe fascists are drawn to it, though, both the book and the movie: in a lot of ways it portrays a world they think they want to live in.