How to engage with arguments rationally, respectfully, and with humility

Michael Marshall, The Skeptic

Daniel Dennett’s rules for engaging with arguments are a good start, but here’s what I’ve learned from a decade of interviewing people with fringe beliefs

In the weeks since the death of philosopher Daniel Dennett, tributes have circulated online, highlighting his proposed rules for engaging with an argument and criticising a position – often known as Rapoport’s Rules, after the game theorist Anatol Rapoport, who outlined the initial versions, which were then finessed by Dennett. Personally, I hadn’t seen these rules before, which is interesting, because they are essentially guiding principles I had iterated my way into over the course of my skeptical life, as well as being excellent foundations for compassionate skepticism. For those who haven’t seen them, they are:

1: You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”

2: You should list any points of agreement (especially if they are not matters of general or widespread agreement).

3: You should mention anything you have learned from your target.

4: Only then are you permitted to say so much as a word of rebuttal or criticism.

In essence, we have a commitment to avoiding strawman arguments, to the point where you have to really understand someone before you carry on; pointing out where the person might be right – so, avoidance of demonising your opponent, or reducing them to a topic of disagreement; interrogating yourself to see what you have learned, if anything – and therefore having to both listen, and reflect on your own positions, rather than merely waiting for the chance to drop a gotcha question; and then active engagement.

These rules, in themselves, are great; even better, they take time and effort – both valuable investments in the art of disagreeing productively. How many of the most heated topics of debate and online discourse would be improved if we removed the ability to throw out quick, low-effort and low-investment opinions, arguments and insults?

Given that these rules of engagement fit so neatly into the ethos of compassionate skepticism, I thought it might be interesting to see what else I might add to the list, based on fifteen years of having difficult conversations with people we disagree with.

Be consistent in the standard of evidence you demand

If you demand a standard of the person you’re talking to, you have to be able to meet that standard too. This is something that comes up a lot. I saw it recently at a local flat earth meetup, where claims made by NASA were subjected to all manner of intense nitpicking, but the evidence brought forth to prove the world was flat was… the book of Genesis (which, naturally, received almost no scrutiny).

Elsewhere, we could look to parliament, to see the conspiracist MP Andrew Bridgen, who exclusively refers to the Covid vaccine as the “experimental mRNA vaccine”. Why does he call it experimental? Because there are no placebo-controlled trials showing the long-term efficacy and safety profile of the vaccine. The fact that hundreds of millions of doses have been administered with relatively few cases of serious side effects doesn’t matter – it will be labelled experimental until an unreasonably high bar has been met.

Antivaxxers even label as experimental those Covid vaccines that aren’t mRNA-based, even where they’re built on existing and well-established technology. Again, in the absence of long-term safety data showing years or decades of safe use of this existing technology in this specific area, they’ll continue to call it experimental and untested. That’s the evidence bar they’ve set – long-term RCTs on safety. Obviously, what they don’t take into account are the costs of not vaccinating people, because inaction isn’t neutral when you have a virus like Covid causing hundreds of thousands of deaths. You can wait for the long-term studies showing safety a decade down the line, but to do so means allowing ten years of harm to mount up while sitting on a solution that by any other metric has been demonstrated to work safely.

But then, when it comes to ‘proving’ that the vaccine is dangerous, what’s the evidence bar that Andrew Bridgen and his fellow travellers put forth? It’s anecdotes, and collections of stories of athletes who “died suddenly” – including a Brazilian footballer who died suddenly on a football pitch… except that, when you find the original news report, he wasn’t actually a footballer; he just happened to be walking across a football pitch when he collapsed.

Here, then, is the evidence imbalance: when we have to prove the intervention is safe, it has to be the highest quality long-term study possible; when they want to illustrate that it is dangerous, anecdotes suffice. This is not real criticism, and it’s something skeptics, equally, should guard against when we make our arguments.

Subject your arguments to the same scrutiny you’d apply to others

In 2018, I attended the flat earth conference in Birmingham, to see why hundreds of people were gathering to share proofs of the flat earth. One speaker, Iru Landucci, shared video footage of astronauts floating around on the ISS, who Landucci claimed were simply held up on wires. As the three astronauts rolled forward in slow front flips, Landucci pointed to his proof of the wires: as one of the astronauts spun a little out of control, her colleague reached to steady her by tugging on the wire on her back. Landucci even magnified the area of the footage to show the exact moment of the tug, before explaining that obviously the wire had been photoshopped out, but the movement of the hand was clear.

How, then, could we be sure Landucci was right? He explained that the actions of the third astronaut gave the game away: he took a baseball out of his pocket, and floated it to distract us from the wire correction, hovering the ball in front of his face and performing tricks with it. As Landucci explained, it costs a large amount of energy to send any object of weight up into space, so why would the astronaut have something as pointlessly weighty as a baseball in his pocket? The only explanation is that he happened to have the ball with him on the day they filmed this wire sequence at the studio on a greenscreen, and so opportunistically used it as a distraction.

This point was sufficient to prove to Landucci that the video – and by extension, the space programme and the shape of the earth – was faked… but, of course, if he was right, what he didn’t bother to explain was how the third astronaut managed to levitate a baseball – especially a baseball that had been in his pocket, and had been used unexpectedly, without prior warning, as a spur-of-the-moment distraction.

The pixel-by-pixel scrutiny applied to the footage produced by NASA was completely absent for Landucci’s alternative explanation; things we agree with don’t get examined anywhere near as much as the things we disagree with, unless we make a conscious effort to be as critical of our own ideas as we are of other people’s.

Seek to understand the reasons and motivations behind a position

This may appear to be an extension of Dennett’s first rule, but it’s subtly different: you could easily explain an idea, in a way that your interlocutor would agree with, while still having no real understanding of why someone might hold that belief. But the motivations people have, and the reasons they came to hold their beliefs, are just as important and interesting a part of the conversation as their logic and factual basis.

I recall, many years ago, I interviewed an anti-abortion activist for my podcast Be Reasonable. This was someone who ran picketing protests outside of abortion clinics in several different cities, and who even went as far as to hand out baby clothes to women who were attending appointments for procedures. As someone who pickets abortion clinics in the name of God, it would have been easy to assume his motivations were his deeply-held religious faith, that he was an ardent bible-basher who would quote chapter and verse on the religious arguments against abortion.

Except, when I asked him what the New Testament had to say about abortion, rather than reeling off references which can be easily found online, he began searching for scraps of paper on his desk, and around his room. He told me that his brother had written down some Bible quotes that would help explain what the faith position on abortion was, and he was trying to find them, so he could cite them for me. After much rustling of paper, he left the room, and began searching other rooms of his house to find the Biblical justification for beliefs that were so core and motivational to him. In the end, the pause and the dead air were so excruciating to listen to that I cut it from the show, to spare him and my listeners from the awkwardness.

Given that familiarity with the Bible clearly wasn’t a driving motivation for his actions, I pivoted away from the Gospel, and began asking about what else might be behind his views. It became clear that he believed in the sinfulness of sex, that pregnancy was a punishment for sin that shouldn’t be easily avoided, and that women often use pregnancy as a way to trap men in financially-binding relationships. In my opinion, from the conversation we went on to have, his distorted views on sexual politics played more of a role in his reasons for picketing abortion clinics than anything scriptural.

The lesson I took away from this conversation, and from many others since, was to remember that just because we know someone’s stance on an issue, we shouldn’t presume to know their route into it, and we certainly shouldn’t fill that gap with whatever we’ve decided best fits our view of the world, especially if that relies on the least charitable version of someone’s intentions. To do so is disingenuous, uncurious, and antithetical to critical thinking.

Don’t treat as academic something which for someone else is existential

All ideas exist to be questioned – that’s absolutely true, and should always be the case. There can be no sacred cows. However, if you’re coming at a topic as an external party, while that topic is fundamental to someone’s life, there is an inherent imbalance that cannot be ignored or waved away. For example, if I were to have a conversation with my sister about the value of the state providing support and a social safety net for disabled people, I shouldn’t be surprised when my sister finds the discussion to be incredibly offensive and distasteful – because I might think I’m simply kicking around interesting policy questions about the structure of society and the limits of government, but she will rightly see it as reducing her wellbeing and her life as a disabled person to a mere pros and cons list. The fact that such a conversation might provoke more of an emotional reaction in her than in me does not make her less of a critical thinker; equally, I am not the more intelligent, galaxy-brained, hyper-objective critical thinker in that situation, just because it’s not my rights that are on the table.

Some people express this in terms of being a ‘high decoupler’ or a ‘low decoupler’, the argument being that some people find it difficult to think in the abstract, and aren’t easily able to switch into considering hypotheticals. Those people would be labelled as low decouplers, who need to be given more caveats and explanations before they are able to perform the intellectual separation between the question and real-world consequences; whereas other people are high decouplers, who can switch into the abstract easily and engage in thought experiments relatively unencumbered.

While I do believe there’s merit to that idea in some situations, it is easy to see how it could be a concept that gets widely overplayed, and how being a high decoupler could even be inappropriately treated as a virtue. For example, it is absolutely academically possible to raise the question as to whether the state should pay for the over-65s to be vaccinated against Covid. One could go further still, and ask whether hypothetically, if the goal is to maximise the productivity of society, we should provide medical care of any kind to people over the age of 65 – after all, they are done having children, and they’ve likely finished or have almost finished working, so they have in front of them, at best, decades in which they are a drain on society’s resources.

That is an argument you could hypothetically make, in the abstract; however, if you make that argument in a room full of people over the age of 65, you shouldn’t be surprised if they don’t engage with it as dispassionately as you do. You’re debating whether society should leave those people to die, and that is a proposition they are likely to have an emotional response to – an emotional response that says nothing at all about how intellectually capable they are. In that case as in many others like it, it isn’t that you’re a hyper-objective, galaxy-brained, high decoupler, it’s that you’re engaging in a conversation in which you have no skin in the game, without being emotionally aware enough to recognise that fact and take it into account.

Consider your reasons for engaging

This might seem like an odd point to make, because after all, we’re all interested in skepticism because we are trying to make the world a better and more truthful place. But being honest with yourself as to your true motivations for having the conversation will help you engage more responsibly.

For example, ask yourself: are you engaging because you want to be proven right? Or because you want the person you’re engaging with to admit that you’re right and they are wrong? Or are you performing to an audience, who you want to view you as smarter, superior, and better? And if so, how much value does such engagement actually have? In my opinion, it’s about as valuable as spending time correcting the spelling mistakes of Christians on Twitter in order to prove intellectual superiority – a waste of time for all involved, and contributing nothing of worth.

Or perhaps you’ve decided to jump into the argument because the attention economy requires everyone to express an opinion on everything all the time, and social media runs on spicy hot takes? If that’s the case, it’s worth considering: what value is your input in particular offering? Why you? Because nobody needs to hear what Diet Coke has to say about the degree to which black lives matter, and there’s no-one waiting on tenterhooks to find out the Birmingham Beekeeper’s Association’s stance on self-governance for the Basque Country.

Or perhaps you genuinely want to help change someone’s mind, or to help them examine whether their reasoning and their conclusions are solid or faulty – if so, and if your interest in doing so is genuine, being conscious of that goal will guide the way you engage, the pushbacks and counterarguments you choose, and the way that you decide to frame them.

Avoid mockery, point-scoring and dunking

This one is especially hard, because we all know how satisfying it can feel to have caught someone in an error, or to be able to demonstrate part of their argument to be flawed or ridiculous, and to slap them down to the rapturous applause of your fans and followers. It’s an impulse I struggle with, especially in the warm glow of a Twitter spat, but it’s important that you don’t confuse that kind of showboating for genuine criticism, especially if you’re not also engaging properly with the ideas in all of the ways we’ve talked about so far. Being able to ridicule someone’s argument isn’t the same as refuting it, and belittling someone is often antithetical to the kind of engagement that might prove to be productive.

Avoid reverse ‘cherry-picking’

We are all familiar with the notion of cherry-picking – where we seek out the evidence that best fits our argument, even if that means overlooking vast swathes of research that doesn’t fit the picture we’d like to paint. The reverse of cherry-picking is sometimes (less than ideally) known as ‘nut-picking’, in which you look for the absolute worst illustration of an idea you disagree with, and then pretend that it is representative of the majority of people who hold that belief. It is easy to find examples of homeopaths making wildly misleading claims, but if not every homeopath believes they can cure separation anxiety with a remedy made from the Berlin Wall, you haven’t actually debunked homeopathy, you’ve merely debunked that particular homeopath.

The thing we have to remember is that the internet is big, and filled with all manner of people – some genuine, and some who say extreme things merely for attention and shock value. If you go looking, you’ll find someone who claims to believe whatever wild idea you care to look for. And just as Rule 34 of the internet states that if you can think of it, there’s already porn of it, I’m hereby asserting Rule 34b: if you can think of a ridiculous belief, you can also find someone on the internet who already believes it. But that person is merely one anecdote, and skeptics know that we don’t base arguments on anecdotes. We can’t counter an idea by pointing to the most extreme version of it possible.

Remember the person you’re talking to, or about, is also human

This should actually be the most obvious piece of advice, given all debate, conversation and discussion involves at least a pair of human beings, but the humanity of those we disagree with is so often the first casualty of any dispute, with civility and compassion following closely behind. It is easy to flatten people – especially in online discussions and disagreements – down to a two-dimensional viewpoint, or a collection of narrow viewpoints, but people are all people; they have lives, hopes, dreams, fears, ambitions, allergies and fingernails.

The people you disagree with most fervently are real human beings, who are almost certainly not trying to be the villain – keep that in mind when you talk to them. This isn’t as simple or as trite as to above all #BeKind, but instead to understand that it is actually possible to be compassionate and understanding while still being skeptical. And conversely, the quickest way to lose the moral high ground is to lower yourself into the gutter.  

Practise humility, acknowledge your own limitations, and examine your own biases

Essentially, practise epistemic humility. For example, part of being a skeptic might involve reading and analysing scientific studies – and I know that, despite being a skeptical activist for fifteen years and a professional skeptical investigator for over a decade, I am not the right person to do that expertly. I am not skilled at working out the statistical flaws in a paper, or the methodological issues in a study, and I’m not great at understanding how papers are rated and weighted in systematic reviews and meta-analyses. Rather than assume otherwise, and risk making errors, I turn to other people when I need to do that, so they can check my work and my thinking, because I’m very aware that my inexperience means I’m more likely to make an error than to spot something everyone else has missed.

This is doubly true when I come across something I agree with: if I’m not experienced in an area, and I see an analysis that agrees with what I personally want to be true, it’s especially hard for me to stop to evaluate whether it’s accurate, or whether the person who put it together has missed something, or whether they had sufficient experience and expertise to really understand it. It’s crucial to take the time to pause and find other ways to check it before I repeat what might end up being misinformation that happens to align with my biases.

Our own biases also obscure from us the flaws in our own arguments, even where we might be considered experienced or expert, if we aren’t cautious about giving them the scrutiny we apply elsewhere. It is precisely that kind of arrogant thinking that leads to Nobel Disease, where experienced scientists get blinkered to the notion that their thinking can be flawed or that their expertise is narrow. Always keeping in mind that we can be wrong, that we might have made an error and that our reasoning could be flawed, will help protect us from falling into a hole dug by our own biases.

So, those are my tips – somewhat less concise than Dennett’s original four, but a useful set of tools all the same.

But the number one thing to bear in mind is this: even if you try your best, you are going to fail at each of these principles, almost all of the time, some of them more than others, but you will fall short of these standards. We all will. But that’s OK, because being a skeptic isn’t about being a hyper-objective rationalist, who is unflinching and infallible in their ability to apply logic and critical thinking. Being a skeptic, in my opinion, is the application of these kinds of principles in the knowledge that we are going to apply them imperfectly; it’s the act of trying despite the knowledge that you’ll fall short sometimes. And it’s about being honest about those occasions of falling short, rather than continually writing yourself as the hero in your own story, regardless of how cartoonish and two-dimensional you have to paint the people you disagree with in order to maintain that delusion.
