mtrc
@mtrc

This is a post I've been trying to write for a while - like, years - and I've finally gotten it down. I want to stress that I'm not a sociologist, or a historian, this is not an academic treatise or anything like that. It's just a bunch of memories and thoughts, and I don't have a complete picture of all the political and social changes of the last few decades. (Update: thank you for the lovely responses! I will reply to every one, it just might take a little while.)

A few days ago someone sent me a clip of Elon Musk talking to Joe Rogan. In a wild act of self-hatred, I decided to play the clip. Here's a transcript of what he says:

Musk: If you start thinking that humans are bad, then the natural conclusion is that humans should die out. Now, I'm heading to an international AI safety conference later tonight, leaving in about three hours, and I'm gonna meet with the British Prime Minister and a number of other people. So you have to say, like, how could AI go wrong? Well, if the AI gets programmed by the extinctionists it will... its utility function will be the extinction of humanity.
Rogan: -pause- Well yeah... clearly.
Musk: They won't even think it's bad, like that guy. It's messed up.
Rogan: There's a lot of decisions that AI would make that would be similar to eugenics.

This is a blog post about the TV show QI, how the belligerent arrogance of a few people set an example for a whole generation, and why loving science is not enough.


Delusions

In 2005 I was studying for my A-Levels, and despite all advice to the contrary I'd kept on with two non-science subjects: English and Philosophy. As part of our Philosophy studies, our teachers took us to a special student-focused conference where speakers covered the topics we were studying. The headliners were Richard Dawkins and Peter Vardy; the former a scientist and staunch critic of all things religious, the latter a philosopher and theologian. They were framed like boxers coming for a title match, each one defending a huge corner of human culture.

I don't remember a lot from the conference, but one thing stands out very strongly in my memory: Richard Dawkins spent the entire thing being mean, belligerent and incoherent. At one point he was on a panel with some religious leaders, asking them patronising or flawed questions and trying to poke holes in their logic. A lot of the students thought it was great, but despite being both an atheist and someone who wanted to go and study a science subject at university, listening to him talk just made me feel sad. What I now realise, looking back, is that Dawkins' imminent pop culture explosion (The God Delusion was published a year later, in 2006) was part of a cultural shift in the role of science and facts, and in who we listen to.

In the UK, a couple of years prior to this, a new TV show had appeared called QI, or Quite Interesting. The premise of the show was that panelists would be asked difficult questions, often with trick answers that were common misconceptions. Giving a wrong answer wouldn't hurt you at all, but giving a wrong answer that was also a commonly-held misconception would cause sirens to blare, lights to flash, and a massive loss of points. Like any TV show with a gimmick, QI leaned further into this the more popular it got. Questions got more and more loaded, trick answers became more tenuous, and in some cases famous trick questions were revisited in later series to trick people further, by explaining that the original answer had since been disproved or that a certain special case meant it no longer applied.

I liked QI at the time and I still watch it now when I stumble across it. There are some funny people on it sometimes, and it's especially fun when someone intentionally subverts the show's premise. If you like QI too, maybe more than me, please do not be offended by what I'm about to say: QI encouraged hundreds of thousands of people to become gigantic arseholes. The thing with QI is that on paper it was about the joy of learning new things and sharing them with others, but what a lot of people seemed to take away from it was that one-upmanship and showing off how smart you are is good and admirable. For a lot of people their favourite character on the show wasn't the wise host or the silly panelists, but the Klaxon that blared whenever someone said something wrong. You probably know someone who has modelled their personality around becoming a human QI Klaxon. Learning and knowledge, instead of being a big nuanced thing that everyone contributes to and takes from, get broken down into tiny isolated facts, sharpened to a point and used to poke people in the eye.

Now, I'm not saying the show intended this, and I'm definitely not saying QI was a singular point that changed the world forever. It's just emblematic of a certain kind of argumentation that I saw a lot of at the time, and I saw the same thing on the stage with Dawkins at that conference as a student too. It's the idea that being smart is simply about knowing stuff - the more stuff you know, the smarter you are. If the stuff is complicated and hard to understand, then you're even smarter still. And simply being smart isn't enough - you have to demonstrate it, and the best way to demonstrate it is to deploy it against someone else, whether that's an opponent on a gameshow, a Muslim at a philosophy conference, or a person you disagree with on the Internet.

QI breaks knowledge down in this way because, y'know, it's a comedy show in a 30 minute timeslot, it doesn't really have a choice. But stuff like this was emerging at a time when the Internet was also fragmenting into tiny bite-sized things where there was no space or time for nuance or elaboration either.

IFL It Here

I was invited to an event last year where science journalists come to train and practice their skills - I was volunteering to be a practice interviewee so they could get some experience interacting with real scientists[1]. Afterwards, many of the organisers and speakers were chatting over drinks, and someone introduced themselves as working for "IFLS". I did the classic thing you do when you're socially awkward and talking to serious people, which is to smile and nod and pretend I knew what that was. After about ten minutes I realised I actually did know where she was from - she wrote for something I used to know under a different name: I Fucking Love Science.

IFLS (I'm only calling it that for brevity, much as it pains me) is a good example of science-as-aesthetic that emerged over the last ten to fifteen years on internet social media sites such as Facebook. Much like QI, IFLS and its ilk break down scientific ideas into something small enough to be digestible in the medium they use, which was originally a Facebook page but now includes YouTube, Instagram and more, as IFLS has become a veritable brand name. A positive spin to put on this is that it is making science accessible for the wider world, adding some clickbait tricks to help people engage with serious ideas - and IFLS has expanded a bit into a more straightforward science journalism outlet these days, which explains why they also felt the need to drop the Fuck. A less favourable spin would be that by reducing science down to the same quips, clips and soundbites as everything else on the Internet, you're actually doing the opposite of helping people engage with science. Like QI, you're reducing the idea of knowledge and learning down to little facts you can memorise, share and repeat.

The fetishisation of science in this way has a lot of other problematic side effects, too. One is that it narrows our idea of what being smart or doing science actually is, both in terms of what activities constitute science and also where its boundaries are. Scientists have always looked down on humanities researchers to some degree, but that tension intensified through the 80s, 90s and 00s as the push for 'STEM' subjects[2] became stronger and stronger. Studies in school (at least in the UK) are compartmentalised - your physics lessons are about very specific things and do not overlap into other subjects. So by pushing STEM as an important thing, we also push into people's minds a specific idea of what it means to do science: it means these things that we have drawn hard red lines around in the school curriculum. If you do work that doesn't look like pure science, you are called 'interdisciplinary', or you describe yourself as doing work 'at the intersection' of multiple things.

And I feel I should point out here that I actually, do, also, fucking love science. I remember several things from my years as an undergrad that made me feel so excited and happy to have learned, I remember really buzzing with a visceral kind of response to grasping certain new ideas. There are lots of fascinating things out there hidden away in subjects we think of as dry. But I also don't really think about or care about where knowledge comes from. The students I work with have backgrounds in design, writing, art, archaeology, game development and more, and many of them tick several of these boxes. We pursue the questions we think are valuable and interesting, and sometimes they look like what IFLS would call 'science' and sometimes they don't. But that's really hard to do, and all of these systems - including the systems scientists themselves design for evaluation and recognition - are constantly pushing back, encouraging us to reclassify what we do, to fit inside narrower boxes, to align with boundaries that suit other purposes. But as much as I love science sometimes, it's also pretty boring a lot of the time. I don't even mean that as a negative; a lot of nice things in life are boring or mundane or involve repetitive, thankless work.

Another problem is that, as Dawkins showed in 2005, a lot of people think that stating facts allows you to win arguments and solve problems, as if there is a secret hidden QI scoring team just behind you in real life, silently grading all the posts you're making in that Twitter thread. Around 2005-2010 I would say I mostly saw this used the way Dawkins used it - to attack people who were perceived as being 'outside' of or 'against' science, usually people who could be looked down on. If you used Reddit or Facebook during this period, particularly a little later when Facebook was opened up to non-students, you might have seen people making fun of religious people or posts, or replying to them with snappy one-liners. I used to judge people who said or shared this stuff very harshly - over time I've tried to adjust and have some compassion for people who might have, for instance, grown up in very repressive religious spaces and carry some trauma from that. But in general it was people cosplaying as rational, enlightened people so they could feel and act superior towards others, and there's a direct throughline from that to the open Islamophobia we often see today, for example.

I think during this time probably a lot of people either didn't think this was a problem or perhaps sincerely believed it was good. After all, these people were wrong about something on the internet and they needed to be told why. But as the century has worn on, that desire to equate smarts with facts, and arguments with one-liners, and logic with correctness, and religion or spirituality with stupidity, has contributed to a new unfortunate trend of absolute, total noise in online debates.

I Will Never Log Off

There are a lot of differences between the science education we get in schools and the experience of science in everyday research labs and universities, but I think one of the most important is the difference in how knowledge is treated. Good researchers view knowledge and ideas as essentially temporary, unreliable and partial. For the most part facts are just things we currently believe; they're useful to build on, but many of them are open to being disproven, elaborated on, broken, expanded or changed over time. In school, though, we mostly deal in immutable facts, which is why we get examined on our ability to memorise and understand them. So we get this idea that science means facts, that smart people deal in facts, and that an argument is a series of facts leading to an irrefutable conclusion. This is what people mean when they talk about 'logic' and 'being logical'. Fact A implies B, B implies C, A is true, therefore C is true. If you disagree with this, you are being illogical and irrational.

One place you see this manifest is any prominent public issue that involves a currently developing scientific consensus - the easiest recent example, of course, would be the Covid-19 pandemic. During the early stage of the pandemic people would often find and post scientific papers that supported whatever view they had - and because literally tens of thousands of people were writing papers about Covid-19 from every angle, it was not hard to find something to support your view. So if you wanted to find a study that suggested, for example, that Covid-19 actually wasn't very dangerous at all, then you could. A scientist said this and published it in a paper - it's facts. Then some well-meaning person will think, well, we know Covid-19 is actually dangerous, so there must be flaws in this paper, and they'll do a nice Twitter thread pointing them out, and people will RT it because it also sounds like Facts and it explains why the Other Facts were the wrong kind of Facts.

In reality, scientific consensus - especially about something as new and rapidly-changing as Covid-19 - is changing all the time. Everyone - including the authors - knows that studies do not tell us the whole picture, and we have a range of mathematical and scientific processes to try and assess claims and data, and build better pictures over time. Any evidence-based or experimental paper that has ever been published on any topic will contain weaknesses if you look hard enough, and if you wanted to you could write a Twitter thread picking it apart. That's just how scientific publishing works. The rightness or wrongness of a single article cannot tell us anything meaningful about something so new and complex. This is because scientific papers are not, actually, facts - they are reports: explanations of work done, the conclusions those people drew about that work, and their personal interpretation of where that might point. We can think of them as facts within their own space, if you like - the data you collect in a study is real and accurate. If you ask 100 people on the street what their favourite chocolate bar is, they might all say, I dunno, Aero Mint. That's not false, that really happened. Morally wrong, maybe. But the study is just a small fraction, a sideways glimpse at the real phenomenon or problem you are trying to investigate.

All the stuff we've spoken about so far - bitesize viral science, QI's gotcha facts, Dawkins' blunt arrogance, cherry-picking the science that supports what you want to say - takes advantage of this model of thinking. However, at some point, this way of thinking became particularly vulnerable to exploitation[3]. If you believe that facts can't be argued with ("facts don't care about your feelings" is a popular refrain from a certain kind of person who can be found on every part of the political spectrum) and you can convince people that you are a smart person who knows a lot of facts, then you can argue for pretty much anything. And while a racist or sexist or any other kind of bigot or extremist might be more easily dismissed when framed in emotional terms, factual claims feel different to people. Suddenly this isn't racism or sexism, it's just common sense, and after all, you can't deny the logic of their argument.

How do you achieve that? What does it mean to perform smartness? I often think of the phrase "a stupid person's idea of a smart person", a description that's been applied to several of the people mentioned in this post so far, including QI's Stephen Fry, Joe Rogan and Elon Musk. I think it's a bit of an unfair phrase - it's not really a stupid person's idea, but rather society's idea as a whole. And there's no more fascinating an example of what society thinks a smart person is than Elon Musk.

Issuing A Correction On A Previous Post Of Mine

Ever since Musk first opened his mouth about AI, I have been fascinated watching how people's opinions of him shifted. I have always maintained he is a colossal idiot, but a lot of people - including highly respected scientists, engineers and policymakers - have believed him to be anything from an AI genius through to an almost messianic saviour figure. I used to think this was because Musk was good at the grift of appearing smart, but I don't think that's even true any more; he's demonstrated his complete lack of sense and consistency time and time again and it has barely shifted some people's view of him. Musk's specific form of business bullshit - confidence, buzzwords, a sense of superiority - just fits too perfectly with how we think smart people speak and act.

So here's the transcript from the start of this post again. I just want to go over a few things, as someone with a PhD in artificial intelligence and over a decade of experience researching in the area Musk is talking about, and break down exactly what he is saying.

Musk: If you start thinking that humans are bad, then the natural conclusion is that humans should die out.

You don't need to know anything about AI to know that this doesn't make any sense at all. I think Elon Musk is bad, but I'm happy to let him keep living[4]. I think Facebook is bad but I understand it has its uses. I think cavities are bad, but I know I don't need to pull out all my teeth to avoid them. Even in the case of optimisation - which Musk is kind of getting at here, the idea that AI would just keep pushing something to its most extreme solution - we have countless examples of both natural and human-engineered systems that do not optimise for extremes. This is pure political bullshit. But even by saying things like 'natural conclusion' here, Musk is trying to make you think these are the words of a wise philosopher. The natural conclusion - this is logical. You can't argue with it.

Now, I'm heading to an international AI safety conference later tonight, leaving in about three hours, and I'm gonna meet with the British Prime Minister and a number of other people.

OK, this isn't factually inaccurate, but I just want to duck in here to point out that Sky News' Sam Coates described this meeting as "one of the maddest events I have ever covered", and that Musk's invitation to the AI Summit is a prime example of how reputations compound and reinforce themselves to promote and maintain the status of the worst human beings.

So you have to say, like, how could AI go wrong? Well, if the AI gets programmed by the extinctionists

Musk is supposedly an experienced public speaker and an expert on AI, so while this might seem like a minor criticism, we do not say an AI system "gets programmed". You do not "program" the kind of AI Musk is talking about. You make decisions about how they're structured, and you make decisions about what data to train them with, and what goals to seek out. It's clumsy and inaccurate, and he's really only phrasing it this way to create a causal link between the beliefs of these bad people (the 'extinctionists'[5]) and the AI.
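
As an aside, here's roughly what "not programming" looks like, as a toy Python sketch (my own illustration, with made-up numbers, nothing to do with any real system): notice that nowhere below do we write the behaviour itself. We choose a structure, some data and a goal, and optimisation finds the behaviour.

```python
# Toy "training" loop: we never program the behaviour line-by-line.
# We choose a structure (a line: w*x + b), some data, and a goal
# (minimise squared error), then let optimisation find the behaviour.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # made-up (input, target) pairs
w, b = 0.0, 0.0

for _ in range(1000):
    # Gradient of the mean squared error with respect to w and b.
    grad_w = sum(2 * ((w * x + b) - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * ((w * x + b) - y) for x, y in data) / len(data)
    w -= 0.01 * grad_w  # take a small step towards the goal
    b -= 0.01 * grad_b

print(round(w, 2), round(b, 2))  # a line that fits the data emerges
```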

it will... its utility function will be the extinction of humanity.

This is the thing I really wanted to dwell on. Musk knows that "utility function" is a specific, technical term from within AI. He knows that saying it will make him sound smart. But he clearly does not know what it is, or if he does, he's being incredibly obtuse here. A utility function is a way for an AI algorithm to judge how well a solution fits a particular problem. Let's say I want to know how to encourage myself to work better in the mornings. My utility function might be how many words I write between 9am and midday. The actual specific things I do to achieve that - drinking coffee, playing music, avoiding email - are the solutions I'm trying. The utility function measures how good those solutions are.
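
To put that in concrete terms, here's a tiny Python sketch of the same morning-writing example (my own toy code, with invented numbers, not anything from a real AI system). The point is that the utility function only scores outcomes - the solutions are separate things that get measured against it.

```python
# A utility function just scores how well a candidate solution worked.
# Here the score is simply the number of words written by midday.
def utility(words_written_by_midday: int) -> int:
    return words_written_by_midday

# Candidate solutions: things I might try in the morning, with the
# (entirely made-up) word counts I observed after trying each one.
solutions = {
    "drink coffee": 800,
    "play music": 650,
    "avoid email": 1200,
}

# An optimiser's job is to pick the solution the utility function rates best.
best = max(solutions, key=lambda name: utility(solutions[name]))
print(best)  # -> avoid email
```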

Making humanity extinct would not make sense as a utility function. The utility function - in this bizarre case - would be better expressed as something like "reduce carbon in the atmosphere". In such a scenario, Musk is suggesting that the best solution the AI could find would be to kill everyone. The only reason this sounds plausible is because you saw it in a movie once.

Why does this distinction matter? Well, for two reasons. First, let's see how noted philosopher of our time Joe Rogan responds to this:

Rogan: -pause- Well yeah... clearly.

He doesn't know what to say because he doesn't know what a utility function is. Musk knows this - that's why he used the term. He knows he will not get pressed on this point, because people are afraid to ask questions about things they themselves do not understand. When you're doing science communication, you either need to avoid certain technical terms, or you need to find ways to explain them. Musk is doing neither here, specifically because he knows it helps set him up as a Smart Guy.

The second reason this distinction matters is that in this bizarre hypothetical situation that everyone is obsessed with, we never discuss whether, for example, the utility function could be adjusted to include other things. Could we have the utility function be, say, "reduce carbon in the atmosphere without killing anyone"[6]? (There's a small sketch of this below.) This would undermine Musk's argument, and so having people understand what he is actually saying is dangerous. Instead, it's more important that they engage with what he's saying on the surface level, just like all the other science-as-aesthetic stuff we've discussed so far. What does his argument sound like? How does it make me feel? People do the emotional calculus, and then conclude it's scientific after they've decided whether they agree or not.
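
And just to show how mundane an adjustment that would be, here's the earlier toy Python sketch again with a second consideration added to the utility function (made-up numbers again, and obviously nothing like how a real system would be specified):

```python
# The same kind of toy utility function, now scoring two things at once.
# Plans that harm anyone get the worst possible score - a hard constraint.
def utility(carbon_reduced: float, people_harmed: int) -> float:
    if people_harmed > 0:
        return float("-inf")
    return carbon_reduced

# Candidate plans as (carbon reduced, people harmed) - invented numbers.
plans = {
    "plant forests": (0.3, 0),
    "kill everyone": (1.0, 8_000_000_000),
}

best = max(plans, key=lambda name: utility(*plans[name]))
print(best)  # -> plant forests
```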

There are a million ways you can build a piece of software, and a million ways you can break it. We have built AI systems that evolve, change, and limit themselves in countless ways, and shaped each and every one around the goals we want it to have. Similarly, there are many, many plans for the future that allow humanity to keep living on a planet that they take better care of. So Musk's claim here is ideology wrapped up in science: he wants to argue that both environmentalism and AI are dangerous, and is trying to get you to agree that this belief is logical by using fancy words and trains of thought plucked right out of science fiction. Once someone gets into a position of authority that derives from a belief that they are smart, it's remarkably hard to get them out of it, especially if everyone else believes that they themselves are not smart (as many of us are trained to).

How To Be Smart

What are the ramifications of this? One important one is that we have elevated an entire class of people to positions of immense power and influence because they perform smartness rather than actually demonstrating competence at doing anything. In almost every case you'll be able to think of, this has had disastrous results. Worse still, we've created a culture in which the effort required to break down the claims, lies and bad behaviour of these people is exponentially greater than the effort required to do the bad thing in the first place. You can't break down an entire way of thinking with the same virality that created it - the only way would be a long, slow and painful culture shift, and I don't know how that happens or if it's even feasible. Attempts to combat this virality with virality of its own lead to the culture of dunk quote chains, out-of-context mockery and one-up gotchas that ultimately do nothing to actually shift our understanding of the problem[7].

I think there's also a deeply complex entanglement between this exact way of performing smartness and the movements of capital and the investor class over the last decade. The rise of AI, as well as the smaller trends that swam alongside it like VR and web3, have been guided by people who sold the right dream with the perfect balance of big words and total bullshit - they knew exactly how to appear smart to rebuff questions about whether they could actually do anything they were claiming. I've heard and seen this happen even locally, in cafes as people yell on video calls or pitch to people in suits. I have completely given up any hope of 'debunking' any claim made by an AI company or influencer in any meaningful way now - it is an inferno that has to burn itself out. If we're lucky, perhaps we can save a few important institutions from the blaze while we wait for that to happen.

But I think most depressingly, the main consequence of this is that all the wrong people learned the wrong lesson from it. When I talk to people about science or AI, they will volunteer their stupidity without being asked. They will tell me they know nothing, are not as smart as me, could never understand computers, or are much worse at this than other people. Even people who fight for the relevance of the humanities in the face of STEM will tell me, to my face, that they are not smart enough to understand what I do, or that I must be a genius or super-smart person to work in AI[8]. People whose voices are currently badly needed, as we navigate dangerous futures powered by technology, believe that they are the last people who should be speaking. I'm not conspiracy-minded enough to tell you that's by design; I just think it's a very unfortunate side effect that has accelerated the position we're in right now. The wrong kind of person got encouraged by the madness of the last decade, and the other kind of person got told to be quiet.

I don't think smart people exist. I think that some of us have developed extremes of knowledge or skill about certain things, and some of us develop that by going viral on TikTok doing yoyo tricks, and others develop it by getting incredibly good at writing compact Python code to solve tricky data problems, or by perfecting our partner's favourite meal just as they like it. Just as we create hierarchies of culture, where it's more socially acceptable to know a lot about fine art than comics, we create hierarchies of knowledge, skill and expertise too. If you know a lot about mathematics, you are smart. If you know a lot about media studies, you wasted taxpayer money. I know this first-hand because depending on what room I'm in, and depending on how I describe my work, I find myself in different categories all the time.

I truly hope that in the future we can change our cultural thinking about science, about intelligence and about education. I hope we can expose some of the people who have lied and blustered their way to being in charge, and rehabilitate the people who were told they were too stupid to have an opinion. And I hope we can begin to understand science as a messy, complicated and imprecise thing, that often doesn't look or feel much like 'science' at all. I think that one of the healthiest things we could do as a society right now in the west is make everyone feel like they are good enough to learn about and participate in debates about science, technology and the future; while also acknowledging the limits of what 'knowing' things or being 'smart' actually are.

Thanks to everyone who gave feedback on a draft of this post, and thank you to you for reading!


  1. No jokes please, we're all thinking it.

  2. Science, Technology, Engineering and Maths (or Medicine?) - an acronym basically referring to the subjects generally called 'the Sciences'. It became a watchword in education policy and other areas, and is now a very loaded term.

  3. I mean you can argue that it's always been a form of exploitation, you know what I mean. More exploitation.

  4. Well,

  5. Kor Bosch points out that the use of the word 'extinctionists' is also intentionally inflammatory here and totally pulled out of thin air. I didn't mention this directly because it's not directly linked to the technical details, but Kor is right that this is a good example of how people cover their bias and emotional appeals with a scientific persona. Calling people you don't like 'extinctionists' is about one level above Trump nicknaming in the coherency stakes.

  6. I'm hesitant to include this example because it implies that I think even the basis of the thought experiment is worth engaging with, which I don't. The science-fiction idea that an AI with the capacity to control globally-important systems would be given as vague an instruction as "improve the climate" is colossally stupid, and relies on so many naive and idiotic human mistakes that it really has nothing to do with AI at all, even if it were possible.

  7. If you follow me on Twitter you'll know I dunk on AI people a lot, I quote them, I make lazy jokes. I'm not going to get into that here, but to be clear, I don't consider any of that an attempt to convince anyone of anything or change people's minds. I do it because I'm losing my mind on a daily basis and if I don't find a way to laugh at it then I'll need to go live in the wilderness somewhere instead.

  8. Again, despite all evidence to the contrary that I regularly post online.



in reply to @mtrc's post:

Really good post! I love reading in-depth stuff like this.

I had a similar revelation while studying philosophy: there are no hard and fast answers to anything, ever. You can use rhetoric to tell any story you want. Giving hard evidence is just one of many tools that you can use to convince people that you're right. So much of the public political and academic sphere is based around reputation, appearance, rhetorical technique, framing, etc etc.

Believability is not a replacement for truth.

Yeah! In my day job people often ask me to define creativity, and I feel like they're often disappointed when I tell them I think it's something we collectively define. But that doesn't make it less real, and actually appreciating that there are very few hard and fast truths out there and working together to navigate that is part of life!

As a religious person, I think the stuff you described - euphoric redditors slam-dunking on everyone - feels like it's trending downwards these days. And even leftist spaces are never as hostile as I fear them to be.

Now please tell us about how BBT contributed to the Acting Smart problem!

P.S. from my limited understanding, wouldn't an AGI be grown rather than programmed, and thus be somewhat human-like, and unlikely to maximize paperclips into a Gray Goo scenario?

I'm really glad to hear the slam-dunking is on the downswing. I kind of had a similar sense. I think I'm right in saying that the generation younger than me seem to like chatting and sharing about spiritual stuff a bit more, I wonder if that's helping people be a bit more open-minded to faith-based beliefs that they don't share.

The AGI question is hard to answer - I don't really think AGI is a thing, no-one can really define it, they just dodge the question. But we can think of ChatGPT etc as 'grown' by learning from tasks. Even there, though, we do direct them - we give them rewards and tell them what to aim for.

Thanks for reading :)

My favorite person on QI will always be Alan Davies. The hapless man who is constantly set up to make himself look stupid but never stops trying anyway while we all laugh at the big moron who keeps coming back for more.

I've caught some episodes from later seasons and the funny thing is that through a combination of being the only person on every episode and his own personal work, he's now become a savant who occasionally just destroys all the other contestants, which is an amazing anime-ass character arc.

"I am wiser than this man, for neither of us appears to know anything great and good; but he fancies he knows something, although he knows nothing; whereas I, as I do not know anything, do not fancy I do. In this trifling particular, then, I appear to be wiser than he, because I do not fancy I know what I do not know." -- Socrates

The actual smartest people on earth, if such a thing even exists, are the people who have the most sense of doubt about their own knowledge, and this has been a disaster for humanity for nearly as long as there have been humans.

I did the tech conference circuit a lot for a while when I was still living in Europe, and I met some genuinely brilliant people who I'd be considered name dropping for bringing up ... and every one of them was among the most humble people in the room.

It was always the junior and mid-level devs in a conversation circle most presenting as if they knew a goddamn thing, and with no shame whatsoever to be doing it, even in front of "man who literally invented the programming language they're talking about".

ok i need to explore a thought here, i don't know if it's tangential or not to this post, but i still gotta get it out

for those who weren't immersed in the religious arguing that happened for a few years after the publication of the god delusion, tons and tons of theists (mostly, but not all, christian) would absolutely pull the kind of stuff this post complains about dawkins doing all the time. endless trite gotchas, people genuinely going "if humans evolved from monkeys why are there still monkeys". yes really, that wasn't a joke! it happened!

and in the realm of more formal debate, there were plenty of smarmy creationists happy to abuse the setting (a reason that a lot of people nowadays think debate is inherently flawed). famously, creationist duane gish ended up lending his name to the gish gallop, where you just ask as many pointless questions about different aspects of evolution as possible and demand your opponent explain them all. any decent evolutionary biologist would be able to give coherent and reasonable answers, with explanations of the current limits of our knowledge, to all gish's questions if given enough time, but the structure of debates meant that no matter how hard gish's opponent tried, they would never be able to answer every one of gish's questions sufficiently, allowing him to declare victory

and this kind of stuff still happens today! a couple years ago i got to see an atheist say something kinda dumb on twitter and then get pilloried by a bunch of equally-smug (and often way worse) theists, breaking out more shitty gotchas like "if god isn't real, then why do oranges have peels to protect their fruit?" complain about richard dawkins all you want (which you should, he's a huge dick) but ray comfort still casts a gigantic, hideous banana-shaped shadow across the land.

i'm left thinking about how easily we focus on the anger and bitterness and assholishness of one group and not the other. i'm thinking about who is and is not allowed to be mean, how we interpret going too far, etc. i've lived my entire life in the american midwest, and i'm used to midwestern nice. politeness glorified, weaponized, turned into a means of manipulation: they were so polite to you, why aren't you being nice to them by doing what they want? (i'm not even from the upper midwest, the stories i hear from minnesota frighten me hard)

and that means i have to ask, what are we missing when we talk about this kind of smug, arrogant 'smartness'? obviously, something that leads to the popularity of useless, cruel shitheads like dawkins, fry, and musk deserves critique. but where did this come from? what is it a reaction to? are we focusing on the parts of it that annoy us the most while inadvertently giving other parts, maybe even worse parts, a pass? and most importantly, how do we make sure we do better than this smugness instead of just reverting to another terrible mode of thinking about the world?

yes i know those questions are leading, but i don't know how to phrase them better right now.

i think for the common practitioner this kind of shitty gotcha-ism is more an exercise in power than anything else. the power to claim victory, to render one's chosen opponent objectively Wrong by the authority of a higher, inexorable power. [twitter screenshot of @widdr commenting on a person arguing about a PEMDAS engagement bait: "they feel like they're granted a deontological superiority by pure and immutable principles from a higher plane of being, so it's basically a religious fervor in side-taking over a pseudoscientific dispute. like this guy, who took a quick break from posting bluemaga shit for this"]

if that's true i sure don't know what to do about it because it's basically a human nature thing, cultivated in large part by the social environments those kinds of arguments stereotypically thrive in.

I don't know how to explain it, but the "if no God, then how come banana" creationists don't have the cultural cachet of Dawkins et al. Everybody [we care to align with culturally or who gets prime spots on TV] thinks those guys are a joke. Meanwhile, Dawkins got major science respect points, Bill Maher still has a show, etc.

As a Catholic Lithuanian, I know those guys off-hand, but I had to google Ray Comfort and never heard of gish gallop's namesake.

Now, you'll find some of the worst theology possible in the right-wing circles, but, like, what do you expect to find there?

Politeness, on the other hand, extends outside of midwest. It's long been a weapon wielded against the left because you're not being angry about life and death questions like M4A or accepting that trans people are people - you're IRRATIONAL and TRIGGERED!

Thanks for adding this! I think this is useful context that I only briefly alluded to in the piece. Terrible people exist in all spaces, and I agree that we often do center ourselves too much on one group and then ignore the problems caused by the other. For example, as terrible as 'techbros' are, I can tell you I've met quite a few 'artbros' who are just as bad on the other side of the fence, and as someone trying to do research in both spaces it basically leaves you feeling pretty unhappy! Haha.

I think for my part, I focus on these people because they are 'my' people to some extent. I work in academia, I'm told I am 'smart' a lot, I compete for airspace with people like Musk when talking about AI and trying to discuss problems and issues for the future. So I think my post centers on this from a "speak about what you know" perspective - I haven't been near religious communities for a long time, and I wasn't really religious myself even when I was near them, and they were pretty milquetoast English ones anyway. So I just wouldn't feel as confident talking about those issues. But you're right to raise these points I think, and nothing comes out of a vacuum entirely.

Thank you for writing this Mike, it very neatly encapsulates a lot of problems I often think about. Especially, having grown up doing some debating at school, how that kind of Objective Victory structure was very much ingrained in education, in particular at British public schools.

Thank you Lisa! And thanks for giving me feedback on it too ^_^ Debating at school, haha, god. I only did it like, twice, but it was a whole scene. The American version of it is genuinely terrifying.

I'm a big fan of science. :) But I've never cared for the whole emphasis on STEM, because it represents an effort toward economic hegemony via high technology products, and not any particular love of or respect for science on the part of the people who own and run our society. I also happen to think that the humanities and the arts are what make life worth living, and that it's a great mistake to devalue them.

As for Dawkins particularly, I always felt he's exactly the sort of person you wouldn't want on your rhetorical side, because he's just such a prick that he turns everyone off to your point of view... it's a little shocking to imagine people responding positively to his behavior. I suppose I don't really understand people.

Really enjoyed the article. :)

I think you're right, although thankfully a lot of the people who are drawn to teach STEM subjects have an infectious love for it that does often inspire people to appreciate the good bits, even if the system as a whole is geared towards the bad bits. I think often about how lucky I am to be paid with public money to make the world better for the public, and how sad it is that my job often conflicts with that idea and encourages me to do other things instead.

Thanks so much for reading and commenting :)

I don't mean to impugn the motives of the people who are actually doing the work, of course, the people who are actually in the thick of it. Given the sheer grind that can be involved in study and research, you really have to believe in what you're doing. But the boosterism by people who aren't invested in those fields (usually administrators or politicians, what's the difference), as a means to an end... I could do without that.

But yes, it's a very thought-provoking article. Thanks for composing and sharing it. :)

Spot on. One of my great personal realizations is that being intelligent doesn't imply being right. "Smart" people are often (usually?) just better at rationalizing than other people. It takes an incredible amount of humility and discipline to evaluate arguments in a truly logical way, and in general our society does not know how to do it. We really need philosophy to be the core of our education system.

Yes. Admitting failure or being wrong is so hard, and we incentivise everyone against it (including scientists, which is where you get a lot of big academic scandals from). We actually tried to work against this a bit in my field by hosting things like workshops explicitly for papers about 'failed' work.

great essay! perhaps this puts a finger on one of the things that has always bothered me. the "aesthetics of science" are so far removed from the realities of science. especially in the context of media.

the "super smart person (tony starks) are just going through the vaguely convincing motions of just knowing things and making things. and being good at everything along the way.

real science is messy, slow, incremental, and highly collaborative. this disconnect, i feel, gets in the way of communication, because it sets expectations too high.

it also bothers me that capitalists tend to exploit this aesthetic for the air of credibility it gives them with people who don't know better. (it's mostly this part haha)

Nothing makes me more antsy than when a 'smart' character appears in a movie. It's always so bad. It's even worse when what they're saying or doing doesn't make sense as being smart. Even the notion of having multiple PhDs is very very funny because it's supposed to make someone sound smart but in almost all circumstances would be the most confusing and weird flex ever.

wish folks understood science as something everyone is doing constantly, like looking at and puzzling over a bug on the street. if they knew the significance of their own behaviors then maybe they'd be less susceptible to the funny science mimics

as a former "logic guy", the worst thing you can do is convince yourself is that you are a rational actor. because you are a fleshy fallible person, it makes you believe that everything you say and do is "rational" and "logical", and that everyone who opposes you is "irrational“ and "illogical". it makes you a profoundly insufferable person that will alienate your friends and push you into the company of truly terrible people.

deeply interesting thoughts, thank you. i'm from the humanities field (now game design) and back when i was doing more academic stuff we faced a similar issue - which i noticed even more once i started studying game design and realized a lot of people talk about video games the same way Musk talks about anything.

back then, we called it the "people scan but don't read, people hear but don't listen" problem: it's extremely easy to fool someone into thinking you're an authority if the other person isn't willing to stop and actually think about what's being presented to them - or if they don't have the humility to ask them what they mean with the words they're using.

there's a media aspect to this as well. Rogan is pretty much slurry in podcast form, but the same thing happens whenever you turn on the news: a serious-looking journalist will keep nodding while someone (often, but not always, a politician) spouts out stuff that's either misleading, false, or impossible to verify or parse without more information. i've personally had the impression, especially when i worked in journalism-adjacent fields, that going "could you clarify this" both shows my lack of knowledge of a subject and is extremely unprofessional. it's an instinct that needs to be actively fought, and it's not easy, especially when it's tied to your employment

Yeah absolutely. The culture of appearing smart and challenging people to question you is really self-perpetuating too because, as people have noted, the people who really should be interjecting and questioning often feel the least able to, because of the power imbalances this performance intensifies. And then in large groups it's even harder - students often report feeling like they're the only one who doesn't understand something so they don't ask; in reality the entire room all think that, so no-one asks and no-one learns haha.

Great thoughtful read as usual. Also I cannot believe that this is how I found out that QI was an abbreviation.

I think some of it is just our education system teaching Science for exams that we must get right, as opposed to a process that doesn't really end (cuts off tangent of me pitching Science history as an A-Level). And things like STEM and later STEA(rts)M are still firmly in the camp of education as preparing you for the workplace - which I get, we need to pay for a roof and food - as if there couldn't be space for education for the sake of learning around like-minded folk being of value on its own.

Thanks Andrew ^_^ Haha I don't remember where I learned it, I think reading the back of one of their books because it's their company name or something.

I agree with you on the preparation for work stuff. It reminds me a bit of research funding too, because the best ideas often come out of blue sky research, but if you ask to do blue sky research it's like, no, do something that'll result in results, we can't let you do that.

BTW the "Q" in "QI" is very clever. The word "quite" in UK English is a lovely double-edged sword. It can mean either "very" or "not at all" depending on the context. In the context of the title of the show... well, it begs you to decide, doesn't it?

You know how before "computers" were boxes of electronics, we had "computers" who were people (usually women) who... computed? Well every time someone says "what if the AI decides" I replace "AI" with "The RAND Corporation" which was the AI we used in the 50s and 60s.

Hahaha, yes. So much stuff hidden behind language. I saw the "we're going to use AI to fix [completely random political issue]" headlines beginning to spark up again as we near an election.

I recently saw a 538 video about how AI would affect the US election. A fair bit of eye-rolling stuff, but they did finally get down to the point that sophisticated election campaigns have always used a huge number of human beings to plan their strategies, and the only change if they use AI is that it will have the same output, it just will cost them slightly less. What it DOES do is elevate those who can't afford to hire a room full of writers to produce lies. I go back and forth over whether this is a good thing.

Really enjoyed this post. Lots of good food for thought; I paused often, finding myself thinking about various personal experiences in education (once upon a time I studied a science at university!) and life outside of academia. The final bit is a hope I share: “I think that one of the healthiest things we could do as a society right now in the west is make everyone feel like they are good enough to learn about and participate in debates about science, technology and the future; while also acknowledging the limits of what 'knowing' things or being 'smart' actually are.”
Thank you :)

Thank you so much Halima! I'm so glad you enjoyed reading it. I've felt a bit powerless to really do anything about this lately, but wanting to make everyone feel like science belongs to them, and that they have a stake and a voice in it, used to be one of the big things I wanted to do as a scientist. It feels harder than ever today but I still really do believe in it. I hope we get a chance to meet in person again soon and chat!

Oh my goodness thank you for writing this. It looks like it was a tonne of work and represents a lot of distilled experience and thought.

As someone who got caught up and hurt by this aesthetic in the 00's (should have been an engineer not a scientist, but had already shackled my identity to getting a PhD) it rings so, so, so true to me. It's the reason I find Big Bang Theory so distasteful, why The Imitation Game left me so cold. I'd had the damn feeling of working on a tough, science-y problem in a group but the popular depictions felt so off.

For me, it eventually boiled down to: be kind, be inclusive, don't ever wield your knowledge as a cudgel or stand for someone putting themself down because the world tells them their knowledge is "softer". I found learning about topics like Linguistics to be a great balm for this: the first principle there seems to be "people who speak the language know it, and if it doesn't fit in your system then your system is broken, not the person who knows less about linguistics and more about the language; yes, even if the language is Klingon". This post gave me a much clearer understanding of why a more aggressive attitude is so wrong.

Thoughtful writing like this also helps keep away the ennui of trying to earn a living as an honest code monkey in an industry that seems to keep inventing dumber and dumber ways of building the torment nexus (often by throwing person-years of care and effort at problems that could be easily solved by "not surveilling users" or "using technology that isn't trendy").

So again, thanks <3.

Oh damn, a lot of stuff resonating here - there was originally a huge section on Big Bang Theory in this article, and I also hated The Imitation Game. The use of knowledge as a cudgel is such a big thing, I hate it so much, it really activates something in me when I see it happen haha.

I'm sorry your current job isn't making you so happy, but it sounds like you're a thoughtful person with a lot of great experiences, so I'm very glad you're one of the people in the mix of that, rather than people who wouldn't give it a second thought :)

Yes, thinking about the shifts in internet culture and technology in general... it's so fascinating, if I had another academic life I think I'd become a historian of technology and just analyse that all day long.

This is a great post Mike! Thank you for writing it! I've been thinking about this sort of thing a lot as I finish my PhD and talk more with people outside of academia. People get intimidated by the fact that I have 2 going on 3 grad degrees and worry about boring me or not being smart enough to talk to me.

Smartness being a quality people possess, according to our society, is so damaging to actually communicating knowledge. Like you say towards the end of your post, whether we in our field are the "smart people" or the know-nothings depends on who is the majority in the room. And all of that - I guess I can call it hierarchy and posturing - means actually talking about knowledge, science, research, etc. becomes impossible in a way that feels very new.

I love this post overall, and the amount of context about the changing of science communication is great, but I really want to thank you for the hours of fun as the idea of mismatched facts sent me on the path to entirely replicating the core of consensus reality theory with absolutely no foreknowledge that it was already published in 1966. Accidentally doing epistemology is a great use of an evening, and I think it's a great model for this phenomenon broadly as well as a ton of other bullshit going on right now.

Our conception of "success" is a phenomenon that exists in consensus reality, and influences how we judge others' worth; it influences how we interpret material reality into subjective experience. When someone has a lot of money, or a large following on social media, or "wins" in the "debate", we judge them as "successful" because consensus reality teaches us to do so. As this judgement of success re-enters consensus reality it collides with meritocracy, which claims that if someone is successful, they must be competent. Now, the consensus reality is that they are competent, despite not having displayed competency. Meanwhile, those who display competency are only judged competent by those competent enough to recognize them, rather than benefiting from the mass appeal of generic meritocratic "success", thus being less likely to be judged the ideal candidate.

Can't guarantee it's Correct, but I wouldn't have been able to articulate that in a thousand words this morning, so thanks for the inspiration to do a little science of my own!

This sums up so many of the problems I had in high school - it was all about "memorise this thing that'll be on the test" rather than actually learning anything.

My ability to consume literature was judged not on my taking anything away from the book, but on whether I agreed with the teacher's interpretation of it. We were told to watch out for "recurring themes" without actually being told what a "recurring theme" was, and were even asked if we'd spotted the recurring theme while the class was on chapter one and the theme in question had been mentioned for the first time, and therefore hadn't actually recurred yet. Oh, and actually going "screw it" and just reading the whole damn book over a weekend got us chastised if we brought anything up from it because "we haven't got to that chapter yet" - so we were supposed to somehow discuss foreshadowing without being allowed to bring up the events it foreshadowed.

I was terrible at maths in my very first years of primary school because it was all based on "memorise these times tables" and my apparent ability shot through the roof when we learned how to multiply arbitrary numbers instead of just memorising the answers. Likewise when I did high school physics, I had the equations of motion memorised for years before my maths class got onto the very first calc lesson and it suddenly clicked how they were all related to each other.

I suppose part of the issue is that your average layperson doesn't need to know the why for certain things and just needs to remember what it is, but the real problems come when people treat those "good enough for the layman" facts as the be all and end all of ultimate scientific knowledge that must not be questioned.

I remember despising Fahrenheit 451 because I had zero clue what I was looking for during my sophomore year summer assignment (which is bullshit in itself, but I digress). By comparison, when I read Homegoing (by Yaa Gyasi) during senior year, I was enthralled and read ahead of where I was supposed to be for my assignment. One thing that helped was the relative lack of pressure: this was one book in the middle of the semester, as opposed to the thing that would determine my grade at the start of the semester. Also having teacher input is nice. I suspect my re-readings of Fahrenheit will be marred by that traumatic memory, while Homegoing will forever have pleasant memories associated with it (reading it, not the subject matter - the book is about the trauma of slavery through the centuries, through the lens of one black family tree).

I need to give the Great Gatsby another go now that my reading of it isn't tainted by someone who's read the book a thousand times and is assigning marks based on the cliff's notes. I swear I've learned more about literature interpretation from twitter than I ever learned in that class.

To give you an idea of the kind of teacher I had: part of the Scottish Higher Grade English course was a "review of personal reading", which was a fancy way of saying a book review where you had to turn in an essay about a novel, and she had this whole list of things about identifying an "appropriate" novel to review that included such hot tips as "be suspicious of anything where the author's name is in larger text than the title", and outright said that certain genres like horror or fantasy don't count as literature.

You know the kind of person. The kind of literature snob who, when presented with literature that breaks their rules, will come up with some crap like "oh it transcends the genre" or some other high-handed way of saying that Discworld doesn't count as a fantasy novel because it's actually good, anything rather than admit that their rules about what counts or not are flawed.

One of the most frustrating things about this perception of "public intelligence" is that there are people who side with them because they look smart, but it takes far more intelligence to systematically dismantle these wannabe intellectuals' shitty arguments. It's a structure that favors weed-smoking, fast-talking con men over the actual intelligentsia of the internet. On an unrelated note, so many of them lean right. That's weird.

In response to the "facts don't care about your feelings" thing I've started saying "that may be true, but feelings don't care about your facts", as a way to say that your opinion is often dictated by your feelings on a certain subject rather than what we decided on as fact

After having had countless arguments in my life I've come to see that if you want to convince someone that your argument has merit, simply stating facts and citing studies is entirely ineffective. You need to argue with the emotions a person is feeling on said subject instead

Though acknowledging this and applying it in real life or online can be very difficult for me still

Excellent post and well written

It reminds me of recently giving the most barebones layman's explanation of ChatGPT to someone, which I had picked up from my reading and watching videos on the subject. Their response was to act like I must be an expert on it and I should go into studying it. Like what the fuck are you talking about, I don't know shit about AI, this was a bare surface level understanding of the subject. "It's a glorified Markov chain and shouldn't be trusted with anything important" is not Deep Knowledge. But since I had done ANY research, it sounded like expertise to them.

It was a very weird interaction and just. I wasn't even trying to give that impression, I was just ranting about how it isn't what people claim it is. If that's someone's reaction to irritated rambling, of course the general public believes someone who claims to know their shit, they have no basis to compare or question it.

Which is a problem when grifters are the ones with the charisma (...or whatever the fuck Musk has) to supplant actual experts. Which isn't to say experts can't be charismatic, but grifters are trying to make a career off it. The "I'd have a beer with him" effect.

Maybe if high school taught skills like how to vet sources and analyze information, instead of waiting until college, we would have less of this shit. Maybe it already is, since the younger generations seem pretty done with bullshit. The problem is with the majority of the population who would no longer benefit, anyway, but there might be some hope there for the future.

As someone born in 2000 who grew up as a gifted kid that developed a lot of anxiety and weird psychological dysfunction over my intelligence: half of this is stuff I've been saying for years now and the other half is stuff I really needed to hear, and I thank you very deeply for all of it. (Especially introducing me to the idea of a Utility Function! I can apply that to my life now!)