wait, if you can stabilize someone who's decompensating by beaming them into a transporter pattern buffer and then deleting the transport, why does anyone ever die on a starfleet ship
can't they just have the computer monitor everyone's vital signs and do this automatically
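(to be concrete about what "automatically" would even mean here: a minimal sketch of the dead-man's-switch loop, with every object and method name invented for illustration, because there is of course no real API for any of this)

```python
# Purely hypothetical sketch of "the computer just does this automatically."
# Every name below (biofilter, transporter, their methods) is invented for
# illustration; nothing here corresponds to a real system.
import time

def sickbay_dead_mans_switch(biofilter, transporter, poll_seconds=5):
    """Keep a rolling snapshot of everyone's pattern; if someone crashes,
    discard the dying pattern and rematerialize the last healthy one."""
    last_good = {}  # crew member -> most recent healthy pattern
    while True:
        for person in biofilter.crew_manifest():
            vitals = biofilter.read_vitals(person)
            if vitals.stable:
                # the "five seconds before they died" copy
                last_good[person] = transporter.scan_pattern(person)
            elif person in last_good:
                transporter.hold_in_buffer(person)            # stop the decline
                transporter.rematerialize(last_good[person])  # restore the snapshot
        time.sleep(poll_seconds)
```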
It seems to me that the entirety of Star Trek has an only-rarely-acknowledged, always-present subplot about a society which is right on the edge of Singularity and is consciously choosing not to step over it. They usually don't address this, but when they do address it, it's a huge deal— minor breaches of the unspoken Technological Lines We Don't Cross, like Soong's genetic manipulation (and everything downstream from that, like the Illyrians or [spoilers ds9] Julian Bashir) or Data's strong AI or time travel, become the basis of entire character arcs or movies. The fact that these lines are sometimes such a huge deal but then sometimes get breached in tiny unacknowledged ways (like, it's very difficult to explain why the computer on the Enterprise-D isn't a strong AI, or [spoilers Picard s3] why nobody in Federation society treated transporters apparently moving to lossy compression, roughly the most horrifying hypothetical-technology decision I can imagine, as even noteworthy until it got exploited at the end of Picard, or why Discovery gets away with a tenth of what they do) feels at first like poor writing, but considering how "consistently inconsistent" it is, it actually feels better to interpret this as a society which is very bad at thinking about technology.
The way Star Trek society deals with technology is contradictory, and it is contradictory in a way that mimics the way real society deals with technology— bright lines that are maintained although it would improve or even save lives if we were willing to cross them, but also people crossing bright lines freely and uncommented-on if they can find some mental trick of saying "oh but that's not really genetic modification" or whatever. Things that are technologically easy but structure-of-culture "impossible" to reify, such that Geordi La Forge or your one Maker friend is constantly cobbling together a little one-off hack that ought to change society forever if it were productized but never gets repeated outside that one instance. I don't think much of the writing in Star Trek, but this one element is so pervasive, and so actually human, that it makes me want to give them the benefit of the doubt. Why don't people use transporters and replicators to restore the dead from the moment five seconds before they died? Well, why does the United States of America clamp down on stem cell and embryonic research to a point of near-immobility, but think basically nothing of fertility clinics overproducing embryos and leaving them in freezers until they're discarded? For this to be an answer, it doesn't even have to be the case that "you've already crossed the line, go harder" is the correct response— it doesn't even have to be the case that we should be creating human embryos to experiment on. It just has to be the case that nobody wants to have a conversation about it.
Federation engineering school doesn't have to have given everyone an Ethics class where they teach that Human Lifelines Must Not Go Nonlinear and that's why we don't automatically drop people into a transporter buffer the moment before death and then transport them back out with the chest wound edited out. Nobody has to have made a specific decision that this is Not Okay (but editing a transporter pattern to take away someone's guns, that's okay). All that has to happen for this to be Real is for everyone to be afraid to think completely clearly. Everyone has to be used to certain suggestions, when you make them in an engineering context, getting abstract pushback, and everyone has to get used to self-censoring. Eventually Transporters Don't Do That because no one will allow themselves to have the idea to make transporters do that.
And this is where it gets a little weird: In Star Trek, they actually can't allow themselves to have the idea. There's a sort of very weak anthropic principle here where preventing themselves from Having The Idea, any of at least six Ideas, is the only thing that allows Star Trek to be a recognizable story at all. Consider a timeline where time travel, strong AI, "hard light" holograms, or transporters/replicators as medical technology were taken to the obvious places that TNG society or probably even TOS society could take them, and within fifty years there's just… nothing, probably, or rather nothing recognizable. Probably everyone is uploaded into servers, or society is erased by a Borg-like bad path, or even in the best case scenario we're in the Culture and the storylines aren't emotionally resonant to 21st century Paramount+ subscribers. And the people inside the Star Trek stories probably know this much. They can see what happens at step 7 so they're stopping at step 3.
Is this a metaphor? I'm not sure if this part is a metaphor. The transporter medtech thing seems to have IRL analogues, but this part doesn't so much.
Imagine that, say, every person on earth had end-to-end encrypted messaging (that's a good example of an IRL "but why don't we—" technology that could exist but, due to various soft pushbacks, continually doesn't). It wouldn't actually change very much (not even for law enforcement, since the Jan 6 prosecutions have made it very publicly visible that courts don't find it hard to get hold of one end of a Signal chat). There are lots of ways in which IRL tech could progress but doesn't, but we're not actually close to singularity, IRL. Maybe the writers feel like we are? It seems unlikely to me that there's some sort of writers-room mandate, or that the writers coordinated to decide that Federation citizens will have a messy relationship with high tech in a way that mirrors Americans' messy relationship with high tech. Rather, the writers are people living in a technological society, and they could have a sense that just a bit of restraint is all that's keeping society from plunging into unrecognizability (the back half of Picard s3 was morally repulsive, writing-wise).
More-recent Star Treks, working in several time periods simultaneously, have introduced a sort of cycle where society keeps developing strong AI, and then it causes a disaster, and then society intentionally pedals back AI tech, and then a generation passes and everybody forgets the last disaster and starts bringing strong AI back¹. (Discovery, when it was introduced, seemed to have a higher level of technology than Star Trek TOS despite taking place about a decade earlier; the series eventually makes this canonical by suggesting that at some point in the ten-year span the Federation intentionally reduced the amount of automation on its ships for infosec reasons.) It seems very easy to come up with real-world analogues for this process, but most of them require you to take little-c conservative positions on science (for example, you could imagine this cycle for nuclear power— but probably only if you think nuclear power is a bad idea). I have doubts the writers intentionally made this a cycle; I think that Picard just had phenomenally lazy writing, that the s1 writers cribbed from Discovery (and Mass fucking Effect), and that the s3 writers were (repeatedly) indifferent to whether they were repeating their own previous seasons thematically. But they did it, and it works as a plausible cycle, with big enough internal-timeline gaps between the repetitions that it feels like a mistake a single society could keep making. One candidate for a real-world tech this could be a metaphor for is social media: Our IRL society went long on social media in the first part of the last decade, and is now visibly pedaling back, between the social media corps tearing themselves apart and every government on earth passing hamstringing laws on online communication that probably would have been Unthinkable in 2010.
There is some sense in which social media did bring us to unrecognizability: not technological-singularity unrecognizability, but political unrecognizability, in the sense that the far right was so effective at leveraging social media that it almost brought about (and still might bring about) fascism on a scale so intense there might not be any going back. I personally think the solution here is to fix social media to be less exploitable by bad actors, not to scale back social media as a concept, but this does fit into the Star Trek analogy, where technological advancement leads us not just to discomfort but to a point-of-no-return Brink. Does it actually work to metaphorically equate fascism with technological singularity? I don't like that. It feels like the kind of thinking that gives us… well, Picard s3. Could there be some inchoate anxiety of this sort brewing in the minds of the various Star Trek writers' rooms? Maybe at this point I'm finally reading in too much.
Anyway, to conclude, the reason Starfleet transporters don't prevent all death on a ship by blinking anyone who's about to die into a pattern buffer is that the transporters are manufactured by Apple, the transporters can only run software that's been approved and signed with Apple's keys, and for liability reasons Apple's TOS bans apps that use the transporter for medical purposes. QED
¹ Discovery, Picard spoilers: Section 31 develops Control in Discovery and it almost eradicates all life; the Federation covers this up but puts strong limits on AI across the board; years pass; Data shows up and gets everyone used to synths; synths become common; the Mars shipyards disaster occurs; synths get banned across the board; years pass; Picard s1 happens and people get used to synths again; just a few years later, in Picard s3, the moment AI is thinkable tech again, Starfleet attempts to wire its entire fleet for remote control and there is a disaster. There's a Lower Decks storyline you can fit into this timeline too, and I seriously wonder if Picard ripped Lower Decks off.