This is Part 3 of a multi-part series on mass layoffs at The Trevor Project during union negotiations. Read Part 1 here, and then Part 2 here.
Content discussing suicidal LGBTQ+ youth follows. I tried to write with care, if sometimes with dark humor, to make this easier to read.
nonprofit doesn't mean no overhead; employees still must be paid, resources acquired. graphic design and bandwidth for hotlines, ai chatbots and business parking lots
“A couple of years ago when Amit started, he
wanted us to really think about two core pillars
to growth. We needed to drive down what we call our
costs-per-youth-served by 50%.”
and i mean lemme tell ya all these suicidal children are really driving up the price of this suicide hotline. gotta halve the cost of each suicidal child by the end of this fiscal year, or how else will we meet our quota?
“That means that we can help two times the number
of young people with the same amount of funding that
we have. And the second pillar is that we’ll never sacrifice
quality for scale. We’ll always maintain or improve the quality
that we provide to youth in crisis.”
- John Callery, Former Senior Vice President of Tech
for The Trevor Project, to Fast Company, 2022
you may call your sea a lake, but you're part of the ocean.
'We' Means 'You' When It Comes To Labor

1.8 million suicidal youth is not 1.8 million callers. these are children in danger that need outreach.
it should not be treated as a market.
but, i guess if we're trying to reach 'capacity' for just shy of 2 million youth, even as a nonprofit, how can Trevor most... efficiently achieve that?
most... affordably? most... cheaply, even?
1: Hire Advisors, Not Employees - So Say The Advisors

in 2019, to expand Trevor's volunteer capacity and, uh, "strengthen its technological capacities", accounting conglomerate PricewaterhouseCoopers International Limited used their charitable foundation to allot a $6 million grant to Trevor over the course of four years

the only catch, as every good charitable foundation has, is that PwC US LLP would do the consulting on just how that $6 million gets spent.

"pro bono", of course.
The goal was simple, but critical: recruit, onboard
and retain more volunteers faster to respond to
growing needs. And, end long delays between applying,
interviewing and training.
The Trevor Project was eager to transcend traditional
ways of operating to eliminate wasted time, so PwC
brought our BXT approach to help equip their volunteers
and support youth in need.
ah yeah, so we were taking too long screening the people who'll be on the phone with suicidal teenagers, which is why we need $6 million american dollars for... BXT?
the thing graphics cards have? what the fuck is BXT??


you're telling me this 'charitable foundation' donated $6 million dollars to collaborate with The Trevor Project, to- do what, exactly?
configure OneDrive for them??

these were the guys you needed for that, amit?! i mean, i agree keeping track of your volunteers is a good idea, but if you really listened to the "LGBTQouth" as you call them, they'd have already convinced you to switch to linux.
what qualifies these guys to take our mental health resource for gay children in crisis into the 21st century?

yeah, like turning chronic illness into opiate addiction! amirite, amit?
jesus christ.
act like a nonprofit if you're going to claim to be one. fuck all.
still, this is all business fluff. capitalist foreplay. as to how "embedding Calendly links" helps increase volunteer capacity toward the goal of, reminder, a ten-fold increase?

well. there's a new one.
2: Use AI To Decide Which Suicidal Child Needs Help First

Leveraging AI in suicide prevention has
gained traction over the years.
Trevor Project, thankfully, knew that a cry for help being answered by a chatbot could be even worse than a delay. but, instead of, you know, seeing this as a compelling reason to not incorporate AI, they found a compromise.
With Google’s help, The Trevor Project will be able
to assess suicide risk level of youth in crisis more quickly,
allowing counselors to better tailor their support and to provide
relevant resources and consistent quality of care.
and really, when dealing with suicidal children, compromising on technology is only as dangerous as you acknowledge it to be; just ask Trevor Project's AI expert, John Callery
“Sitting at the intersection of social
impact, bleeding-edge technology,
and ethics, we at Trevor recognize
the responsibility to address systemic
challenges to ensure the fair and
beneficial use of AI. We have a set of
principles that define our fundamental
value system for developing technology
within the communities that exist.
Right now, we have a lot of great data
that shows that our model is treating
people across these groups fairly, and we
have regular mechanisms for checking
that on a weekly basis to see if there are
any anomalies.”
our AI isn't racist, trust us! johnny-boy over here checked!
and aaand! he's got Great Data™ coming back every monday to doublecheck the AI didn't learn racism over the weekend.
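and look, the check itself isn't rocket science. here's a minimal sketch of what 'checking weekly for anomalies' could even mean in code; the group labels, the metric, and the tolerance are all mine, invented for illustration, because the actual 'regular mechanisms' have never been published:

```python
# a back-of-the-envelope version of "check the model weekly for anomalies":
# compute an error rate per demographic group and flag any group the model
# misses noticeably more often than average. everything here (records, group
# labels, tolerance) is invented for illustration, not Trevor's real pipeline.
from collections import defaultdict

def false_negative_rate(records):
    """records: (was_high_risk, model_flagged_high_risk) pairs."""
    missed = sum(1 for actual, predicted in records if actual and not predicted)
    positives = sum(1 for actual, _ in records if actual)
    return missed / positives if positives else 0.0

def weekly_fairness_check(interactions, tolerance=0.05):
    """interactions: list of (group, was_high_risk, model_flagged_high_risk)."""
    by_group = defaultdict(list)
    for group, actual, predicted in interactions:
        by_group[group].append((actual, predicted))

    overall = false_negative_rate([(a, p) for _, a, p in interactions])
    anomalies = {}
    for group, records in by_group.items():
        rate = false_negative_rate(records)
        if rate - overall > tolerance:  # this group gets missed more often than average
            anomalies[group] = rate
    return overall, anomalies

# toy week: the model misses trans youth at a higher rate than cis youth
week = [("cis", True, True)] * 90 + [("cis", True, False)] * 10 \
     + [("trans", True, True)] * 70 + [("trans", True, False)] * 30
print(weekly_fairness_check(week))  # -> (0.2, {'trans': 0.3})
```

that's the whole genre of check: per-group error rates compared against the overall rate, once a week.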
[Gaunt] offers pragmatic advice for those seeking
to reduce the bias in their data. “Define the problem
and goals up front. Doing so in advance will inform the
model’s training formula and can help your system stay
as objective as possible,” she said. “Without predetermined
problems and goals, your training formula could
unintentionally be optimized to produce irrelevant results.”
"tell the robot ahead of time not to be racist" shit good thinking, write that down!!
better yet, get google to do it for you!!
The Trevor Project applied for Google’s AI Impact
Challenge and was selected as one of 20 finalists
from 2,602 applications. Google granted The Trevor
Project $1.5 million and a team of Google Fellows to
help the organization problem-solve with AI.
“And from there, Google kind of flipped up the heat
on how to set goals, how to approach responsible AI,
[and] how to productionize AI,” Callery said.
just be sure to remind them not to be racist! put it on a sticky note next to 'don't be evil!'.
speaking of- you can spend hours, months, entire fiscal years debating the ethics of AI- but none of this matters here and now. speculation doesn't matter here and now.
you know what does matter here and now? getting qualified people on the line with suicidal kids.

an AI cannot be qualified to field suicidal DMs.

if you don't sound suicidal enough to the AI, you'll have to wait for the machine to tell some unpaid volunteer it's your turn to live.
this is an appalling use of this appalling technology. a volunteer trained by AI, being told by AI which suicidal child is most efficient to speak with next. a complete callousness to compassion at every level, all in the name of 'growth'.
“We didn’t set out to and are not setting out to
design an AI system that will take the place of a
counselor, or that will directly interact with a
person who might be in crisis.”
-Dan Fichter to MIT Technology Review, Feb 2021
funny thing about "setting out" is, eventually-
For the Trevor Project, someone reaching out via
text or chat is met with a few basic questions such as
“How upset are you?” or “Do you have thoughts of suicide?”
From there, Google’s natural language processing model
ALBERT gauges responses, and those considered at a
high risk for self-harm are prioritized in the queue to
speak with a human counselor.
-you arrive somewhere, don't you?
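and if you're wondering what 'prioritized in the queue' actually cashes out to, mechanically, it's roughly this. a minimal sketch, where the scoring function is a keyword dummy standing in for the fine-tuned ALBERT model (which i obviously don't have), and every name, keyword, and threshold is mine, not theirs:

```python
# toy sketch of risk-based triage: score the intake answers, then hand the
# conversation the model rates as highest risk to the next free counselor.
# risk_score() is a dummy stand-in for the real NLP classifier; only the
# queueing logic is the point here.
import heapq
import itertools

def risk_score(answers: list[str]) -> float:
    """Stand-in scorer: the real system feeds the intake answers to a
    fine-tuned language model and gets back a risk score."""
    text = " ".join(answers).lower()
    hits = sum(word in text for word in ("suicide", "plan", "pills", "tonight"))
    return min(1.0, 0.2 + 0.2 * hits)

class TriageQueue:
    """Max-priority queue: highest predicted risk gets the next counselor."""
    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # tie-breaker keeps FIFO among equal scores

    def add(self, youth_id: str, answers: list[str]) -> None:
        heapq.heappush(self._heap, (-risk_score(answers), next(self._order), youth_id))

    def next_youth(self) -> str:
        _, _, youth_id = heapq.heappop(self._heap)
        return youth_id

queue = TriageQueue()
queue.add("youth_a", ["pretty upset", "no"])
queue.add("youth_b", ["i have a plan for tonight", "yes"])
print(queue.next_youth())  # "youth_b" jumps the line; "youth_a" keeps waiting
```

everything downstream of that score is just a sorted line. whoever the model under-scores waits at the back of it.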
3: Use A Suicidal AI Child To Train Unpaid Volunteers

this is riley. they're having one of the worst days of their life, by design.
they are an AI chatbot which used, at least for a time, GPT-2 (yes, the one trained on reddit links) as a base.
next, The Trevor Project trained it on transcripts of older semi-scripted roleplay exercises by counselors helping to train one another.
to avoid the 'What If The AI Learns Racism?' problem (the one Trevor has their white tech dude checking The Data on weekly), Riley was narrowed to just this task.
the suicidal AI child robot only knows how to be a suicidal AI child robot.
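and for the record, 'trained it on transcripts' is less mystical than it sounds. something in this ballpark, where the transcript file, the hyperparameters, and the prompt format are all placeholders i made up, since the actual training recipe isn't public:

```python
# rough sketch of how you get a "Riley" out of GPT-2: keep training the base
# model on your own role-play transcripts until it talks like them.
# "roleplay_transcripts.txt", the hyperparameters, and the prompt format are
# hypothetical placeholders, not The Trevor Project's actual setup.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# hypothetical corpus: one counselor role-play transcript per line,
# e.g. "Counselor: ... Riley: ..."
with open("roleplay_transcripts.txt") as f:
    lines = [line.strip() for line in f if line.strip()]
encodings = [tokenizer(line, truncation=True, max_length=256, return_tensors="pt")
             for line in lines]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    for enc in encodings:
        # causal language modeling: the labels are the input ids themselves
        loss = model(input_ids=enc["input_ids"], labels=enc["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# afterwards the model completes "Riley:" turns in the style of the transcripts
model.eval()
prompt = tokenizer("Counselor: Hi Riley, how are you feeling tonight?\nRiley:",
                   return_tensors="pt")
generated = model.generate(**prompt, max_new_tokens=40, do_sample=True, top_p=0.9,
                           pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

the narrower the training corpus, the more the model can only ever sound like that corpus; which is the whole 'only knows how to be a suicidal AI child robot' point.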
“Emulating youth language really does feel genuine.
I think, now, the model might do a better job of
that than the adult staff.”
-Jen Carter, Global Head of Tech
and Volunteering at Google, 2021
look- i'm not flatly against the concept of crisis simulators. and for LGBTQ youth mental health, the only shape it can really take is a convincing-sounding voice of a child in crisis.
but remember, all of this wasn't done to better train counselors; growth is the primary goal, don't you recall? Riley was developed with the explicit intention to train more volunteers, to get them on the phone faster. same with drew.

oh yeah, drew is the other suicidal child robot
“Starting from the first conception of the
Crisis Contact Simulator two years ago, it
has always been our hope to develop a variety
of training role-play personas that represent the
diverse experiences and intersectional identities
of the LGBTQ young people we serve, each with
their own stories and feelings.”
-Dan Fichter, Head of AI and Engineering
at The Trevor Project, 2021
can we stop talking about this like it's a product launch?
“This project is a perfect example of how we can
leverage industry gold-standard technology innovations
and apply them to our own life-saving work. I’m so proud
of our dynamic technology team for developing tools that
directly support our mission, while also creating a new
paradigm that can set an example for other mental health
and crisis services organizations.”
-Amit Paley, CEO & Executive Director
of The Trevor Project, 2021
can we stop, turning suicidal ideation into products?
The organization currently employs a technology team of
more than 30 full-time staff dedicated to product development,
AI and machine learning, engineering, UX,
and technology operations.
Looking ahead, The Trevor Project intends to continue
exploring technology applications to grow its impact
by investing in new tools to scale.
-The Trevor Project, 2021
CAN WE FUCKING STOP THIS?!
4: Pay The Fewest People Possible At Every Stage

who supervises this damned training, anyways?
the Training Coordinators, of course. who are they? it could be you! after all, they're literally always hiring
Using our online learning platform, Training Coordinators
provide structured support and expectations for volunteers,
deliver clear and compassionate feedback, and promote
volunteer success through rigor and kindness.
Please note: Because the Training Coordinator role is
mission-critical to our organization and because we employ
a large number of these positions, we interview for this role
even when we don’t have a currently open position.
you, too, can be thrown on a pile of resumes to hopefully someday be underpaid and overworked in the name of growth
oh yeah and help some kids or whatever
so the chatbot trains the volunteers, overseen by exhausted coordinators, to gain the "important soft skills" and "world-class crisis intervention training" promised by Trevor, and then?
and fucking then?

make them do the heart-breaking work.
make them, your unpaid volunteers, into a marketable demographic for little white pills.
did Purdue teach you that, amit? or did the AI help you learn something new?