catball

Meowdy Pawdner

  • she /they

pictures of my rats: @rats
yiddish folktale bot (currently offline): @Yiddish-Folktales

Seattle area
trans 🏳️‍⚧️ somewhere between (30 - 35)


Personal website
catball.dev/
Mastodon (not sure if I'll use this)
digipres.club/@cat
Pillowfort (not sure if I'll use this)
www.pillowfort.social/catball
Monthly Newsletter (email me to join)
newsletter AT computer DOT garden
Monthly Nudesletter (18+ only, email me to join)
nudesletter AT computer DOT garden
Rat Pics (placeholder, will update)
rats.computer.garden/
Website League main profile
transgender.city/@cat
Website League nudes profile
transgender.city/@hotcat
Website League rat pics
transgender.city/@rats

if you're trying to retrieve information, but you have to do it in natural language, you're likely1 going to be prone to putting a lot of presuppositions into your questions, giving the bot a good signal of what traits you possess (compared to plain keyword queries, which could just reflect curiosity about a subject)

for example

  • a search query like "husky diet" only tells the model that you might have some interest in what huskies eat, but
  • "hey Chatbot, what kind of food is healthy for my husky" gives a much stronger signal that you own a husky and will have continued interest in buying husky-related things

(and the bad for privacy bit being that now the advertiser has a more accurate model of facts about you, which they could potentially sell / share / have stolen)
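the husky comparison above can be sketched as a toy presupposition-trigger check. this is purely illustrative (the regex, function name, and trigger list are made up for this post, not anything a real ad system uses): possessive phrases like "my husky" presuppose ownership, while bare keyword searches usually don't contain them.

```python
import re

# Hypothetical sketch: possessive phrases in conversational queries act as
# presupposition triggers that leak facts about the asker. A single "my X"
# pattern is obviously a gross simplification of real presupposition detection.
POSSESSIVE_TRIGGER = re.compile(r"\bmy ([a-z]+)\b", re.IGNORECASE)

def leaked_presuppositions(query: str) -> list[str]:
    """Return nouns the asker presupposes they own/have."""
    return [m.group(1).lower() for m in POSSESSIVE_TRIGGER.finditer(query)]

print(leaked_presuppositions("husky diet"))
# → [] — the keyword query leaks nothing

print(leaked_presuppositions("hey Chatbot, what kind of food is healthy for my husky"))
# → ['husky'] — the conversational phrasing presupposes husky ownership
```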

this post also brought to you by thinking about "Which Linguist Invented the Lightbulb? Presupposition Verification for Question-Answering" by Kim et al. 2021


  1. I haven't gone and found data to support this, but I'll betcha posing something in the form of a natural language conversation will make it more likely for an interlocutor to pose something in the form of a question with presuppositions that end up implying facts about themselves, compared to search queries



in reply to @catball's post:

yeah cynically I've been wondering if all those "The AI is my girlfriend" / "the AI is the best therapist I've ever had" articles are just fakes trying to get people to conceptualize that use.

but I do wonder at what point information retention becomes HIPAA-regulated, like at some point you could pump it full of radioactive PII presumably

I'm honestly not too surprised that people get attached to their chatbot gf/therapists, given Weizenbaum's ELIZA, but it's still bad

good question about HIPAA and PII. I don't know how those regulations work exactly, but that would be a nice way to pressure megacorps into gathering less data. Then again, big companies can just make / already have a HIPAA-compliant data store, and stuff like Apple Health gathers tons of PII/HIPAA-covered data that you sign away in their ToS, so maybe it's just another checkbox ToS waiver of rights