staff
@staff

hi everyone,

today we are bringing you an update to our community guidelines, containing the promised missing stair policy and a new policy around AI-generated content. you can read the full guidelines at https://cohost.org/rc/content/community-guidelines.

both additional policies are below the cut. no changes have been made outside these two new sections.

we also have a new feature launching today, and a financial update coming on monday. keep your eyes out for those. we'll be making a small patch notes post to put in the cohost corner so these important changes don't get buried or brushed under the rug.


missing stair policy

there is value in being in the same place as everyone, but this value quickly diminishes when assholes crash the party.

in addition to the guidelines above, cohost reserves the right to remove anyone for reasons outside the rules under certain conditions. You may be banned from cohost if you are frequently reported for or found to be shitstirring, getting into arguments, or otherwise being routinely unpleasant to others.

this doesn’t mean you can’t get into an argument, say a snide remark, or get fired up about something. it does mean that if it becomes a habit you may face consequences. at first, we will offer warnings. actions taken will escalate from there if insufficient.

we still encourage you to use first-line responses, such as blocking, if someone is being disruptive. however, if you believe another user is demonstrating a pattern of being a pain in the ass, please send in a report.

it is not our intention to excessively police the tone of users, nor ignore the context in which things are said. when someone speaks out against injustice or wrongdoing, broadly or specifically, it is common for people to retaliate by attacking that person’s tone. as such, any policy seeking to root out assholes can be easily weaponized against marginalized people. applications of this policy will be context-dependent judgment calls with this in mind; we will take both the context of the conversation and the broader dynamics at play into account.


ai-generated content

you are allowed to post content produced by “off the shelf” generative models (“AI-generated” content) to cohost; however, there are additional restrictions you must follow.
  • if the entirety of your post is AI-generated, the post must be tagged as “AI-generated”;
  • if your post only contains some AI-generated content, it must contain a disclaimer before the AI-generated content. the “AI-generated” tag is recommended.

additionally, you may not:

  • post AI-generated content intended to mislead someone or spread misinformation;
    • e.g., an AI-generated voice clip mimicking an important political figure saying the Navy Seal copypasta would be fine. an AI-generated voice clip of an important political figure discussing a war would not.
  • post AI-generated pornography depicting real or living persons;
  • create a page solely to post AI-generated content;
  • solicit donations or payment for AI-generated content.


in reply to @staff's post:

create a page solely to post AI-generated content;

does this also count for e.g. "curated" pictures, e.g. if someone's talent is, say, generating horribly gross pictures of impossible foods (hello dukedonuts), is an account that is solely images of horrible ai generated impossible foods verboten, or is this more "don't just point a gibberish generator unfiltered at us"

e: to clarify i have no intent of posting horrible fucked up ai generated foods but it is a use case i am unfortunately familiar with

Okay, semantics. "Procedural" and "algorithm" just mean a set of instructions for computers. A procedure can be as simple as printing one line of text. "Generative AI" is a form of "procedural generation"; Stable Diffusion, for example, uses layers of computer-made visual noise as part of its workflow.

The difference with "web 3.0 AI" and "web 2.0 procedural generation" is the process, regulation, usage and how it's advertised by tech companies.

AI models like Generative Adversarial Networks (GANs) and Stable Diffusion rely on large amounts of training data and embedded string prompts for user input, while procedural generation setups, like Perlin or fractal noise, use functions or equations to assign pseudo-random values. These values can be input for other steps to create animation, textures, and so on. To reiterate: AI needs training data, while procedural generation is a component or standalone code.
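To illustrate the distinction: here is a minimal 1-D value-noise sketch in Python (a simplified stand-in for Perlin noise; the integer seed-mixing constant is arbitrary). Everything it outputs comes from a deterministic function, not from any training data:

```python
import math
import random

def value_noise(x, seed=0):
    """1-D value noise: deterministic pseudo-random values at integer
    lattice points, smoothly interpolated in between."""
    def lattice(i):
        # deterministic per-point value in [0, 1), derived from the seed
        # (the multiplier is just an arbitrary mixing constant)
        return random.Random(seed * 1_000_003 + i).random()
    i0 = math.floor(x)
    t = x - i0
    # smoothstep easing so the curve has no visible seams at lattice points
    t = t * t * (3 - 2 * t)
    return lattice(i0) * (1 - t) + lattice(i0 + 1) * t

# sample a short curve; the same seed always produces the same curve
samples = [round(value_noise(i / 4, seed=42), 3) for i in range(9)]
```

Layered at several frequencies, this same idea gives you fractal noise for textures, terrain, and so on.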

Was this helpful?

not staff but I suspect no. markov chain techniques are way older than any of the current "generative AI" based on transformer models; those need massive amounts of stolen data to be remotely competent, while a markov chain usually only requires a relatively small corpus of text (I remember a twitter archive was all that was needed to make an "ebooks" bot of myself back in the 2010s)
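for anyone curious how small these things really are, here's a minimal markov chain text generator in Python (the tiny corpus string is just a stand-in for a real tweet archive):

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each run of `order` words to the list of words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=10, seed=None):
    """Walk the chain, picking a random recorded successor at each step."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        successors = chain.get(tuple(out[-len(key):]))
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat"
chain = build_chain(corpus)
print(generate(chain, length=8, seed=1))
```

every word it emits was literally in the corpus; it only ever remixes transitions it has seen, which is why an "ebooks" bot reads like a scrambled version of its one author.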

agreed, and i feel like ceding most of "any code that generates anything" to the AI hypelords (who have a vested interest in calling even ancient techniques part of their $$NewThing$$) would be a massive L in so many respects, and i trust staff to draw the line sensibly.

we think there is a relevant difference between something like a markov chain for jokes and image generation software you type Robe, massive breasts, Elsa portrait, wavy hair, Frozen, big breasts, illustration, concept art, digital painting, highly detailed, trending on artstation into

I'm really confused about what the AI rules are attempting to allow. What is the use case that prevents just having the rule be "no AI content"? I mean I can think of a few, but none that would lead to this exact series of restrictions--for example I myself wrote a post about the way Kagi's FastGPT pulls information from Cohost, so it of course included AI content as examples to show what it is pulling and how, so you might have a rule allowing examples of AI output used in broader discussion of AI. But the rules as written sound like the goal is to allow...a little bit of AI? But not too much? And bots are okay even if the output is indistinguishable between fill-in-the-blanks-style bots and LLM bots?

I'm just not sure what type of post these rules are carving out protections for as written. I'm sure there are good reasons to not just write "no AI at all" and walk away (again, like the example I gave) but these feel like they were written to allow for some kinds of posts and I can't picture what they'd be.

i think you deduced it pretty clearly. there are certain cases where examples might be used, of course.

in addition, as the technology progresses, what is and isn't "AI-generated" becomes very blurry; writing a rule that simply says "No AI" is barely enforceable and would make us spend endless hours litigating. discussion around this is already happening in these comments. it would become impossible to keep up with if we had to carry that into actual enforcement. by restricting our definition to off-the-shelf models and focusing on a few specific bad behaviors, we can create enforceable policy.

our goal with this policy is to cut down on the behaviors we think are clearly bad, giving us room to adapt the policy in the future as we see the technology progress, learn more about what works and what doesn't, and figure out what y'all want to see. it's less carving out specific things we should be allowed, and more shutting down things we think shouldn't be.

in short, this is a space where heavy-handed policy gains us very little at a heavy cost.

I agree a heavy handed policy falls into a lot of pitfalls.

What I have to wonder is like...if I generate a bunch of AI art then use that AI art to write a little story prompt underneath, doesn't the art still fall afoul of the reason people hate AI art in the first place? Because it's all built on stolen data? Even if I add my own work with it, I've still used the stolen data for my own gain. I don't know if the amount of the post that is AI makes sense as the delimiter.

Feel like the core of the problem with AI specifically is the sheer volume of no-effort garbage it allows losers to shit out all over the internet, with no mediating step where they contribute anything original themselves, not simply that they can't prove they've exhaustively cited and compensated their sources.

Like, a whole lot of people here are very open about pirating stuff, and they've almost certainly taken inspiration from a song/show/whatever they torrented at some point. Posting files would probably catch some heat, but nobody seems interested in ferreting out exactly where the my chemical romance mp3 someone was listening to as they drew their fursona came from

If you can't figure out how this relates to your hypothetical of someone writing stories inspired by AI pictures, I'm not going to further waste either of our time

Exactly this. Slop-spam and misinformation and like, deepfaking using people's likenesses are obviously unethical usecases and should be banned. But the positive reaction to this new policy makes me worried that a lot of people on this site like... think IP is real. Or worse, that copyright is legitimate.

The impression I get is that it's supposed to allow for someone who engages with Cohost in different ways which may include, at times, posting AI-generated content, while shutting down someone who would make an account that just posts Midjourney slop, with the distinction that the former is a "valid" if controversial way to participate in the platform and the latter is basically spam. But I agree that it's not super clear and I also haven't seen a lot of examples of the latter being a problem. Have other people had issues with AI spam on Cohost? Genuine question.

I have my three cents to add on this one:

e.g., an AI-generated voice clip mimicking an important political figure saying the Navy Seal copypasta would be fine.

I'd love to see "as long as it's properly tagged, following the other rule" appended to it at least for clarity, because not everyone knows the same jokes and copypastas as the person that's posting, and without proper tags it's no different than misinformation. Please, do not ever assume when writing any kind of community guidelines that there is a universal set of "obvious knowledge", e.g. "everyone knows this is a joke".

(sorry if my tone is too harsh, i have strong feelings on that specific issue - but overall i'm happy with this guidelines update!)

Agreed! I'm not familiar with the copypasta that's being referred to, so in the absence of other clues, this sort of thing might not be immediately obvious as AI-generated misinformation to me

That's a good question.
Responding to second sentence: I don't think i'd be able to explain my reasoning well enough, so let me just say - tagging jokes and satire is too much to ask from internet users (overall, not only on cohost) and it's not the issue worth fighting for compared to others. For me it'd be somewhat enough if a confused reader would feel comfortable asking for clarification and not get dismissed with "lol you don't get it" - that's all i ask for tbh

Back to question in the first half: But in the case mentioned in guidelines update - it's really more about impersonation than AI itself. It's not much different than voice-alike or skillful rotoscopy being used to put words into mouth of a public figure (or anyone else). It doesn't matter if it's using one tool or another to achieve the same effect.

So now that you mention it - yeah, it really should have been about impersonation rather than just AI.

i like this new policy on AI-generated stuff imo

just saying "AI isn't allowed" wouldn't cut it, since it's becoming increasingly impossible to distinguish what is AI and what isn't, and you can't just outright ban AI stuff without some people seeping through and/or others getting falsely warned or banned.

obviously, it's likely that some things might still slip through the cracks--there's no doubt about that. but i feel like this is a step in the right direction when it comes to combating AI, especially when people can do almost anything with it now.

I don't know that this is a good reason to ban/allow something. Most rules have stuff that will slip through the cracks. You can ban block evading, but people will still do it and you'll never be able to know they did it unless they out themselves as having done it. You can ban art theft, but someone can post someone else's art and pass it off as their own for who knows how long before someone figures it out. I don't think the fact that it can be hard to tell means you shouldn't have rules against doing it. I'm not saying the solution is to say "AI isn't allowed", I'm just against the idea that you shouldn't make rules against behaviors you don't want on the grounds that you can't have a 100% success rate detecting it.

You may be banned from cohost if you are frequently reported for or found to be shitstirring, getting into arguments, or otherwise being routinely unpleasant to others.

I'd recommend removing the "reported for" part of this?

Saying you'll deal with people who are causing problems makes sense, but otherwise it opens the door to the possibility that mass-reporting people will get them banned, and that's already weaponised around the internet.

Even if it's not how the staff intend for the rule to be applied behind the curtain, it might simplify life by reducing incidents where people TRY mass-reporting somebody.

idk, it just seems like what you have makes sense without invoking reports?

The staff are going to have a better impression of who is causing repeated problems than members of the community are, and whenever I've seen those lines blurred it's always gone badly.

The staff are going to have a better impression of who is causing repeated problems than members of the community are

That change is at least partially in response to a recent series of issues that arose because this is not in fact true. Would be convenient if it were, yeah

I've seen a number of posts discussing that a lot of the problem accounts/posts/replies had been reported some time ago and consistently since then, so I'm not sure that the recent changes are about not being aware of them?

I'm not really happy with AI-generated "content" being allowed this liberally, but I also acknowledge that the moderating effort involved in keeping it out entirely would be considerable and likely beyond what is manageable for the current team. I would still prefer if it was a little more clearly unwelcome.

i don't understand why it is allowed with restrictions. i dont understand why ""off the shelf" generative models" are allowed under any circumstance.

i don't think "funny meme" or "illustrative example" are worthwhile exceptions. Last i checked, to my knowledge, generative models like Stable Diffusion steal people's work. Did that change recently? Are prompters no longer stealing art anymore? Are they doing ethical AI now?

My reading is that this is to allow for inclusion of AI art either in a "journalistic/reporting" context (check out what these freaks are doing over here) or in an educational context (let's look at these AI art examples so we know how to identify AI art). And there are already a few of these kinds of posts already on cohost because people love having opinions :)

e.g.
https://cohost.org/rc/tagged/ai%20art
https://cohost.org/rc/tagged/AI%20slop

(by looking at these tags I think i found a report-worthy account! i guess not to be a hypocrite i'll have to go and report it now)

There's one specific post i remember but i can't find right now that was actually pretty long and well-researched, explaining how the OP teaches local journalists and news organizations how to clock AI art before they accidentally publish it, and it used a ton of examples in that kind of
"educational" context.

EDIT: found it! https://cohost.org/mtrc/post/4905509-fingers-and-toes-sp
i think this is a good post and it should be allowed to exist

in terms of precedent, this is the general idea of "Fair Use" as it exists in US law:

https://en.wikipedia.org/wiki/Fair_use

I don't actually care about US law any, except to say that the reason the US needs such a complicated and convoluted fair use doctrine is because they want ideas to be property, but it turns out that prevents a whole lot of normal and useful stuff that naturally happens in the real world.

I don't want ideas to be property (I'm a marxist, I don't like property :p), but likewise: I don't want AI slop but I do want people to be able to just look at things and comment on them as they see them, yeah?

And that's what I think the staff rules are meant to do here: protecting being social in the normal way without letting AI slop invade, or making the existence of AI slop mean you have to walk on eggshells whenever you just want to talk about things happening in the real world. It just doesn't call out these cases, cause, you know, actually doing that as the law does is really nasty and complicated and it sucks

i also dont want ideas to be property. i dont really know US law either, im not well-read, i dont know marx very well, he seems okay

i do want people to be able to talk about things and comment on them...within reason? do you think this argument applies to things that aren't strictly copyright related? like, any kind of objectionable content. aren't there lots of things we dont replicate directly on this here website even as we talk about them?

i also do not think artists not wanting their art to be plagiarized, because they need to eat, for example, is comparable to a business using IP law to prevent fair use of their work (often violently)?

in the end i guess i just dont understand why its impossible to just...not use Stable Diffusion and Midjourney and such specifically. Why is it impossible to just not use the big plagiarism machines?

there are a lot of problems with image generation and i'm not trying to defend it in general, but this line of attack isn't quite relevant here. note that there is generally not a rule against posting other people's copyrighted works wholesale, except as enforced by the government via the DMCA. you can't report someone's post because they pasted a Garfield strip into it. since strictly-worse IP theft of this kind is fine, these are probably not the grounds on which to argue against generated images.

i dont really want to attack, like, i just really want to understand because, im not a very smart or knowledgeable person, i just do art

so, i dont understand how the act of plagiarising a recognizable piece of media like Garfield (just taking your example) or any plagiarism at all being technically allowed because we can't really enforce it means that they in turn should allow on purpose a specific, large scale set of tools that we know take people's work without permission.

i dont understand how one person doing something vaguely illegal can be used to explicitly justify anyone using a known unethical tool

Because intellectual property is a fake concept that harms people's liberties as artists. Neither posting the garfield strip in a transformative way, nor using an algorithm to copy someone's style directly, are unethical or illegal (environmental costs notwithstanding)

that is bizarre to me

i dont get how anyone would be okay with someone impersonating them, plagiarising them, profiting off their work, making a large-scale tool to automate all this

let alone say its ethical

Ough I typed out a whole reply but it got consumed by the mobile refresh gesture so I guess I'll have to type this out again xd

Anyway, plagiarism is only a problem in informational contexts. Like, where it harms the reader's ability to judge the source and credibility of the information they're getting.

In the arts, copying is good and I think people should do more of it. Using algorithms to copy people is just one way of doing it (and not really to my tastes, but that's beside the point) and employs an undeniably transformative process even in the least transformative of use cases.

As for profiting off of it.... well, profit can be unethical for a lot of reasons. AI content mills are unethical, sure, but only as much as any other pre-2019 content mill! Having a computer do it doesn't make the whole affair somehow more evil as if it wasn't before.

And regarding impersonation... I don't think I said anything about that topic in my comment.

The scale and speed at which LLMs can churn out content increases their harms to such a degree that they're meaningfully more harmful than pre-LLM content farms.

Additionally, even if you view intellectual property as a fake, harmful concept (which I am sympathetic to in the abstract), in a world where it is enforced by the law the ability of only some privileged people to circumvent it for their own profit (LLM providers) is a particularly harmful inequality to the people whose work is being trained on and cannot circumvent it themselves.

Which, to be fair, may not mean it's directly unethical technically, but still uniquely harmful.

I agree with all of this! But the right way to deal with this inequality is through copyright abolition, not through calls for copyright enforcement. And the head of this particular comment thread is about how the technology should be prohibited wholesale because it's "stealing," which misidentifies the source of harm. If we ban it for spam concerns that's one thing, but we can't ban it due to "theft" concerns while still pretending to be serious about copyright abolition.

Wait what? I thought we established that IP theft isn't a real thing here; so what harm are you talking about? Do you believe that something such a IP theft actually exists and needs to be curbed through prohibitive actions, or are we talking about banning it for other reasons like spam and misinformation?

There are laws that enforce rules around a concept of "IP Theft" regardless of whether we consider it ethical or sensical. LLMs appear to not be subject to these laws that individual artists are, and that difference causes harm (individuals can't leverage the works of others in the ways LLMs do legally and either pay them or lose in competition for commissions and the like). Abolishing copyright would fix this, but making LLMs subject to the same laws as individuals would alleviate some of the harm in the interim since abolishing copyright is almost certainly going to take a long time.

They already are subject to the same laws as individuals. In order to use copyright law to come after them, you'd have to strengthen copyright law so much that it would make sampling and ytpmvs illegal. Or worse, allow entities to copyright entire styles. This is about the worst thing I can imagine for art in all of its forms. You say harm reduction, I say harm.

it's not that they can't really enforce it, it's that they don't want to. disallowing the unauthorized posting of copyrighted images would destroy any social media platform; it's the main thing they're used for.

in general, posting copyrighted images is not in itself something a platform like cohost can treat as an ethical problem. they can and should treat fraud as banworthy, whether that's passing work off as your own when it isn't, or trying to make money by selling someone else's work (without sufficient transformation) without their permission. but just "stealing" Garfield strips is never going to be actionable.

of course there's a complex line between mass-market stuff like garfield and, say, fanart by an indie artist trying to build a reputation. you can see in the "behaviors to avoid" section of the current community guidelines that cohost is drawing that line loosely at "posting people’s content without attribution" plus "if the creator asks you to remove your post, you should listen to them." this seems fairly reasonable to me.

so the sale of generated images is pretty ethically dubious, for the reasons you mention. (even that is complex, because they clearly are transformed, and you don't want to litigate it in a way that makes selling collages illegal. but it's also pretty clearly on the other side of a line, and i certainly hope we end up with laws that protect artists.)

but selling images (or using them in your for-profit videogame, etc) is very different from posting them on social media, the same way that posting a Garfield strip is different from selling bootleg collections of Garfield strips on Amazon. of course there's a new problem, which is that the creator can't meaningfully invoke the "asks you to remove your post" clause, because they can't know or prove that your generated image has their work in it.

i guess in typing this my opinion has evolved a bit, in that i do think that fact is conceivably consistent grounds for a more wide-reaching ban. but i still don't think it would be good policy, or that they'll try it, for several reasons:

  • many, many users simply do not see generated images as theft, and will feel personally insulted and angry if they are told that posting them is unethical. mandating that they be tagged is honestly an admirably aggressive stance in my view.

  • the harm to the creators whose work has been stolen is impossible to quantify, in a way that categorically distinguishes it from the "reposting fanart" case. there's just really no case to be made that you are harming someone's ability to grow their own audience by posting generated images that may have had their work as one of millions of inputs.

  • not all generated images are theft; there are models that only include work in the public domain or by people who have consented. it is impossible to prove (or in many cases even for the original poster to be aware) whether or not a given image was generated in this way.

the problem to me is allowing these specific tools: Stable Diffusion, Midjourney, etc.

like i also feel personally insulted and angry that these tools are used by anyone because i am an artist. like it feels like people dont give a shit about artists?

and yeah i agree that you can't argue that these LLMs impede one artist on here to grow their audience so not on those grounds, no.

just a general idea of like, do you need to use the plagiarism machines? these ones, that are so general and that are known to use stolen work?

because it is impossible to prove, and it is impossible for me to know, it should therefore not be given the benefit of the doubt. imo, like. this is why i asked, semi-rhetorically all the way up there, whether there was AI that didn't steal. because it would be difficult for me to argue against that, even if i dont like it

Why would you make a policy regarding AI content that isn't outright removing and banning it? The staff of this website, out of all people, should be aware there's no ethical or positive use of AI to begin with. This is honestly shameful behavior to have, much less to post as an update.

like the person above me said, that ai "steals" content isn't really a problem unique to ai. if someone wanted to post someone else's straight up actual original work and claim it as their own, there isn't really a rule against that. plus stock image companies are coming out with image generation models that they own all the rights to.

also, staff mentioned a few implementation hurdles to a total ai ban near the top of this thread.

as someone who would very much like to see OpenAI and its ilk annihilated, and who was using neural networks to produce elements of artworks as far back in 2016, i think this policy is good.

it disallows slop-spam, and allows people to filter out others noodling around with largescale generative AI output, while still allowing people to experiment using mathematical techniques that don't involve using someone else's API to access a model trained on The Entire Internet (and slurping up one of the smaller great lakes with a Lorax-style straw to do it)

i still need to figure out what the best way to train a GPT-2 model is these days. and the best way to train one intentionally wrong

yes, exactly this. I like when people play with NN and ML algos, and I'd try and join in too if models were small enough to work on my system in a reasonable amount of time. what I don't like is when people pass off commercial "AI" as if it is an answer engine, or when people pretend as if it is doing anything except boiling sixty rainforests to decide whether it's going to bullshit about your prompt based on fifteen years of 4chan and reddit posts, or use an equal amount of energy to produce "As a language model I cannot"-type corporate platitudes

I liked when a sub-five-megabyte program and model written in C# would download, run, and produce about 28 seconds of MIDI piano that almost sounds musical, "instantly" with a bunch of knobs and sliders to turn to "weight" the input. Something like that, it doesn't matter as much if it was trained on as large of a corpus - the model it produces is so vanishingly small, that nothing of the input except "vibes" tends to survive, as opposed to the current Plagiarism Machine, which can cough up libraries' worth of rights-laundered music, art and prose in a matter of minutes.

This! And also, sometimes I just wanna, like, laugh at the nonsense it spits out but the folks who want to outright ban it…idk it kinda feels like "okay so posting the content itself is a no-no, perhaps regardless of the context; is discussion of it also not allowed?"

Yeah it feels weird to create a policy explicitly targeting it. Has there like, been a problem with generative text/image spam on this website? If there is I certainly haven't noticed it. Not really sure what the goal is.

ban ai outright, ESPECIALLY shit generated by off-the-shelf 'models' all made with stolen art. this is some weak-ass shit.

glad to see yall finally doing something about the racism tho.

this seems like a worthy first version of the policy, and there's nothing better you can guarantee than that, so thanks for the work you've put into it. As someone who has had a firsthand look at content moderation decisions I'll be interested to see how it develops. Thanks for the thought and transparency!

honestly, i don't know why people are so upset about this site not fully banning AI outright. most social media sites don't have any sort of restrictions on generative AI at all, so cohost putting restrictions on AI already makes it far better than most other social media sites at all. also, i think these rules allow for much more nuance than a full ban on generative AI would.

Tbh I think AI disclosure is reasonable and banning it altogether is infeasible, so this seems good to me.

I understand that people tend to have heated opinions on this topic and so a lot of people are going to be upset no matter what decision you make, but this seems like the best way to balance not welcoming slop bots vs. not banning someone because they posted AI images in an arguably fair context.

I'd rather not see ANY AI content. It's trained on stolen work from artists who won't get paid. And while you can't solicit donations here, you can link to other places where you can solicit donations (like a Youtube or a Twitch,etc.)

Unless the person made it themselves from their own data. AI should be banned. Period.

But you gotta always allow the door to be open a crack so someone can stick their foot in, as you did with the whole "suggestive images of minors is allowed" BS in 2023. So the anime pedos can get their rocks off on sexualized images of children... "as long as they aren't explicit wink wink"
Here you are allowing people to post AI shit that will totally skirt the limit of the rule.

Not exclusively. It's also a site where people talk and write about things they're thinking about, studying, worried about. Including generative neural networks. And discussing a thing without including examples of what you're talking about is absurd to the point that not even the US has allowed corporate copyright to squash it.

I overall really like the AI policy, but I do have a concern, and it's a pretty big one.

"You may not create a page solely to post AI-generated content"

There are legitimate reasons to do this. People I know on Twitter have received brutal harassment, to the point of stalking and actionable threats of violence and doxxing, for tagging their (high-effort) AI-assisted content. This harassment often took place off-platform, under sufficient opsec to avoid being associable with the harassers' identities on the platform. This harassment leveled off significantly when they stopped tagging their work (something they wish they didn't have to do.)

Of course, Cohost is pretty good about punishing abusive users, from what I've experienced. But at this level of adversary, there's not much one can expect Cohost staff to do.

I have no problem with mandatory tagging of AI content. But the other option to stem this harassment would be to distance oneself from said content. While the desire to not have pages that post heaps and heaps of sludge is understandable, people who want to find community for content they put significant effort into are left with little choice under the current policy but to accept this as a potential consequence.

If this point can't be removed, some guidance might help (what constitutes "a page solely to post AI-generated content"? is discussing one's posts sufficient? is discussing the tech with others sufficient?)

I also think some other definitions (to what degree must an AI model be involved with a work to fit the requirements of the AI policies?) might also help, in the general sense.

I appreciate that staff are, apparently, trying to throw a bone, but there's some misunderstanding about the overlap of "people who are interested in genAI" and "people who are interested in Cohost" that mostly comes from misrepresentations/generalizations of the former.