people who create AI models don't actually want to train on other AI art because it creates a feedback loop
a small note to expand on this: yesterday I found out that Stable Diffusion (another AI art generator) silently embeds a watermark in all of its output, invisible to the human eye, so that it can detect images it created and prune them from its training input. it's likely that other model developers have at least considered doing this too, and one would assume that at some point they'd settle on a standardized way of doing it, since it's in all of their interests to be able to recognize generated output.
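for the curious, here's roughly what that looks like in practice. this is a minimal sketch using the open-source invisible-watermark package (the library Stable Diffusion's reference scripts use for exactly this); the payload string matches what those scripts embed, but the file names and the matching logic here are just placeholders for illustration:

```python
# minimal sketch: embedding and reading back an invisible watermark with the
# invisible-watermark package (pip install invisible-watermark opencv-python).
# the "StableDiffusionV1" payload mirrors SD's reference scripts; file names
# and the filtering logic are hypothetical.
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

payload = "StableDiffusionV1"

# embed: hides the payload bytes in the image's frequency domain (DWT+DCT),
# imperceptible to the eye but recoverable programmatically
bgr = cv2.imread("generated.png")
encoder = WatermarkEncoder()
encoder.set_watermark("bytes", payload.encode("utf-8"))
watermarked = encoder.encode(bgr, "dwtDct")
cv2.imwrite("generated_wm.png", watermarked)

# detect: a training pipeline could run this over candidate images and drop
# anything whose decoded payload matches a known model's signature
decoder = WatermarkDecoder("bytes", len(payload) * 8)  # length is in bits
recovered = decoder.decode(cv2.imread("generated_wm.png"), "dwtDct")
if recovered.decode("utf-8", errors="ignore") == payload:
    print("model-generated image detected; exclude from training set")
```

worth noting that this kind of frequency-domain watermark isn't robust to heavy cropping or recompression, which is part of why a shared standard would matter more than everyone rolling their own scheme.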