i've asked it to make instruction manuals for robot girls and tbh it's p great

me when i look at all the new pictures my computer has generated
these are really cool and zooming in on the text makes me want to make a mock writing system out of ai impressions of writing systems
I genuinely love how this gets at a strange mental object I am fully obsessed with: Drawings of text.
Not poorly rendered text, or strange writing systems, but text interpreted and rendered by an artist representing it without copying. The vibe of a letter without being a letter. The shape of a number without being a number. The flow of a 漢字 without being a 漢字; ideograms written by someone from a secret fourth country that had their own reforms in character shape.
It's such a fascinating thing to me.
can you have it generate a CT scan of an anime girl? see what's inside?
this is really neat, i remain fascinated by the sorts of things you can get sd to produce along these lines. seeing the prompts you’re using to get this sort of lorebook style is neat too.
on a more technical note: i can’t really tell from looking at its repo what differentiates it, but i’m curious why you’re using a separate upscaling extension vs the built-in automatic1111 SD upscale script. also hadn’t seen 7th layer before (i’ve mostly been fucking around with anything v3, which gives good results but has largely converged into a single art style).
i haven’t publicly posted any of the SD-fucking-around i’ve been doing but it’s neat to see other people’s work outside the bizarrely horny photoreal stuff.
The script is really tight-lipped about why it yields different results, but in my experiments you can safely drive the denoising higher than with latent upscale or GAN upscale: it makes up coherent details with a lot less deep frying, runs at reasonable speed, and leaves no visible seams despite cutting the image up into chunks.
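Roughly, the tile-and-rediffuse idea looks something like the sketch below. To be clear, this is a minimal approximation in Python with the diffusers library, not the extension's actual code; the checkpoint name, tile size, overlap, denoising strength, and the naive paste-back are all my own assumptions.

```python
# Minimal sketch of tiled img2img upscaling (not the extension's actual code).
# Assumptions: diffusers + an SD 1.5-class checkpoint; tile size, overlap, and
# denoising strength are illustrative values, and the paste-back is naive.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def tiled_upscale(image, prompt, scale=2, tile=512, overlap=64, strength=0.4):
    # cheap pixel-space resize first; the diffusion passes add the detail back in
    big = image.resize((image.width * scale, image.height * scale), Image.LANCZOS)
    out = big.copy()
    step = tile - overlap
    for top in range(0, big.height - overlap, step):
        for left in range(0, big.width - overlap, step):
            box = (left, top, min(left + tile, big.width), min(top + tile, big.height))
            chunk = big.crop(box)
            redone = pipe(prompt=prompt, image=chunk, strength=strength).images[0]
            out.paste(redone.resize(chunk.size), box[:2])  # the real script feathers the overlap
    return out
```

The point being that the model only ever works on one roughly-training-resolution window at a time, which is presumably why you can push the denoising harder without the whole image deep-frying.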
(I won't pretend I dislike the bizarrely horny stuff, but finding any good SD info means having to wade through so much of the mediocre tastes of 4chan coomers, at this point I think most deserve their heterosexuality license revoked, for the crime of being boring)
interesting, i’ll have to try playing around with it.
and yeah on the bizarrely horny front, for me it really is mostly the unoriginality. you have a computer that can generate any image you can describe and you spend all your time making up different permutations of "massive breasts." let's see something fresh
my working theory is that none of these people have the expertise or patience to get results out of the latent spaces with, how would you say it, more sparsely populated input data. like when the model has less to go on and doesn't know what you're talking about, everything about the image gets worse: there are those weird cigarette burns in the image and the limbs get all fucked up. with fewer prior examples to go on, or maybe examples that are less consistently described, everything is much closer to the stochastic noise floor.
so these boring normies end up just generating variations of images that there are literally millions of real examples for, but with bigger boobs since at least it knows how to do that in anime style based on hundreds of thousands of horny deviantarts
oh yeah i realize i forgot to mention. another thing about the upscale process is that "hires fix", as i understand it, means you get one roll of txt2img which doesn't get displayed or saved, then one roll of img2img, and you hope you got lucky twice. it's very much for people who want instant-satisfaction, pure-prompting results.
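in diffusers terms that two-roll flow looks roughly like this. it's a sketch of the idea as described, not automatic1111's actual implementation; the checkpoint, sizes, strength, and prompt are placeholder guesses.

```python
# Sketch of the two-roll "hires fix" flow: one txt2img at low res (never shown),
# a plain resize, then one img2img over the result. Placeholder checkpoint,
# sizes, and strength; not automatic1111's actual implementation.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

base = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
refine = StableDiffusionImg2ImgPipeline(**base.components)  # reuse the same weights

prompt = "lorebook page, ink illustration of a robot girl"  # placeholder prompt
seed = torch.Generator("cuda").manual_seed(1234)

low = base(prompt, width=512, height=512, generator=seed).images[0]  # roll 1: never displayed or saved
big = low.resize((1024, 1024))                                       # plain upscale in between
final = refine(prompt=prompt, image=big, strength=0.6).images[0]     # roll 2: hope you got lucky twice
```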
i have much more of a brute force approach: i like to generate batches of hundreds of candidates, so i use a rather low number of steps to be able to make a dozen pics per minute. that way, if the computer comes up with something i really like, i have a lot more options open.
then after that, i can experiment with how best to upscale it. if the base image is good, using latent upscale can improve it further, but it is particularly touchy, with denoising strength having hard thresholds where it ruins the picture (for my setup, below 0.5 = comes out as a blur, over 0.6 = body horror deepdream greebles). and if my base txt2img has great composition but needs heavy fixing (e.g., a horrible face), i can brute force a lot of img2img attempts with a normal (not latent) upscale at a rather high denoising strength, and pick the best out of the batch
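for concreteness, that two-phase workflow has roughly the shape below. this is a rough sketch assuming the diffusers library; the step counts, strengths, batch sizes, and the "keeper" filename are illustrative guesses, not anyone's actual settings.

```python
# Rough sketch of the two-phase workflow above, assuming the diffusers library.
# Step counts, strengths, batch sizes, and the keeper filename are illustrative.
import os
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

base = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
refine = StableDiffusionImg2ImgPipeline(**base.components)

prompt = "lorebook page, ink illustration of a robot girl"  # placeholder prompt
os.makedirs("candidates", exist_ok=True)
os.makedirs("attempts", exist_ok=True)

# phase 1: hundreds of cheap candidates -- a low step count keeps each roll fast
for seed in range(200):
    img = base(prompt, num_inference_steps=12, width=512, height=512,
               generator=torch.Generator("cuda").manual_seed(seed)).images[0]
    img.save(f"candidates/{seed:04}.png")

# phase 2: take a keeper, do a normal (non-latent) upscale, then brute-force
# img2img at a fairly high denoising strength and keep every attempt to compare
keeper = Image.open("candidates/0042.png").resize((1024, 1024), Image.LANCZOS)
for seed in range(30):
    fixed = refine(prompt=prompt, image=keeper, strength=0.65,
                   generator=torch.Generator("cuda").manual_seed(seed)).images[0]
    fixed.save(f"attempts/{seed:04}.png")
```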
gonna be honest most of my beef with machine learning stuff is generally like, spawned from the very... hollow? way Techbros™ use it. but used like this, to just Generate Stuff And See What The Machine Wants To Give You, it goes hard
my favorite genre of Journalism gotta be when they generate a fake photo of elon to put at the top of a post because it is hard to find a real one to use
"elon elon everywhere," i say, head slowly sinking beneath photos of the single techbroest man alive, "but not a drop to drink". the computer prints me out an image of a glass of water with a weird reflection. i thank it for its service as finally i too am claimed by the waves of musk
i think the "bad" thing about AI art is that techbros will use it and techbros using AI art is pretty shit, but thats because the techbros are shit. it fucking rocks that i can just ask the computer for tea, earl grey, hot and it will paint it
yeah like. i think the core concept of "this computer can generate cool logic breaking anime girls and weirdass environments and nonexistent people when asked!" totally rips, but like... techbros simultaneously dont guide it well enough for machine learning image gen to be a strange unique tool for creating and dont let it be free enough to just spit out what it thinks works from its simple thoughts yknow. they are not letting it be itself, they just want it to imitate something else
either that or what they want out of it is hot garbage from the get. which generally thats Tech Guys for you
i wonder if you could a) find the areas where nonsense text is and figure out size & color via a neural net and b) fill in the "gaps" with an LLM with the same prompt as the image
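a rough sketch of how that pipeline might fit together, using easyocr as the text-region detector and a hypothetical ask_llm() standing in for whatever language model you'd feed the image's prompt to. whether an OCR detector actually fires on latent gibberish, and how well any of this works, is an open question.

```python
# Speculative sketch: (a) detect where the model tried to write something and
# grab rough size/colour, (b) paint in real text an LLM wrote for the same
# prompt. easyocr is a real library; ask_llm(), the filenames, and the font
# path are hypothetical placeholders.
import easyocr
from PIL import Image, ImageDraw, ImageFont

def ask_llm(prompt: str) -> str:
    # hypothetical stand-in: call whatever LLM you like with the image's prompt
    return "FIELD MANUAL: ROBOT GIRL MAINTENANCE"

image = Image.open("lorebook_page.png")          # hypothetical input image
reader = easyocr.Reader(["en"])
draw = ImageDraw.Draw(image)

# (a) each detection comes back as (corner points, recognized text, confidence)
for box, _text, _conf in reader.readtext("lorebook_page.png"):
    (x0, y0), _, (x1, y1), _ = box               # top-left and bottom-right corners
    height = max(int(y1 - y0), 10)
    color = image.getpixel((int(x0), int(y0)))   # crude colour guess from one corner
    # (b) blank out the glyph-shaped noise and write the LLM's text in its place
    draw.rectangle([x0, y0, x1, y1], fill="white")
    font = ImageFont.truetype("DejaVuSans.ttf", height)  # assumes this font is installed
    draw.text((x0, y0), ask_llm("instruction manual for a robot girl"), fill=color, font=font)

image.save("lorebook_page_relabelled.png")
```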
they're working on new models that can write text (and demonstrate it in the most exasperating way possible lol https://twitter.com/deepfloydai/status/1614436348101894144 ) but i don't want to lose the latent language, i want it to grow to acquire recognizable glyphs and a clear syntax
the last girl you wrote a bit about is wearing the future bladee merch. just something i thought i'd note among how incredible everything else here is
i shoulda pinged you when i noticed lol