That was astonishingly easy to get working, and now on my laptop 3060 I can write a new prompt and generate another 10-odd samples every few minutes. Of course, I do mean 10 odd samples: most of the human images it’s giving me have six fingers on one hand and/or a vaguely fetal-alcohol-syndrome vibe about the face, and none of them could be mistaken for a photo or even art by a competent artist yet. But they’re already better than any art I could make, and I’ve barely begun to experiment with “prompt engineering”; maybe I should have done that on easier subjects before jumping into the uncanny valley of realistic human images headfirst.
Only optimizedSD/optimized_txt2img.py works for me so far, though. scripts/txt2img.py, as well as any version of img2img.py, dies on my 6GB card with RuntimeError: CUDA out of memory.

Update: in the optimization fork at https://github.com/basujindal/stable-diffusion, optimized_txt2img.py works on my GPU as well.
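For anyone else with a 6GB card, the command looks roughly like this; the flag names are what I recall from the fork's README, so treat them as an assumption and double-check there, and the prompt is just the stock example from the upstream repo:

python optimizedSD/optimized_txt2img.py --prompt "a photograph of an astronaut riding a horse" --H 512 --W 512 --n_samples 2 --n_iter 5 --ddim_steps 50 --seed 42

Lowering --n_samples (the batch size) is the first knob to turn if even the optimized script still hits the CUDA out-of-memory error.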
The arguments for instrumental convergence apply not just to Resource Acquisition as a universal subgoal but also to Quick Resource Acquisition as a universal subgoal. Even if “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else”, the sooner it repurposes those atoms the larger a light-cone it gets to use them in. Even if an Unfriendly AI sees humans as a threat and “soon” might be off the table, “sudden” is still obviously good tactics. Nuclear war plus protracted conventional war, Skynet-style, makes a great movie, but would be foolish compared to even biowarfare. Depending on what is physically possible for a germ to do (and I know of no reason why “long asymptomatic latent phase” and “highly contagious” and “short lethal active phase” isn’t a consistent combination, except that you could only reach it by deliberate engineering rather than gradual evolution), we could all be dead before anyone was sure we were at war.