
I always get the "Potential NSFW content was detected in one or more images. A black image will be returned instead. Try again with a different prompt and/or seed." error when using Stable Diffusion, even with the example code given on Hugging Face:

import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

model_id = "CompVis/stable-diffusion-v1-4"
device = "cuda"
token = 'MY TOKEN'


pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision="fp16", use_auth_token=token)
pipe = pipe.to(device)

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt, guidance_scale=7.5).images[0]

image.save("astronaut_rides_horse.png")
– Niklas Mohler
    This did the trick for me https://www.reddit.com/r/StableDiffusion/comments/wv2nw0/tutorial_how_to_remove_the_safety_filter_in_5/ – PlainRavioli Sep 23 '22 at 13:02
  • I don't really want to disable the NSFW filter. I'm just asking if I messed up somewhere with the installation, because I always get that error with any given prompt. – Niklas Mohler Sep 26 '22 at 10:24
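
One way to confirm that the filter (rather than a broken install) is producing the black image is to inspect the nsfw_content_detected flag the pipeline returns alongside the images. A minimal sketch, reusing the model, token placeholder, and prompt from the question:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
    revision="fp16",
    use_auth_token="MY TOKEN",
).to("cuda")

result = pipe("a photo of an astronaut riding a horse on mars", guidance_scale=7.5)
# nsfw_content_detected is a list of booleans, one per generated image.
if result.nsfw_content_detected and result.nsfw_content_detected[0]:
    print("Safety checker fired: the black image comes from the filter, not the install.")
else:
    result.images[0].save("astronaut_rides_horse.png")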

4 Answers


The pipeline exposes a single argument to disable it: safety_checker.

pipe = StableDiffusionPipeline.from_pretrained(
    model_id,
    safety_checker=None,
)

However, depending on the pipeline you use, you will get a warning if safety_checker is set to None while requires_safety_checker is True.

From pipeline_stable_diffusion_inpaint_legacy.py

if safety_checker is None and requires_safety_checker:
    logger.warning(f"...")

So you can do this:

pipe = StableDiffusionPipeline.from_pretrained(
    model_id,
    safety_checker=None,
    requires_safety_checker=False,
)

You can also change both attributes later on an existing pipeline:

pipeline.safety_checker = None
pipeline.requires_safety_checker = False
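
Put together, a minimal sketch, reusing the model ID from the question (the revision and auth-token arguments are omitted here for brevity):

import torch
from diffusers import StableDiffusionPipeline

# Loading with the checker disabled up front avoids the warning entirely.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
    safety_checker=None,
    requires_safety_checker=False,
).to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut_rides_horse.png")
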
– jemiloii

This covers a bit of what the checker does: https://vickiboykis.com/2022/11/18/some-notes-on-the-stable-diffusion-safety-filter/

If you want to simply disable it, you can now set the safety_checker argument to None (you no longer need to modify the library source):

pipe = StableDiffusionPipeline.from_pretrained(
    model_id,
    safety_checker=None,
)
– nullforce

Depending on your use case, you can also gut the run_safety_checker function in the pipeline source (pipeline_stable_diffusion.py for txt2img, or its img2img counterpart) so that it returns the images unchanged. You can alter the function like this:

def run_safety_checker(self, image, device, dtype):
    # Skip the NSFW check entirely and report no flagged concepts.
    has_nsfw_concept = None
    return image, has_nsfw_concept
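
If you would rather not edit the installed library, the same effect can be had by patching the method on the pipeline instance. A sketch, assuming the run_safety_checker(image, device, dtype) signature shown above (it has changed across diffusers versions, so check your local source):

# Replace the bound method on this instance only; the library source stays untouched.
pipe.run_safety_checker = lambda image, device, dtype: (image, None)
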
– buddemat

If you don't want to disable the NSFW check, try rephrasing the prompt to work around the false positive.

Without having tried it, I would suggest replacing "riding" with something more explicitly safe, like "sitting on the back of".
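
Since the error message itself suggests trying another seed, a related option is to retry the same prompt with different seeds until the checker passes. A sketch, assuming pipe is loaded as in the question (safety checker enabled):

import torch

prompt = "a photo of an astronaut sitting on the back of a horse on mars"
for seed in range(10):
    generator = torch.Generator("cuda").manual_seed(seed)
    result = pipe(prompt, guidance_scale=7.5, generator=generator)
    # Stop at the first image the safety checker does not flag.
    if not result.nsfw_content_detected[0]:
        result.images[0].save("astronaut_rides_horse.png")
        break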

– tripleee