Every time I try to publish a picture with the term "Maine coon cat" embedded in its prompt, it is auto-flagged as a violation because of "coon."
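My guess is that the filter does a naive substring or whole-word match against a blocklist. A minimal sketch of a fix (all names here are hypothetical, not the site's actual code): mask known-benign phrases before running the blocklist check, so breed names like "Maine Coon" pass while the standalone term is still caught.

```python
import re

BLOCKLIST = ["coon"]        # hypothetical blocked term
ALLOWLIST = ["maine coon"]  # hypothetical benign phrases containing it

def is_flagged(prompt: str) -> bool:
    text = prompt.lower()
    # Mask known-benign phrases first so their substrings can't trip the filter.
    for phrase in ALLOWLIST:
        text = text.replace(phrase, " ")
    # Then check for the blocked term as a standalone word.
    return any(re.search(rf"\b{re.escape(term)}\b", text) for term in BLOCKLIST)

print(is_flagged("Maine coon cat, fluffy, studio lighting"))  # False
print(is_flagged("a coon by the fence"))                      # True
```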
Similarly, when an image is rated 13+, it is occasionally auto-flagged as a violation because it is misidentified as depicting a sexualized minor.
Currently, the only recourse is to accept the assessment and let the image be deleted automatically. (In both cases, I can resubmit with no embedded prompt information and then add it manually without issue, which seems inconsistent.)
I propose a way to submit the image for moderator review, where a human can verify that the flag was a false positive. I imagine this would work similarly to the existing option to flag already-published images that are in violation but have not been caught by the automated system.
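To make the idea concrete, here is a rough sketch of the appeal flow I have in mind (the names and structure are purely illustrative, not a claim about how the backend works): instead of auto-deleting, the flagged image is parked in a review queue until a moderator upholds or overturns the flag.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Verdict(Enum):
    PENDING = auto()
    FALSE_POSITIVE = auto()  # moderator overturns the flag; image publishes
    UPHELD = auto()          # flag confirmed; image stays removed

@dataclass
class Appeal:
    image_id: str
    auto_flag_reason: str
    verdict: Verdict = Verdict.PENDING

review_queue: list[Appeal] = []

def submit_appeal(image_id: str, reason: str) -> Appeal:
    # Instead of deleting outright, hold the image for human review.
    appeal = Appeal(image_id, reason)
    review_queue.append(appeal)
    return appeal

def resolve(appeal: Appeal, false_positive: bool) -> None:
    # A moderator records the outcome of the review.
    appeal.verdict = Verdict.FALSE_POSITIVE if false_positive else Verdict.UPHELD
```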
Awaiting Dev Review
💡 Feature Request
Over 2 years ago

AstroTibs