Prompt optimizer

Content Guard uses an LLM to analyze screenshots from your devices. Detection accuracy depends heavily on how you write your prompts. This guide explains how to write clear, effective prompts that produce reliable results.

How prompts work

When Content Guard evaluates a screenshot, it sends all your active prompt-based items together in a single request to the AI model. The model analyzes the image against every prompt and returns a match/no-match result for each one.

Because multiple prompts are evaluated simultaneously, each prompt must be self-contained and unambiguous so the model can process it correctly.
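The batched evaluation described above can be pictured as a single call carrying every active prompt. The sketch below is purely illustrative: the `evaluate` function, its payload shape, and the result format are hypothetical stand-ins, not Content Guard's actual API.

```python
# Hypothetical illustration of batched prompt evaluation.
# evaluate() and its field names are invented for this sketch;
# they are not Content Guard's real interface.

def evaluate(screenshot: bytes, prompts: list[str]) -> dict[str, bool]:
    """Stand-in for the single AI request: every active prompt-based
    item is sent together, and the model returns one match/no-match
    flag per prompt."""
    # In the real product the model inspects the image; here we
    # simply report no matches so the sketch is runnable.
    return {p: False for p in prompts}

active_prompts = [
    "fast food brand logos or branding (e.g. McDonald's, Burger King)",
    "error messages, crash dialogs, Windows blue screen",
]

results = evaluate(b"<screenshot bytes>", active_prompts)
for prompt, matched in results.items():
    print(prompt, "->", "match" if matched else "no match")
```

Because every prompt rides in the same request, a vague or negated prompt can degrade results for the whole batch, which is why the formatting rules below matter.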

Prompt format rules

Describe what to look for, not what to avoid

The most important rule: write prompts as short descriptive labels, not as instructions or sentences with negation.

Bad prompts — do NOT use this format
  • "I do NOT want to see any McDonald's, Burger King, or Wendy's logo or branding anywhere on the screen."
  • "There should NOT be any error messages, crash dialogs, Windows blue screen, or 'page not found' messages visible on the screen."

Negations, instructions, and full sentences confuse the AI model and lead to unreliable or missing results.

Good prompts — use this format instead
  • "fast food brand logos or branding (e.g. McDonald's, Burger King, Wendy's, KFC)"
  • "error messages, crash dialogs, Windows blue screen, or 'page not found' messages"

Short, descriptive phrases that name the visual elements to detect.

Rules at a glance

  • Use descriptive noun phrases. The model looks for what you describe: tell it what, not what to do.
  • Avoid negations (NOT, don't, should not). Negation confuses the model and produces inconsistent results.
  • Avoid full sentences and instructions. The model is not following instructions; it is matching visual elements.
  • Give concrete examples. Use parenthetical examples like (e.g. McDonald's, KFC) to anchor detection.
  • Keep prompts short and focused. One prompt = one concept; split unrelated concerns into separate items.
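The rules above are mechanical enough to check automatically. Here is a minimal, assumed-naive linter sketch; the heuristics (a negation regex, a trailing-period check, a word-count cap) are rough illustrations of the rules, not part of Content Guard itself.

```python
import re

# Rough heuristics for the rules above; invented for illustration.
INSTRUCTION_NEGATION = re.compile(
    r"\b(do not|don't|should not|must not|never)\b", re.IGNORECASE
)

def lint_prompt(prompt: str) -> list[str]:
    """Return warnings for a prompt-based item that breaks the rules."""
    warnings = []
    if INSTRUCTION_NEGATION.search(prompt):
        warnings.append("contains negation: describe what to detect instead")
    if prompt.rstrip().endswith("."):
        warnings.append("reads like a full sentence: use a short noun phrase")
    if len(prompt.split()) > 15:
        warnings.append("very long: split unrelated concepts into separate items")
    return warnings

print(lint_prompt("I do NOT want to see any McDonald's logo on the screen."))
print(lint_prompt("fast food brand logos or branding (e.g. McDonald's, KFC)"))
```

A check like this catches the worst offenders, but it is no substitute for reviewing prompts by hand against the examples below.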

Practical examples

Below are real-world examples of prompts that have been tested and produce consistent results.

Nudity & inappropriate content

Prompt:

topless, nudity, body parts

Detects: NSFW content, exposed body parts, topless imagery.

Sensitive & political content

Prompt:

political headwear, campaign apparel, election attire, partisan slogans, political symbols, extremist iconography

Detects: political campaign imagery (e.g. "Make America Great Again" hats), extremist symbols, flags associated with political movements.

Brand logos

Prompt:

fast food brand logos or branding (e.g. McDonald's, Burger King, Wendy's, KFC)

Detects: any visible fast food brand logo or branded packaging on screen.

Error states

Prompt:

error messages, crash dialogs, Windows blue screen, or "page not found" messages

Detects: application crashes, OS-level error screens, browser error pages.

Common mistakes

  • Writing instructions instead of descriptions. "Make sure there is no logo on screen" fails because the model does not follow orders. Fix: "company logos or branding".
  • Using negation. "Do NOT show error dialogs" causes unreliable detection. Fix: "error dialogs".
  • Combining unrelated concepts. "nudity, error screens, and brand logos" is too broad and hard to act on. Fix: split into 3 separate Content Guard items.
  • Being too vague. "bad content" is a subjective term the model cannot interpret. Fix: "explicit imagery, violence, profanity".

Tips for reliable detection

  1. One concept per prompt. Create separate Content Guard items for unrelated detection targets. This also lets you assign each to the correct category (desired / undesired).
  2. Use the correct category classification. Pair your prompt with the right category — mark brand logos as Undesired if they should not appear, or Desired if they should.
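Taken together, the two tips amount to keeping each detection target as its own item with its own category. A hypothetical configuration sketch, where the item structure and field names ("prompt", "category") are invented for illustration and are not Content Guard's actual data model:

```python
# Hypothetical sketch: one concept per item, each paired with a
# category. Field names are invented for this illustration.
content_guard_items = [
    {"prompt": "fast food brand logos or branding (e.g. McDonald's, KFC)",
     "category": "undesired"},
    {"prompt": "error messages, crash dialogs, Windows blue screen",
     "category": "undesired"},
    {"prompt": "note-taking or study app interface",
     "category": "desired"},
]

for item in content_guard_items:
    print(f'{item["category"]:>9}: {item["prompt"]}')
```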
Quick Tip

Not sure where to start? Use the AI Helper — upload a sample screenshot and let the system generate a prompt for you. Then review and simplify it following the rules above.