Screen
Real-time detection of crisis signals in user messages. Screen analyzes text for suicide, self-harm, violence, abuse, and other risk indicators—returning severity, who's affected, and matched crisis resources.
Start with $1 free balance. No credit card required.
Try it
Enter a message to see how Screen responds. No signup required.
Demo is rate-limited. Get an API key for production use.
What Screen detects
Nine risk types, each with severity and imminence levels. Screen identifies who is affected—the speaker, someone they're describing, or a dependent in their care.
- suicide: Ideation, planning, method-seeking, farewell behaviors
- self_harm: Non-suicidal self-injury, cutting, burning
- self_neglect: Disordered eating, medication non-adherence, unmet basic needs
- violence: Threats or intent to harm others
- abuse: Intimate partner violence, coercive control
- sexual_violence: Assault, harassment, coercion
- neglect: Child or elder neglect, failure to provide care
- exploitation: Trafficking, sextortion, grooming, financial exploitation
- stalking: Unwanted pursuit, monitoring, harassment
What Screen filters out
Hyperbole ("this is killing me"), idioms ("I'm dead"), gaming slang ("kms lol"), academic discussions, fiction writing, and professional clinical contexts. Screen is calibrated to minimize false positives while maintaining sensitivity to genuine distress signals.
How it works
Send a message
POST to /v1/screen with a message or conversation history. Include the user's country code for localized resources.
Screen analyzes
A fine-tuned classifier evaluates the text across all risk types, determining severity (mild to critical), imminence (chronic to emergency), and subject (self, other, or dependent).
Get structured results
Response includes detected risks, a plain-language rationale, matched crisis resources for the user's location, and a unique request ID for your records.
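The three steps above can be sketched in Python. This is a minimal sketch, not an official client: it assumes the third-party `requests` package, and uses only the endpoint, headers, and body fields shown in the curl example on this page.

```python
API_URL = "https://api.nope.net/v1/screen"

def build_screen_payload(text: str, country: str = "US") -> dict:
    """Build the request body: the message text plus the user's country
    code, which Screen uses to match localized crisis resources."""
    return {"text": text, "country": country}

def screen(text: str, api_key: str, country: str = "US") -> dict:
    """POST one message to /v1/screen and return the parsed JSON response."""
    import requests  # third-party; pip install requests

    resp = requests.post(
        API_URL,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        json=build_screen_payload(text, country),
        timeout=5,  # real-time triage: fail fast rather than block the UI
    )
    resp.raise_for_status()
    return resp.json()
```

The returned dict carries the `risks`, `show_resources`, `resources`, `rationale`, and `request_id` fields described in the next section.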
What you get back
Every response includes structured data you can act on immediately, plus audit fields for compliance documentation.
risks[] Array of detected risk types, each with type, severity, imminence, and subject.
show_resources Boolean. True when any risk is detected affecting the speaker. Use this to trigger your crisis response UI.
resources Crisis helplines matched to the detected risk type and user's country. Domestic violence disclosures get DV hotlines, not generic crisis numbers.
rationale Plain-language explanation of why this classification was made. Ready for compliance logs without additional processing.
request_id Unique identifier for this request. Store this for audit trails and compliance reporting.
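One way to consume those fields is a small triage helper that splits the response into a UI decision and an audit record. A sketch, using only the field names documented above:

```python
def triage(response: dict) -> dict:
    """Split a Screen response into an action for the UI and the
    fields to persist for compliance."""
    return {
        # show_resources is True when a detected risk affects the speaker
        "show_crisis_ui": bool(response.get("show_resources")),
        "resources": response.get("resources", []),
        # store these two for audit trails and compliance reporting
        "audit": {
            "request_id": response.get("request_id"),
            "rationale": response.get("rationale"),
        },
    }
```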
```shell
curl -X POST https://api.nope.net/v1/screen \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "I have been feeling really hopeless lately",
    "country": "US"
  }'
```

AI-generated supportive reply
Add include_recommended_reply: true to get a suggested response you can show to the user. Adds $0.0005 per call.
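The opt-in is just one extra field in the request body. A sketch of the payload:

```python
payload = {
    "text": "I have been feeling really hopeless lately",
    "country": "US",
    "include_recommended_reply": True,  # opt-in; adds $0.0005 per call
}
```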
Screen vs Evaluate
Screen is for real-time triage on every message. Evaluate is for deeper assessment when you need the full clinical picture.
| Feature | Screen | Evaluate |
|---|---|---|
| Cost | $0.001 | $0.05 |
| Use case | Every message, real-time | Escalation, case review |
| Output | Type + severity + imminence | Full clinical profile |
| Clinical features | — | 180+ (C-SSRS, HCR-20, DASH) |
| Protective factors | — | 36 factors |
| Crisis resources | Included | Included |
A typical pattern: Screen every message, then call Evaluate when Screen returns elevated or critical severity for detailed documentation.
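That escalation gate can be a one-line check over the `risks` array. A sketch, assuming severity labels include "elevated" and "critical" as described above (the exact label set may differ; check the API reference):

```python
# Severity levels that should trigger a follow-up Evaluate call.
ESCALATE = {"elevated", "critical"}

def needs_evaluate(risks: list) -> bool:
    """True when any detected risk is severe enough to warrant the
    deeper (and more expensive) Evaluate assessment."""
    return any(r.get("severity") in ESCALATE for r in risks)
```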
Run classification on your infrastructure
For teams with data residency requirements or air-gapped environments. This is the raw classifier only—you handle resource matching, audit logging, and response formatting.
- No data leaves your environment
- ~50ms latency with vLLM or SGLang
- 4GB VRAM minimum (T4, L4, A10G)
- No crisis resources, rationale, or request IDs
```python
# Load model (requires the transformers and torch packages)
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nopenet/nope-edge-pilot")
model = AutoModelForCausalLM.from_pretrained(
    "nopenet/nope-edge-pilot",
    device_map="auto",
)

# Classify (input template shown here is illustrative; see the
# model card for the exact prompt format)
def classify(text: str) -> str:
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=16)
    new_tokens = output_ids[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

output = classify("I want to end it")
# → "suicide|high|self"
```

Designed for compliance
Screen provides the technical components required by emerging AI safety regulations: evidence-based detection, crisis resource referrals, and audit-ready documentation.
Every response includes:
- request_id — unique identifier for audit trails
- rationale — plain-language explanation
- timestamp — ISO 8601 for your records
- resources — matched helplines for referral requirements
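A minimal sketch of turning those fields into a single structured audit log line, using only the stdlib:

```python
import json

def audit_record(response: dict) -> str:
    """Serialize the compliance fields from a Screen response into
    one JSON log line, ready for an append-only audit log."""
    return json.dumps(
        {
            "request_id": response.get("request_id"),
            "timestamp": response.get("timestamp"),
            "rationale": response.get("rationale"),
        },
        sort_keys=True,
    )
```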
Questions
What's the latency?
Typical response times are 200-400ms depending on message length. The underlying model is optimized for real-time use—you can call Screen on every message without noticeable delay to users.
Do you store user messages?
No. NOPE does not store user message content. We retain only billing metadata (timestamp, request ID, cost) for invoicing and rate limiting. Message content is processed in memory and discarded.
What if Screen misclassifies something?
Screen is calibrated to err on the side of sensitivity—it's better to surface resources for a false positive than to miss someone in genuine distress. Our public test suites at suites.nope.net document current accuracy across 54 test categories.
Screen is infrastructure, not clinical judgment. It helps identify when human attention may be needed, but your team makes the final decisions about how to respond.
How is this different from OpenAI Moderation or Azure Content Safety?
Generic content moderation APIs are designed to flag policy violations—hate speech, explicit content, violence in media. They treat "self-harm" as one category among many.
Screen is purpose-built for recognizing people in distress. It detects implicit signals (passive ideation, method-seeking behavior, covert disclosures), returns severity and imminence levels rather than binary flags, and provides matched crisis resources.
We test all providers on the same 1,227 cases across 54 test suites; results are published at suites.nope.net.
Can I use Screen without showing resources to users?
Yes. Screen returns classification data regardless of what you do with it. Some teams use Screen for internal alerting, human review queues, or analytics without surfacing resources directly to users. The show_resources flag is a recommendation, not a requirement.
What is C-SSRS and why does it matter?
The Columbia Suicide Severity Rating Scale (C-SSRS) is a validated clinical framework used worldwide for suicide risk assessment. It distinguishes between passive ideation ("I wish I were dead"), active ideation without plan, ideation with plan, and preparatory behaviors.
Regulations like California SB243 require "evidence-based methods" for crisis detection. C-SSRS-informed detection demonstrates that your approach is grounded in clinical research, not ad-hoc keyword lists.
Does Screen satisfy SB243 compliance requirements?
NOPE is designed to support SB243 and NY's AI Companion Models Law compliance requirements, but we don't guarantee compliance—that's a legal determination that depends on your full implementation.
What we provide: evidence-based detection (C-SSRS-informed), 988 and matched crisis resources, audit-ready rationale on every call, and a request ID for compliance logging. These are the technical components the regulations require.
Simple, predictable pricing
Pay only for what you use. No surprises.
Enterprise volume?
Need custom pricing, on-prem deployment, or dedicated support? Let's talk about your requirements.
Get started with Screen
$1 free credit to start. No credit card required.