
Screen

Real-time detection of crisis signals in user messages. Screen analyzes text for suicide, self-harm, violence, abuse, and other risk indicators—returning severity, who's affected, and matched crisis resources.

$0.001 per call

Start with $1 free balance. No credit card required.

Try it

Enter a message to see how Screen responds. No signup required.

Try:

Demo is rate-limited. Get an API key for production use.

What Screen detects

Nine risk types, each with severity and imminence levels. Screen identifies who is affected—the speaker, someone they're describing, or a dependent in their care.

suicide

Ideation, planning, method-seeking, farewell behaviors

"I've been thinking about how to end it"
self_harm

Non-suicidal self-injury, cutting, burning

"I've been cutting again"
self_neglect

Disordered eating, medication non-adherence, basic needs

"I haven't eaten in 4 days, I don't deserve food"
violence

Threats or intent to harm others

"I know where he lives and I'm going to make him pay"
abuse

Intimate partner violence, coercive control

"He checks my phone every night and I'm not allowed to see my friends"
sexual_violence

Assault, harassment, coercion

"Something happened at the party and I don't know who to tell"
neglect

Child or elder neglect, failure to provide care

"My mom hasn't given me food in two days"
exploitation

Trafficking, sextortion, grooming, financial exploitation

"He's threatening to leak my nudes unless I pay him"
stalking

Unwanted pursuit, monitoring, harassment

"My ex keeps showing up at my work"

What Screen filters out

Hyperbole ("this is killing me"), idioms ("I'm dead"), gaming slang ("kms lol"), academic discussions, fiction writing, and professional clinical contexts. Screen is calibrated to minimize false positives while maintaining sensitivity to genuine distress signals.

How it works

1

Send a message

POST to /v1/screen with a message or conversation history. Include the user's country code for localized resources.

2

Screen analyzes

A fine-tuned classifier evaluates the text across all risk types, determining severity (mild to critical), imminence (chronic to emergency), and subject (self, other, or dependent).

3

Get structured results

Response includes detected risks, a plain-language rationale, matched crisis resources for the user's location, and a unique request ID for your records.
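The three steps above reduce to a single authenticated POST. A minimal sketch using Python's standard library (the endpoint, body fields, and auth header are from this page; YOUR_API_KEY is a placeholder, and the request is built but not sent):

```python
import json
import urllib.request

# Hedged sketch: endpoint and fields per the docs; YOUR_API_KEY is a placeholder.
def build_screen_request(text, country="US", api_key="YOUR_API_KEY"):
    return urllib.request.Request(
        "https://api.nope.net/v1/screen",
        data=json.dumps({"text": text, "country": country}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_screen_request("I have been feeling really hopeless lately")
# To send it: json.load(urllib.request.urlopen(req)) (requires a valid key)
```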

What you get back

Every response includes structured data you can act on immediately, plus audit fields for compliance documentation.

risks[]

Array of detected risk types, each with type, severity, imminence, and subject.

show_resources

Boolean. True when any risk is detected affecting the speaker. Use this to trigger your crisis response UI.

resources

Crisis helplines matched to the detected risk type and user's country. Domestic violence disclosures get DV hotlines, not generic crisis numbers.

rationale

Plain-language explanation of why this classification was made. Ready for compliance logs without additional processing.

request_id

Unique identifier for this request. Store this for audit trails and compliance reporting.
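The fields above can be consumed directly in your handler. A sketch (field names follow this page; the sample values and the "elevated"/"critical" severity names used for escalation are illustrative assumptions, not guaranteed API output):

```python
# Sketch of consuming a Screen response; sample values are illustrative.
def handle_screen_response(resp):
    return {
        "show_crisis_ui": resp.get("show_resources", False),
        "helplines": resp.get("resources", []),
        # Escalate to a human review queue on elevated/critical severity
        "needs_review": any(
            r["severity"] in ("elevated", "critical")
            for r in resp.get("risks", [])
        ),
        # Audit fields for compliance logs
        "audit": {"request_id": resp["request_id"],
                  "rationale": resp["rationale"]},
    }

sample = {
    "risks": [{"type": "suicide", "severity": "critical",
               "imminence": "emergency", "subject": "self"}],
    "show_resources": True,
    "resources": [{"name": "988 Suicide & Crisis Lifeline"}],
    "rationale": "Active suicidal ideation expressed by the speaker.",
    "request_id": "req_example_123",
}
action = handle_screen_response(sample)
```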

curl -X POST https://api.nope.net/v1/screen \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "I have been feeling really hopeless lately",
    "country": "US"
  }'
Optional

AI-generated supportive reply

Add include_recommended_reply: true to get a suggested response you can show to the user. Adds $0.0005 per call.

Screen vs Evaluate

Screen is for real-time triage on every message. Evaluate is for deeper assessment when you need the full clinical picture.

Comparison of Screen and Evaluate API endpoints

| Feature            | Screen                      | Evaluate                    |
|--------------------|-----------------------------|-----------------------------|
| Cost               | $0.001                      | $0.05                       |
| Use case           | Every message, real-time    | Escalation, case review     |
| Output             | Type + severity + imminence | Full clinical profile       |
| Clinical features  | Not included                | 180+ (C-SSRS, HCR-20, DASH) |
| Protective factors | Not included                | 36 factors                  |
| Crisis resources   | Included                    | Included                    |

A typical pattern: Screen every message, then call Evaluate when Screen returns elevated or critical severity for detailed documentation.
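That pattern can be sketched as follows. `screen()` and `evaluate()` are hypothetical stand-ins for the two API calls; only the documented `risks[].severity` field is used, and the severity names are taken from the description above:

```python
# Sketch of the screen-then-evaluate escalation pattern.
def triage(message, screen, evaluate):
    result = screen(message)  # $0.001, run on every message
    if any(r["severity"] in ("elevated", "critical")
           for r in result.get("risks", [])):
        # Only pay for the full clinical profile when Screen flags the message
        result["evaluation"] = evaluate(message)  # $0.05, escalation only
    return result

# Stubbed demo, no network calls:
out = triage(
    "example message",
    screen=lambda m: {"risks": [{"type": "suicide", "severity": "critical"}]},
    evaluate=lambda m: {"profile": "full clinical assessment"},
)
```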

Run classification on your infrastructure

For teams with data residency requirements or air-gapped environments. This is the raw classifier only—you handle resource matching, audit logging, and response formatting.

  • No data leaves your environment
  • ~50ms latency with vLLM or SGLang
  • 4GB VRAM minimum (T4, L4, A10G)
  • No crisis resources, rationale, or request IDs
Request Edge access
# Load model and tokenizer
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "nopenet/nope-edge-pilot"
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Classify: the fine-tuned model generates a "type|severity|subject" label
def classify(text):
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=16)
    new_tokens = out[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()

output = classify("I want to end it")
# → "suicide|high|self"
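The edge classifier emits a pipe-delimited label like the `"suicide|high|self"` example above. A tiny parser for routing the result downstream (the dict field names here are my own, chosen to mirror the hosted API's `risks[]` entries):

```python
# Parse the edge classifier's "type|severity|subject" label into a dict.
def parse_label(label):
    risk_type, severity, subject = label.split("|")
    return {"type": risk_type, "severity": severity, "subject": subject}

parsed = parse_label("suicide|high|self")
# → {'type': 'suicide', 'severity': 'high', 'subject': 'self'}
```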

Designed for compliance

Screen provides the technical components required by emerging AI safety regulations: evidence-based detection, crisis resource referrals, and audit-ready documentation.

Every response includes:

  • request_id — unique identifier for audit trails
  • rationale — plain-language explanation
  • timestamp — ISO 8601 for your records
  • resources — matched helplines for referral requirements
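Those four fields map straightforwardly onto an append-only audit log. A sketch writing JSON-lines records (the field names are from the list above; the record shape and sample values are assumptions for illustration):

```python
import io
import json

# Persist the compliance fields as one JSON-lines record per request.
def append_audit_record(resp, fp):
    record = {k: resp.get(k) for k in
              ("request_id", "rationale", "timestamp", "resources")}
    fp.write(json.dumps(record) + "\n")

log = io.StringIO()  # stand-in for an append-mode file
append_audit_record(
    {"request_id": "req_example_123",
     "rationale": "Active suicidal ideation expressed by the speaker.",
     "timestamp": "2025-01-01T00:00:00Z",
     "resources": []},
    log,
)
```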

Questions

What's the latency?

Typical response times are 200-400ms depending on message length. The underlying model is optimized for real-time use—you can call Screen on every message without noticeable delay to users.

Do you store user messages?

No. NOPE does not store user message content. We retain only billing metadata (timestamp, request ID, cost) for invoicing and rate limiting. Message content is processed in memory and discarded.

What if Screen misclassifies something?

Screen is calibrated to err on the side of sensitivity—it's better to surface resources for a false positive than to miss someone in genuine distress. Our public test suites at suites.nope.net document current accuracy across 54 test categories.

Screen is infrastructure, not clinical judgment. It helps identify when human attention may be needed, but your team makes the final decisions about how to respond.

How is this different from OpenAI Moderation or Azure Content Safety?

Generic content moderation APIs are designed to flag policy violations—hate speech, explicit content, violence in media. They treat "self-harm" as one category among many.

Screen is purpose-built for recognizing people in distress. It detects implicit signals (passive ideation, method-seeking behavior, covert disclosures), returns severity and imminence levels rather than binary flags, and provides matched crisis resources.

We test all providers on the same 1,227 cases across 54 test suites. Results:

NOPE Screen: 98% (16 missed)
Azure: 75% (238 missed)
OpenAI: 44% (539 missed)
LlamaGuard: 26% (712 missed)

Full methodology and results →

Can I use Screen without showing resources to users?

Yes. Screen returns classification data regardless of what you do with it. Some teams use Screen for internal alerting, human review queues, or analytics without surfacing resources directly to users. The show_resources flag is a recommendation, not a requirement.

What is C-SSRS and why does it matter?

The Columbia Suicide Severity Rating Scale (C-SSRS) is a validated clinical framework used worldwide for suicide risk assessment. It distinguishes between passive ideation ("I wish I were dead"), active ideation without plan, ideation with plan, and preparatory behaviors.

Regulations like California SB243 require "evidence-based methods" for crisis detection. C-SSRS-informed detection demonstrates that your approach is grounded in clinical research, not ad-hoc keyword lists.

Does Screen satisfy SB243 compliance requirements?

NOPE is designed to support SB243 and NY's AI Companion Models Law compliance requirements, but we don't guarantee compliance—that's a legal determination that depends on your full implementation.

What we provide: evidence-based detection (C-SSRS-informed), 988 and matched crisis resources, audit-ready rationale on every call, and a request ID for compliance logging. These are the technical components the regulations require.

Simple, predictable pricing

Pay only for what you use. No surprises.

Enterprise volume?

Need custom pricing, on-prem deployment, or dedicated support? Let's talk about your requirements.

Talk to Sales

Get started with Screen

$1 free credit to start. No credit card required.