You probably don't remember everything you've told your AI assistant. But your AI does. AI tools are starting to remember things across conversations. Your health questions, work context, relationships, preferences, all recalled and connected over time. That creates something new, not a profile in the ad-tech sense, more like a synthetic understanding of how you think and make decisions. None of the existing privacy frameworks really account for this. It's not "collected" data in the traditional sense, it's inferred. The next privacy frontier isn't about who has your data. It's about who understands you.
Cloaked
Technology, Information and Internet
Lowell, Massachusetts · 5,738 followers
Take back control of your data.
About us
Cloaked offers people-first, everyday privacy solutions. We enable people to generate, guard, and reclaim their identity online - all in one place. Create unique, disposable identities that protect your real one. Generate unlimited emails, phone numbers, passwords, and soon, credit cards - for every website, purchase, app, and situationship. Our Data Removal engine finds and helps remove your personal information from data brokers and platforms that expose it. Our Call Guard blocks spam calls, filters unknown numbers, and keeps your real phone private. And with Cloaked Pay, you’ll be able to check out using secure virtual cards without revealing your actual financial details. Cloaked makes privacy effortless. One click to stay connected, protected, and in control.
- Website
- https://www.cloaked.com/linkedin
- Industry
- Technology, Information and Internet
- Company size
- 51-200 employees
- Headquarters
- Lowell, Massachusetts
- Type
- Privately Held
- Founded
- 2020
- Specialties
- Consumer, Security, and Privacy
Locations
-
Primary
1075 Westford St
Lowell, Massachusetts 01851, US
-
New York, New York 10017, US
Updates
-
Your AT&T breach data from 2024 is worth more now than when it was stolen. The original leak was 73 million records. The version circulating right now? 176 million. Still your leaked data, but enriched. A name from one breach gets paired with an SSN from another and a current email from a third. Each merge makes the profiles more complete, and more useful for fraud. That settlement check you got doesn't really close the book on anything. Criminal data markets don't delete records after a lawsuit, they repackage them. Breach data doesn't expire, it compounds.
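To make the "enrichment" idea concrete, here's a minimal sketch with entirely made-up records and field names: partial records from separate dumps, joined on a shared key like an email address, add up to a fuller profile than any single breach contained.

```python
# Minimal sketch with fictional data: partial records from separate breach
# dumps compound into one fuller profile when joined on a shared key.
breach_2021 = {"jane@example.com": {"name": "Jane Doe"}}
breach_2023 = {"jane@example.com": {"ssn_last4": "1234"}}
breach_2024 = {"jane@example.com": {"phone": "555-0100", "city": "Lowell, MA"}}

profile = {}
for dump in (breach_2021, breach_2023, breach_2024):
    for email, fields in dump.items():
        profile.setdefault(email, {"email": email}).update(fields)

# One merged record now carries a name, an SSN fragment, and current contact
# details, which is why the same stolen data keeps gaining value over time.
print(profile["jane@example.com"])
```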
-
For a long time, U.S. privacy rights existed mostly on paper. Recent regulator data shows a clear shift:
- In California alone, more than 8,000 privacy complaints were logged by late 2025
- 51% involved deletion requests
- 39% involved requests to limit the use of sensitive personal information
This matters because regulators aren’t just counting requests, they’re examining how companies respond:
- Were timelines met?
- Was the response complete?
- Is there documentation showing what was done and why?
In 2026, poor request handling isn’t invisible anymore; it’s becoming one of the fastest ways to trigger regulatory scrutiny. Privacy programs now interact directly with the public, and every interaction leaves a record.
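As a rough illustration of the "were timelines met?" question, here's a minimal sketch assuming a simple, hypothetical internal request log; the 45-day window reflects the standard CCPA/CPRA response deadline.

```python
from datetime import date

# Hypothetical log entries: (request type, received, responded, documentation notes)
requests = [
    ("deletion",  date(2025, 9, 1),  date(2025, 10, 3), "confirmed erasure with vendors"),
    ("limit-use", date(2025, 9, 15), None,              ""),          # never answered
    ("deletion",  date(2025, 10, 2), date(2025, 12, 1), "backlog"),   # answered late
]

DEADLINE_DAYS = 45  # standard CCPA/CPRA response window (extendable once with notice)

for kind, received, responded, notes in requests:
    if responded is None:
        flag = "no response on record"
    elif (responded - received).days > DEADLINE_DAYS:
        flag = f"late by {(responded - received).days - DEADLINE_DAYS} days"
    elif not notes:
        flag = "met the deadline, but no documentation of what was done"
    else:
        flag = "ok"
    print(f"{kind} request received {received}: {flag}")
```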
-
AI agents are officially graduating from “cool demo” to “actual coworker.” In 2026, AI agents are touching real systems: CRM data, internal tools, financial workflows, production environments. That’s where the value is (and where the risk shows up).
Most security issues here come from very ordinary decisions that felt fine at the time:
- An agent that shipped with more permissions than it needed
- A workflow no one quite “owned” after launch
- A system that worked great in testing and quietly went live
So before an AI agent moves from experiment to production, do a quick sanity check:
- Does it have the minimum access required, or just whatever was convenient?
- Can someone see what it’s touching and when?
- Is there a clear owner responsible for it after deployment?
- Can it be paused or shut off instantly if something looks wrong?
This isn’t about distrusting AI, it’s about treating agents like any other powerful system that operates at machine speed. The teams that do best here won’t be the ones with the fanciest agents, they’ll be the ones who treated them like real infrastructure from day one.
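One way to turn that sanity check into something enforceable is a small deployment gate. A minimal sketch, assuming a hypothetical AgentPolicy record; the field and scope names are illustrative, not any particular framework's API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    name: str
    scopes: list[str] = field(default_factory=list)            # access actually granted
    required_scopes: list[str] = field(default_factory=list)   # access the workflow needs
    owner: str | None = None          # human accountable after launch
    audit_log_enabled: bool = False   # can someone see what it touches, and when?
    kill_switch: bool = False         # can it be paused instantly?

def ready_for_production(policy: AgentPolicy) -> list[str]:
    """Return a list of blockers; an empty list means the agent can ship."""
    blockers = []
    extra = set(policy.scopes) - set(policy.required_scopes)
    if extra:
        blockers.append(f"over-privileged: remove {sorted(extra)}")
    if not policy.owner:
        blockers.append("no named owner")
    if not policy.audit_log_enabled:
        blockers.append("no audit trail")
    if not policy.kill_switch:
        blockers.append("no way to pause or shut it off")
    return blockers

# Example: an agent that worked great in testing but kept convenient extras.
agent = AgentPolicy(
    name="crm-summarizer",
    scopes=["crm:read", "crm:write", "billing:read"],
    required_scopes=["crm:read"],
    owner=None,
)
print(ready_for_production(agent))
```

Each blocker maps directly to one of the questions above, so the checklist lives in code rather than in someone's memory.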
-
For years, many companies treated consent as a visual exercise. Put up a banner, add an opt-out form, and move on. That approach is breaking down. Recent enforcement shows regulators are now testing technical truth, not surface intent:
- Does opting out actually stop third-party trackers?
- Are browser-level signals like Global Privacy Control honored?
- Do opt-out forms change anything under the hood?
A late-2025 California enforcement action fined a major retailer after regulators found that its opt-out form appeared to work but didn’t actually prevent data sharing via ad trackers. In 2026, consent isn’t about disclosure, it’s about whether your systems behave the way your interface promises. If “no” doesn’t technically mean no, regulators consider that misleading, even if the banner looks compliant.
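The "technical truth" test can be checked in code. A minimal server-side sketch, assuming a plain dict of request headers and a stored opt-out flag: Sec-GPC is the actual Global Privacy Control request header, while the tracker list and function names here are illustrative.

```python
# Illustrative third-party tags that would only load if sharing is allowed.
THIRD_PARTY_TRACKERS = ["adnetwork.example/pixel.js", "analytics.example/tag.js"]

def allow_third_party_sharing(headers: dict[str, str], user_opted_out: bool) -> bool:
    """Honor both the site's opt-out form and the browser-level GPC signal."""
    gpc = headers.get("Sec-GPC") == "1"  # Global Privacy Control request header
    return not (user_opted_out or gpc)

def trackers_to_render(headers: dict[str, str], user_opted_out: bool) -> list[str]:
    # If "no" doesn't actually change this list, the banner is only decoration.
    if allow_third_party_sharing(headers, user_opted_out):
        return THIRD_PARTY_TRACKERS
    return []

# A user who submitted the opt-out form and browses with GPC enabled:
print(trackers_to_render({"Sec-GPC": "1"}, user_opted_out=True))  # -> []
```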
-
If it feels like breach headlines don’t hit the way they used to, you’re not imagining it. It’s not because breaches are smaller or less serious. It’s because they’re happening so often that none of them get to be the story for very long. Attention moves on, timelines refresh, and the internet collectively shrugs. That’s where teams get into trouble. Because while the public gets tired, regulators absolutely do not. If anything, the focus has sharpened. In 2026, the question for companies isn’t “Did something happen?” It’s “What did you do next, how fast did you move, and can you prove it?” How quickly was the incident investigated? Who made the call on disclosure? What actually changed afterward, beyond a blog post and a promise? Companies with rehearsed response playbooks, clear ownership, and documented decisions tend to look calm even when things go wrong. The ones improvising tend to discover that silence in the press does not translate to patience from regulators. Breaches may feel quieter now, but the consequences aren’t.
-
Some genuinely good (and practical) privacy news out of California. Starting in 2026, Californians will be able to delete their personal data from hundreds of data brokers with a single request. No more hunting down individual opt-out forms. This comes from the Delete Act, and it’s being put into action through a new system called DROP. Once it’s live, data brokers registered in California will be required to:
- Accept deletion requests through one central platform
- Stop selling your personal information when you ask
- Disclose what data they collect and share
- Undergo audits to prove they’re actually complying
This matters because data brokers quietly sit behind a lot of spam, scams, and unwanted targeting. Making deletion easier (and enforceable) meaningfully shifts the balance back toward consumers. It’s also a first. California is the only place offering this kind of centralized control today, and it sets a precedent other states are likely to follow. This doesn’t eliminate the need for ongoing privacy hygiene, but it’s a real step toward making data deletion something people can actually use, not just read about.
-
We’re wrapping up our deep dive of Call Guard in the Life at Cloaked series with a closer look at how the feature works in real life. In “What Cloaked Call Guard Does,” Arjun Bhatnagar and Kyler Ross walk through how Call Guard screens and analyzes incoming calls, how AI-powered conversational screening adds helpful context, and how users decide what gets through. The conversation also highlights real user experiences and what changes when unwanted calls stop being a constant distraction. This episode brings our Call Guard deep dive to a close, with more Life at Cloaked episodes ahead as we explore other parts of the product and how they come together. What feature should we dive into next? If you missed earlier episodes, start here: https://lnkd.in/eW7dyy-K #LifeAtCloaked #Privacy
-
AI phishing has quietly outgrown templates, and scammers don’t need to reuse the same wording anymore. They can generate millions of slightly different messages in seconds, which is why “that looks familiar” isn’t a great defense these days. What does stay consistent is who the message is meant for. For example, an email address you only use for shopping suddenly gets a message about “IT support” or “urgent verification.” That’s an address that might have leaked somewhere. This is where aliases start pulling real weight. Instead of asking, “Is this a scam?” you ask a much simpler question: “Does this belong here?”
A few habits make that easy:
- Use one alias per app or vendor
- Have a loose sense of what’s normal for each one (who emails you, and why)
- Treat unexpected messages as early warnings, not just noise
- When something looks off, pause or replace that alias
- Review the accounts it touched and rotate passwords and 2FA
The upside is speed and calm: spot leaks early, contain the problem quickly, and move on without touching the rest of your digital life.
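Here's a minimal sketch of the "does this belong here?" check, using made-up aliases and sender domains; a real setup would key off whatever aliases you actually use.

```python
# Made-up aliases mapped to the senders that are "normal" for each one.
EXPECTED_SENDERS = {
    "shopping.x7@myaliases.example": {"orders.shop.example", "shipping.example"},
    "bank.q2@myaliases.example":     {"alerts.bank.example"},
}

def belongs_here(alias: str, sender_domain: str) -> str:
    expected = EXPECTED_SENDERS.get(alias)
    if expected is None:
        return "unknown alias: possibly retired or never issued"
    if sender_domain in expected:
        return "looks normal"
    return "unexpected sender: treat as an early warning and consider rotating this alias"

# An "IT support" email arriving at a shopping-only alias:
print(belongs_here("shopping.x7@myaliases.example", "it-helpdesk.example"))
```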
-
Life at Cloaked - Episode 6: Building Cloaked Call Guard
We’re continuing to spend time with Call Guard in the Life at Cloaked series, and this episode is a more reflective look at how the feature came together. In “Building Cloaked Call Guard,” Arjun Bhatnagar, Kyler Ross, and Vincent Toms talk about the journey from early ideas to something people now use in their daily lives. They reflect on what surprised them, how real feedback shaped the product, and how AI became a thoughtful part of Call Guard rather than the headline. The focus was always on reducing friction, adding context, and giving people back a sense of calm and control. Building privacy tools means slowing down, listening closely, and being willing to change course. This episode captures a bit of that process. If you missed earlier episodes, start here: https://lnkd.in/eW7dyy-K #LifeAtCloaked #Privacy