29 Dec 2020
some ways that Facebook ads are optimized for deceptive advertising
(This post is frequently updated to include more links and examples.)
Why are there so many scam ads on Facebook? The over-simplified answer is that Facebook just doesn’t have enough ad reviewers for the number of ads they get. Basically anyone with a credit card can advertise, advertisers have access to tools for making huge numbers of ad variations, and Meta runs an aggressive program of layoffs, so of course lots of scam ads are going to get through.
Meta Battles an ‘Epidemic of Scams’ as Criminals Flood Instagram and Facebook
Meta is earning a fortune on a deluge of fraudulent ads, documents show
Facebook is also more attractive to scammers than other ad media. Deceptive advertisers already get more value from highly targetable ad media than honest advertisers do, because targeting gives the deceptive advertiser an additional benefit. Besides helping to reach possible buyers, a deceptive advertiser can also use targeting to avoid enforcers of laws and norms.
Understaffing and targeting are only parts of the story, though. It’s not so much that Facebook is uninterested in dealing with scams; it’s as if their ad system in general were the result of a cooperative software development project with the scammers. (Do Facebook and their scam advertisers constitute an “enterprise” for purposes of RICO? I don’t know, but it might be worth asking your lawyer if you got scammed or impersonated.) Some of the deliberate design decisions that went into Facebook ads are making things easier for deceptive advertisers at the expense of users and legit advertisers.
Custom Audiences don’t support list seeding. Before Facebook, every direct marketing medium supported “seed” records, which look like ordinary records but get delivered back to the list owner or someone they know, so that they can monitor usage of the list. (I used them for a biotech company’s postal and email lists, even though we never sold or shared the list. Just to be on the safe side.) Using seed records is a basic direct marketing best practice and deters people who might see your list from misusing it.
The real question appears to be that a lookalike audience can be built by a respected and credible advertiser, but there is no way for them to seed their audience to know if and when it is being used by someone else. The argument here is that legitimate advertisers would pay to seed their lookalike audience list, but Facebook does not provide that service (or allow anyone else to provide that service). — Bernard Smith
Facebook Custom Audiences are a way for scammers to use a stolen list without detection. Facebook Ad Settings lets a user see if they personally are in someone else’s Custom Audience, but there’s no way for a list owner to check if the seed records from their list ended up on one. Someone who steals a mailing list can sneak it into a new Custom Audience without getting caught by the list owner. Legit direct marketers who want to protect their lists would pay for the ability to use seed accounts on Facebook, but this functionality would interfere with Facebook’s support for scam advertisers, so they don’t offer it, or even allow anyone else to provide seed accounts. (A limited number of Test Users are allowed for app development, but these are not usable as seeds. Facebook uses the term “seeds” differently from the conventional meaning, to mean the starting names for a Lookalike Audience.)
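To make the seeding idea concrete, here’s a minimal sketch of how a list owner might do it in a conventional mailing-list setup. All the names and addresses here are made up for illustration; real direct marketing operations would use purpose-built tooling, but the logic is this simple:

```python
import secrets

def make_seed_records(n, domain="example.com"):
    """Generate decoy subscriber records that only the list owner knows
    about. Any mail arriving at these addresses reveals the list was used."""
    return [
        {"name": f"Seed {i}", "email": f"seed-{secrets.token_hex(4)}@{domain}"}
        for i in range(n)
    ]

def seeded(list_records, seeds):
    """Mix the seeds into the real list so a thief can't spot them."""
    combined = list_records + seeds
    combined.sort(key=lambda r: r["email"])
    return combined

def mail_is_suspicious(recipient, seeds, authorized_senders, sender):
    """If a seed address gets mail from anyone not authorized to use
    the list, the list has leaked."""
    seed_emails = {s["email"] for s in seeds}
    return recipient in seed_emails and sender not in authorized_senders
```

The point of the Custom Audiences complaint above is that step three is impossible on Facebook: the seed accounts would have to be real Facebook users that the list owner controls and can observe ads for, and Facebook doesn’t allow that.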
Users can be blocked from seeing the company that really
controls the targeting lists that they’re on. Suppose that a
dishonest advertiser wants to use a California resident’s PII, but they
don’t want to have to honor CCPA opt outs or register with the state.
Facebook promises transparency
and allows users to see who has
uploaded their info. But the dishonest advertiser can simply send the
hashed versions of the PII on their list to an intermediary firm, and
have that firm transfer the hashed PII to Facebook. Now when someone who
is on the list goes to “Advertisers using your activity or information”
on Facebook, they see the name of the intermediary firm instead. Even if
a bunch of people on the list do opt out, the deceptive advertiser’s own
copy of the list is intact. When they switch to a different intermediary
firm later, there are no opt-outs associated with the list. This also
seems to be a good way for extremely suspicious-looking advertisers to
hide from people who might report or investigate them. If I check
Facebook for exclusion lists used by scammers who think I might report
them, I see only the name of a generic-sounding targeted ad company, not
the actual dishonest Facebook page.
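The intermediary trick works because of how customer-list matching is built. Meta’s documentation calls for identifiers like email addresses to be normalized (trimmed, lowercased) and SHA-256 hashed before upload, so the same record produces the same hash no matter who uploads it. A sketch (the helper names are mine, not Meta’s API):

```python
import hashlib

def normalize_email(email):
    # Customer-list matching expects trimmed, lowercased values
    # before hashing, per Meta's Custom Audience documentation.
    return email.strip().lower()

def hash_pii(email):
    return hashlib.sha256(normalize_email(email).encode("utf-8")).hexdigest()

# The list owner hashes the PII once...
owner_copy = hash_pii(" Jane.Doe@example.com ")
# ...and an intermediary firm uploading the same record produces an
# identical hash, so the "Advertisers using your activity or
# information" page can only name whoever actually did the upload.
intermediary_copy = hash_pii("jane.doe@example.com")
assert owner_copy == intermediary_copy
```

Nothing in the hash ties the record back to the advertiser who really controls the list, which is exactly the property a CCPA-dodging advertiser wants.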
Ad Library helps hide deceptive ads at times when risk of discovery is high. Facebook’s Ad Library is designed to show only “active” ads, those that are running this very minute. A deceptive advertiser using a trademark or a person’s likeness without permission can simply turn their ad on and off based on when the victim is likely to be checking the Ad Library. For example, a seller of infringing knock-offs of a European brand can run the ads when European marketers, lawyers, and regulators are asleep but people in the Americas or Asia are awake and shopping. Ad Library makes it easier for scammers to copy honest advertisers than the other way around.
From the Silent Push report, GhostVendors Exposed: Silent Push Uncovers Massive Network of 4000+ Fraudulent Domains Masquerading as Major Brands:
Our team also confirmed how a Facebook advertiser can buy ads which show up in the Meta Ad Library while they are running, and then stop their campaigns, thereby removing all evidence of their posted ads from the Meta Ad Library. In early May 2025, we documented the appearance of ads from this threat actor group that were searchable in the Ad Library, five days later, all evidence of their presence was removed from the Ad Library due to the ad campaigns stopping.
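Because Ad Library only shows running ads, the on/off game the Silent Push report describes takes nothing more than a timer. A hypothetical scammer-side scheduler (this is purely illustrative, not any real ads API) just has to know the impersonated brand’s time zone:

```python
from datetime import datetime, timedelta, timezone

def ad_should_run(now_utc, victim_utc_offset_hours, quiet_start=9, quiet_end=18):
    """Hypothetical scheduler for a scam campaign: pause the ads during
    business hours in the impersonated brand's home time zone, when its
    marketers, lawyers, and regulators are most likely to be checking
    the Ad Library, and run them the rest of the day."""
    local = now_utc + timedelta(hours=victim_utc_offset_hours)
    return not (quiet_start <= local.hour < quiet_end)
```

For a European brand (UTC+1), the knock-off ads pause during the European working day and run overnight, which is prime shopping time in the Americas and Asia.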
The Social Media Lab at Toronto Metropolitan University covers The Hidden Game: How Scammers Use “Chameleon Ads” to Bypass Meta’s Moderation.
After setting up their Facebook pages and advertiser accounts, scammers initially upload harmless-looking ads for approval. Once the ad is approved, they quietly swap out the content, replacing images, text, or links with something entirely different. For instance, an ad that initially promotes running shoes might be altered to feature a fake endorsement from a prominent Canadian politician, linking to a cryptocurrency scam. By making these changes after the ad is approved, scammers evade detection, at least temporarily. To further avoid scrutiny, they may pause the campaign after a short period and revert the advertisement to its original, innocuous version.
Ad Library delays posting of scam ads. If you see a bunch of similar scam ads popping up, like this…
…but then you go to their Ad Library and get “This advertiser isn’t
running ads in any country at this time,”
read the fine print.
An ad will appear in the ad library within 24 hours from the time it gets its first impression. Any changes or updates made to an ad will also be reflected in the ad library within 24 hours.
Facebook deliberately gives their scam advertisers almost a full day to take a whack at you before revealing their ads in Ad Library (and, of course, if the ad comes down fast enough, it never shows up there.)
Independent crawling of ads is blocked by policy. On the open web, online ads can be crawled and logged by independent companies. This service is needed in order to check for malvertising and other problem ads. Inside the Facebook environment, however, independent checking on ads is prohibited. Facebook puts the goal of hiding problem ads ahead of facilitating the kinds of services that could help fix the situation.
Image search crawlers are blocked from ads. Many scammers make infringing copies of material from legit ads without permission. Pirated product photos are especially common. The photos in those scam ads above appear to have been taken from a legit retailer. If legit advertisers had the ability to search for ads similar to theirs, or for edited copies of their own photos, they would be able to find a lot. But, for example, TinEye is blocked from Ad Library, to make life easier for Facebook’s deceptive advertisers at the expense of legit ones. Wells Fargo has to ask customers to report fake Wells Fargo because Facebook cooperates with scammers pretending to be Wells Fargo, to hide fraudulent uses of Wells Fargo’s trademarks.
Categories of scams to look for
The reason that Facebook has to try to shut down research programs like NYU’s is that a project with the budget and skills of a small university team could pick up on a bunch of obvious scams with some tools based on existing open-source image matching software.
Some examples:
photos of public figures who do not endorse a particular category (such as personal finance experts on cryptocurrency ads)
well-known company logos (needs manual check, sometimes the advertiser is a dealer using the logo with permission)
rental housing scams—look for the same house or apartment photo showing up in ads from multiple landlords
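The image-matching part really is off-the-shelf technology. A toy perceptual hash over an 8×8 grayscale grid shows the idea (real tools like pHash or TinEye’s matching are far more robust, but the principle is the same: edited copies of a photo still hash close to the original):

```python
def average_hash(pixels):
    """Tiny perceptual hash: for an 8x8 grid of grayscale values,
    each bit records whether that pixel is above the image mean.
    Copies with global edits (brightness, contrast) hash identically
    or nearly so; unrelated images land far apart in Hamming distance."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))
```

A stolen product photo that’s been brightened or lightly cropped for a scam ad would still match the original within a few bits, which is why blocking crawlers like TinEye from Ad Library matters so much.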
But why?
The total revenue impact of Meta’s scam-friendly design is much greater than just the 10% of revenue that comes from the actual scam ads. Scam ads at Meta compete in a complex internal auction for the chance to appear for any given user. When a scam ad is bidding in all those auctions, it drives up the price of advertising for non-scam advertisers, too.
So the impact of a hypothetical future shift to less scam-friendly policies at Meta would be greater than it looks.
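The price-pressure effect is easiest to see in a simplified second-price auction. Meta’s real auction also weighs predicted engagement and ad quality, but the mechanism below (with made-up advertiser names and bids) captures why a losing scam bid still costs honest advertisers money:

```python
def second_price(bids):
    """Simplified second-price auction: the highest bidder wins the
    impression but pays the runner-up's bid."""
    ordered = sorted(bids, key=lambda b: b["bid"], reverse=True)
    winner, runner_up = ordered[0], ordered[1]
    return winner["name"], runner_up["bid"]

legit = [{"name": "RetailerA", "bid": 2.00}, {"name": "RetailerB", "bid": 1.20}]
scam = {"name": "ScamCo", "bid": 1.80}

# Without the scam bidder, RetailerA wins and pays RetailerB's 1.20...
assert second_price(legit) == ("RetailerA", 1.20)
# ...with it, RetailerA still wins but now pays 1.80: the scam bid
# raised the clearing price without ever winning the impression.
assert second_price(legit + [scam]) == ("RetailerA", 1.80)
```

Multiply that effect across every auction a scam campaign bids into, and the revenue tied to scam-friendly design is much larger than the scam ads’ own spend.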
Deception avoidance and value exchange?
Or what if the deceptive ads are a necessary part of the system?
A common, conventional point of view about surveillance marketing is
that people choose to trade information about themselves for
better-targeted ads. But this is oversimplified even if you don’t get
into the details of whether or not people give actual consent to the
exchange. Realistically, there aren’t enough well-targeted ads trying to
reach you at any one time for better targeting to make the ads you see
perceptibly better, even for a high-status user. If the Facebook ad
system is run at capacity, then as a user you’re generally going to be
getting mostly ads that are not perceptibly well-matched, but still
revenue positive for the company.
Allowing a certain percentage of deceptive ads changes the balance. With enough deceptive ads in the system, it becomes a better move for a high-status user to reveal more information. Revealing information might be able to get you enough additional legit ads that the level of risk and annoyance you experience moves down noticeably.
So even in an idealized consent-based future technical and regulatory
environment, where users can’t be easily deceived into giving up more
information than they prefer to—some rational high-status users might
choose to trade away some personal information in order to attract more
legit ads and fewer scams. Facebook doesn’t have to do anything drastic
like offering reduced ad load in exchange for allowing better-matched
ads, they can just let you buy
your way out of some scams with
data.
Salomé Viljoen writes, in A Relational Theory of Data Governance,
[P]eople have a collective interest against the unjust social processes data flows may materialize, against being drafted into the project of one another’s oppression as a condition of digital life, and against being put into data relations that constitute instances of domination and oppression for themselves or others on the basis of group membership.
This ad system might be a good example of that kind of project. A Facebook user who chooses to avoid scams by providing data on their membership in a high-status group is diverting the scams that they would have gotten onto other people, both members of low-status groups and members of high-status groups who share less data.
The question of scam load is related to, but distinct from, the competition questions around total ad load. In a hypothetical competitive market for social networking services, companies could compete on ad load, but with network effects and winner-take-all market dynamics, a monopoly network can run at a higher ad load than, say, a single ad-supported service that participated in a federated system of intercommunicating social sites.
Bonus links
Oracle’s Hidden Hand Is Behind the Google Antitrust Lawsuits
Anti-Facebook agitators see their moment under Biden
Nice Try, Facebook. iOS Changes Aren’t Bad For Small Businesses, by Dipayan Ghosh, Wired
A Sneak Peek at the Apple Feature That Keeps Facebook Up at Night
Facebook Managers Trash Their Own Ad Targeting in Unsealed Remarks
Google, Facebook Agreed to Team Up Against Possible Antitrust Action