Bipartisan bill would force identity checks, empower FTC and AGs, and shift liability upstream to platforms.
Key Points:
- The bipartisan SCAM Act would make social media platforms legally responsible for verifying advertisers before running ads.
- Platforms would be required to take “reasonable steps” to prevent scam and deceptive advertising.
- Failure to comply could trigger FTC and state attorney general enforcement under unfair or deceptive practices laws.
- The bill targets the commercial act of selling ads, not user speech, to avoid direct Section 230 conflicts.
The SCAM Act
A new bipartisan bill — the Safeguarding Consumers from Advertising Misconduct Act (the SCAM Act) — could fundamentally change how social media companies run their ad businesses.
Introduced on Feb. 4, 2026, the bill is premised on a simple but sweeping idea: if you profit from an ad, you’re responsible for making sure it’s not a scam. That means platforms would have to verify who their advertisers are and take reasonable steps to stop fraudulent ad campaigns before they go live.
Think of this as moving from “Know Your Customer” to “Know Your Advertiser.” Platforms may need to adopt bank‑like identity checks — government IDs, business documents, and other proof — before an ad gets approved, bringing compliance expectations squarely into the advertising workflow.
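To make that concrete, here is a minimal sketch of what a “Know Your Advertiser” gate could look like inside an ad‑approval pipeline. It is purely illustrative: the bill prescribes no implementation, and every name below (the Advertiser record, the registry_lookup stub, the status values) is a hypothetical stand‑in for whatever identity‑proofing stack a platform actually runs.

```python
from dataclasses import dataclass
from enum import Enum


class VerificationStatus(Enum):
    PENDING = "pending"
    VERIFIED = "verified"
    REJECTED = "rejected"


@dataclass
class Advertiser:
    advertiser_id: str
    legal_name: str
    government_id_doc: bytes | None = None    # scanned ID; hypothetical field
    business_registration: str | None = None  # registry number; hypothetical field
    status: VerificationStatus = VerificationStatus.PENDING


def registry_lookup(legal_name: str, registration: str) -> bool:
    """Stand-in for a real business-registry or identity-proofing vendor call."""
    return True  # stub: a production system would query an external API here


def verify_advertiser(adv: Advertiser) -> VerificationStatus:
    """Hypothetical KYA check: require identity documents plus a
    business-registry match before any campaign can be approved."""
    if adv.government_id_doc is None or adv.business_registration is None:
        return VerificationStatus.PENDING   # documents missing: cannot approve yet
    if not registry_lookup(adv.legal_name, adv.business_registration):
        return VerificationStatus.REJECTED  # registry mismatch: likely a fake entity
    return VerificationStatus.VERIFIED


def can_launch_campaign(adv: Advertiser) -> bool:
    # The core policy change in one line: no verification, no ad.
    return adv.status is VerificationStatus.VERIFIED
```

The design point is the last function: campaign launch becomes conditional on a recorded verification state, and that record is also what a platform could point to when documenting its “reasonable steps” for a regulator.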
Lawmakers are framing this as a targeted shift in liability toward the business transaction of selling ads rather than the speech those ads contain — a “Section 230 for ads” moment.
The political winds are notable: Senators Ruben Gallego (D‑AZ) and Bernie Moreno (R‑OH) are jointly sponsoring the bill, signaling rare bipartisan energy around consumer protection and banking stability. Advocacy and industry groups like the American Bankers Association and AARP have already lined up in support, underscoring the broad concern over online financial fraud targeting seniors and everyday consumers.
Threading the Section 230 needle
Importantly, the bill does not attempt to treat platforms as publishers of user content. Instead, it targets the commercial act of selling ad space and sets an expectation of due diligence around that transaction.
By anchoring enforcement in the FTC’s unfair or deceptive practices authority — and extending authority to state AGs — the bill gives regulators a clearer hook to police scam‑ad monetization, an area that has historically lived in a gray zone.
What’s driving the shift
One catalyst has been mounting evidence that scam ads are not edge cases — they can be a meaningful revenue line. A widely cited Reuters investigation alleged that Meta may have derived roughly 10% of its 2024 revenue (about $16 billion) from ads tied to scams and illicit products, intensifying pressure on platforms to move beyond voluntary self‑policing.
At the same time, the fraud landscape has evolved: crypto “pig‑butchering” schemes, fake exchanges, and high‑fidelity AI deepfakes are blurring traditional signals that consumers once used to spot fakes. The legislative logic is that if you cut off the ad on‑ramp, you stop the fraudulent transfer before it hits the ledger or the blockchain.
What the bill would require
- Platforms like Meta, YouTube, and TikTok would be legally obligated to investigate their advertisers and take “reasonable steps” to prevent fraudulent or deceptive ads.
- Online services would need to verify advertiser identities before ads go live — potentially using government‑issued IDs or official business records.
- Platforms would have to offer more effective consumer tools for reporting scam content.
- The FTC and state attorneys general could bring civil actions treating non‑compliance as an unfair or deceptive act or practice (UDAP).
These measures collectively shift the compliance burden upstream — to the entity monetizing the initial consumer contact — rather than relying on banks to catch fraud at the last mile.
Who’s affected — and how
- Social media and ad‑tech: Expect a rapid build‑out of advertiser onboarding controls, document collection, and continuous monitoring, potentially aided by bank‑grade identity and risk‑scoring tools. This is a material operational change that touches sales, policy, engineering, privacy, and trust & safety teams.
- Payment processors: While the bill targets platforms, any payment processor tied to the ad revenue stream could face spillover scrutiny or reputational contagion if a platform’s vetting is found deficient.
- Digital asset firms: Because so many scams steer victims into crypto transactions, tightening ad verification could materially reduce fraudulent inflows and fake‑exchange acquisition funnels.
- Financial institutions: Banks and fintechs — which currently absorb substantial fraud costs and reimbursements — are likely to benefit from upstream prevention, which explains the ABA’s strong support.
Practical steps to take now
- Map your advertiser risk posture: If you’re a platform or ad‑tech intermediary, inventory your current advertiser onboarding, verification, and monitoring processes against a bank‑grade standard. Look for gaps in identity proofing, beneficial ownership checks, sanctions screening, and ongoing anomaly detection.
- Pilot “KYA” controls: Treat advertiser onboarding like customer KYC — think government‑issued ID capture, verified corporate records, and real‑time fraud signals before campaigns go live.
- Prepare for UDAP exposure: Build an internal playbook for responding to scam reports and documenting “reasonable steps” taken, anticipating FTC and state AG inquiries.
- Coordinate with payments: If you facilitate or touch settlement for ad revenue, align with platform partners on responsibilities and escalation paths to minimize downstream risk.
- Harden reporting tools: Make it fast and intuitive for users to flag suspected scams, and ensure those reports feed directly into enforcement queues with measurable SLAs.
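On that last point, a reporting pipeline with “measurable SLAs” can be sketched in a few dozen lines. The example below is illustrative only: the class names, the priority‑queue design, and the 24‑hour deadline are assumptions made for the sketch, not anything the bill specifies.

```python
import heapq
import time
from dataclasses import dataclass, field

SLA_SECONDS = 24 * 60 * 60  # assumed 24-hour review target; the bill sets no number


@dataclass(order=True)
class ScamReport:
    deadline: float                       # the heap orders reports by SLA deadline
    ad_id: str = field(compare=False)
    reporter_id: str = field(compare=False)
    reason: str = field(compare=False)


class EnforcementQueue:
    """User scam reports feed a reviewable queue with a measurable SLA,
    rather than an unmonitored inbox."""

    def __init__(self) -> None:
        self._heap: list[ScamReport] = []

    def file_report(self, ad_id: str, reporter_id: str, reason: str) -> None:
        report = ScamReport(time.time() + SLA_SECONDS, ad_id, reporter_id, reason)
        heapq.heappush(self._heap, report)

    def next_report(self) -> ScamReport | None:
        # Reviewers always pull the report closest to breaching its SLA.
        return heapq.heappop(self._heap) if self._heap else None

    def sla_breaches(self) -> int:
        # The compliance metric a regulator could ask for: reports past deadline.
        now = time.time()
        return sum(1 for r in self._heap if r.deadline < now)
```

In practice a queue like this would sit behind the user‑facing report button, and a metric like sla_breaches() is the kind of number a trust & safety team could track internally and produce if an FTC or state AG inquiry asks whether reports are actually actioned.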