OAuth abuse has quietly become the phishing technique that slips past your MFA, your “security‑aware” users, and your cloud email filters. Recent campaigns abusing OAuth redirects and malicious apps in Microsoft Entra ID and Google Workspace show that “Log in with X” is now one of the easiest ways into your SaaS estate.
Phishing Without Passwords
In early March, Microsoft detailed phishing campaigns that used legitimate OAuth redirect URLs from Entra ID and Google Workspace to send victims to attacker‑controlled sites. Instead of stealing credentials, attackers crafted OAuth URLs with parameters like intentionally invalid scopes that forced an error and triggered a redirect from a trusted Microsoft or Google domain to their own infrastructure.
In some cases, the redirect path automatically downloaded a ZIP archive containing a malicious LNK file or HTML smuggling loader, which then executed PowerShell and side‑loaded a rogue DLL to gain full endpoint control. In others, the redirect landed victims on adversary‑in‑the‑middle phishing kits like EvilProxy that intercepted credentials and session cookies. This is phishing that looks and feels like a normal IdP flow: real microsoft.com URLs, branded sign‑in pages, and no obvious “sketchy” domain to hover over.
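To make the redirect‑abuse pattern concrete, here is a small triage sketch. The allowlist, helper name, and example URL are illustrative, not taken from any specific campaign writeup: it parses an authorize URL and flags a redirect_uri pointing outside an expected set of hosts, plus the silent prompt=none flow.

```python
from urllib.parse import parse_qs, urlparse

# Illustrative allowlist: redirect hosts your own registered apps actually use.
TRUSTED_REDIRECT_HOSTS = {"myapps.example.com"}

def flag_suspicious_authorize_url(url: str) -> list[str]:
    """Return reasons an OAuth authorize URL deserves a closer look."""
    findings = []
    params = parse_qs(urlparse(url).query)
    redirect_host = urlparse(params.get("redirect_uri", [""])[0]).hostname or ""
    if redirect_host and redirect_host not in TRUSTED_REDIRECT_HOSTS:
        findings.append(f"redirect_uri points to untrusted host: {redirect_host}")
    if params.get("prompt", [""])[0] == "none":
        findings.append("prompt=none: IdP will redirect silently, no UI shown")
    return findings

# Example lure URL: trusted IdP domain, attacker-controlled redirect target.
lure = ("https://login.microsoftonline.com/common/oauth2/v2.0/authorize"
        "?client_id=00000000-0000-0000-0000-000000000000"
        "&redirect_uri=https://attacker.example/cb"
        "&scope=openid&prompt=none")
print(flag_suspicious_authorize_url(lure))
```

The point of the sketch: the hoverable domain is genuinely Microsoft’s, so only the query parameters betray the lure.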
Meanwhile, consent‑phishing campaigns, in which users are tricked into granting OAuth permissions to malicious applications rather than typing a password into a fake page, are surging. One set of campaigns analyzed by researchers created about 17,000 malicious apps, sent over 900,000 consent messages, and affected roughly 900 tenants and 3,000 user accounts.
A Real‑World Attack Chain
Let’s walk through how this looks in a Microsoft 365 org that “did everything right” on MFA.
1. The lure. An employee receives an email about a Teams recording or e‑signature request, with a URL starting on login.microsoftonline.com or accounts.google.com that includes standard OAuth parameters like client_id, redirect_uri, and scope. The domain looks fine, the TLS padlock is there, and the user has been told to trust SSO prompts.
2. The malicious app and consent screen. Behind the scenes, the attacker registered a multi‑tenant app in their own Entra tenant and set the redirect URI to a domain they control. The consent page, rendered by Microsoft, asks for what look like reasonable permissions: Mail.Read, offline_access, Files.Read.All. The user passes MFA, sees a familiar Microsoft consent dialog, and clicks “Accept.” They’ve just handed a malicious app API‑level access to their mailbox and files without ever exposing their password.
3. Redirect abuse (with or without tokens). In the redirect‑abuse variant, the attacker isn’t even after the tokens: they use prompt=none and invalid scopes to force the IdP to redirect immediately to the app’s redirect_uri without showing any UI. Because the URL chain starts at a trusted IdP, many email and browser defenses let it through, and the final landing page drops malware or a phishing framework payload.
4. Long‑lived access via tokens. When consent is the goal, the app exchanges the authorization code for access and refresh tokens, then quietly uses Microsoft Graph or Google APIs to read mail, exfiltrate files, and send internal phishing messages. Those tokens can persist for months and remain valid even after password resets, because they live in the authorization layer, not the authentication layer.
At no point did the attacker bypass MFA or guess a password. They simply convinced the user and the IdP to do what OAuth is designed to do.
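A sketch of why token revocation, not password rotation, is the fix: the consented app redeems its refresh token directly at the token endpoint, and no password or MFA is involved in that exchange. The endpoint shape below is the standard Entra v2.0 token endpoint; every other value is a placeholder.

```python
# Sketch: how a consented app keeps access after a password reset.
# The refresh exchange lives in the authorization layer; no password
# or MFA prompt is involved. All values below are placeholders.
TOKEN_ENDPOINT = "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"

refresh_request = {
    "grant_type": "refresh_token",
    "client_id": "<attacker-app-client-id>",
    "refresh_token": "<refresh-token-from-the-consent-grant>",
    "scope": "Mail.Read offline_access",  # the scopes the user consented to
}
# POSTing this form body returns a fresh access token for Mail.Read until
# the grant or token itself is revoked; rotating the user's password does
# not invalidate it.
```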
The Blind Spot: OAuth as UX, Not Security
Most organizations treat OAuth configuration, scopes, and consent pages as a product or UX concern, not as an identity firewall. A few recurring anti‑patterns show up across tenants hit by these campaigns:
- Overly permissive user consent: any user can grant high‑impact scopes like Mail.ReadWrite or Files.ReadWrite.All to any multi‑tenant app, including ones from unverified publishers.
- No app inventory or owner mapping: security teams can’t answer “Which apps have access to all mailboxes?” without opening the admin UI and scrolling.
- Misunderstanding token risk: leaders assume SSO and MFA mean “we’re safe from account takeover,” but access and refresh tokens issued via OAuth keep working even if you rotate passwords and enforce strong MFA.
The result is exactly what Obsidian and others describe: consent phishing and token‑based attacks bypass MFA and operate almost entirely outside traditional detection guardrails.
A One‑Week Hardening Playbook
You don’t need a six‑month project to get materially safer. Here’s what you can do in roughly five working days.
Day 1–2: Lock Down Consent
- In Microsoft Entra ID or Google Workspace, restrict user consent so that only low‑risk scopes (basic profile, email) from verified publishers are allowed without admin approval.
- Define a scope risk table: scopes like Mail.ReadWrite, Files.ReadWrite.All, Directory.Read.All, and offline_access are admin‑only or blocked; low‑risk scopes are allowed with user consent.
- Require publisher verification or internal publishing for any app that touches mailboxes, files, or directory data.
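That scope risk table can be captured as data and enforced by tooling. This is a minimal sketch with an example classification; the tiers are an illustrative policy, not an official Microsoft or Google risk taxonomy.

```python
# Illustrative scope risk table: admin-only vs. user-consent tiers.
SCOPE_RISK = {
    "Mail.ReadWrite": "admin-only",
    "Files.ReadWrite.All": "admin-only",
    "Directory.Read.All": "admin-only",
    "offline_access": "admin-only",
    "openid": "user-consent",
    "profile": "user-consent",
    "email": "user-consent",
}

def consent_decision(requested_scopes: list[str]) -> str:
    """Return the strictest decision required by any requested scope.
    Unknown scopes default to admin review rather than silent approval."""
    if any(SCOPE_RISK.get(s, "admin-only") == "admin-only" for s in requested_scopes):
        return "admin-only"
    return "user-consent"

print(consent_decision(["openid", "email"]))           # user-consent
print(consent_decision(["openid", "Mail.ReadWrite"]))  # admin-only
```

Defaulting unknown scopes to admin review is the key design choice: attackers rely on scopes nobody has classified yet.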
Day 2–3: Turn On Logs and Review Apps
- Ensure sign‑in, audit, and OAuth consent logs are enabled and retained in Entra/Workspace and your core SaaS apps.
- Pull a list of all OAuth applications in your tenant, including publisher, scopes, who consented, and last activity.
- Hunt for:
  - Apps with broad scopes and unverified or strange publishers.
  - Apps no one can identify as business‑critical.
  - Long‑unused apps that still hold powerful permissions.
- Remove or disable anything unjustified; for the rest, assign a clear business owner.
If you’re using a SaaS security or CASB tool, use its “OAuth apps” or “connected apps” view as a starting point.
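Once you export that inventory, the hunt itself can be scripted. The record fields below (name, scopes, publisher_verified, last_activity) are an assumed export shape for illustration, not a real Entra or Workspace schema.

```python
from datetime import date, timedelta

BROAD_SCOPES = {"Mail.ReadWrite", "Files.ReadWrite.All", "Directory.Read.All"}

def risky_apps(apps: list[dict], today: date, stale_days: int = 90) -> list[str]:
    """Flag apps matching the hunt criteria: broad scopes combined with an
    unverified publisher, or broad scopes with no recent activity."""
    flagged = []
    for app in apps:
        broad = BROAD_SCOPES & set(app["scopes"])
        stale = today - app["last_activity"] > timedelta(days=stale_days)
        if broad and (not app["publisher_verified"] or stale):
            flagged.append(app["name"])
    return flagged

# Hypothetical inventory export.
inventory = [
    {"name": "CRM Sync", "scopes": ["Mail.Read"], "publisher_verified": True,
     "last_activity": date(2025, 6, 1)},
    {"name": "PDF Helper", "scopes": ["Files.ReadWrite.All"],
     "publisher_verified": False, "last_activity": date(2025, 6, 1)},
]
print(risky_apps(inventory, today=date(2025, 6, 10)))  # ['PDF Helper']
```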
Day 3–4: Train the Humans Who Can Click “Accept”
- Use your logs to identify who has granted high‑risk consents in the last 6–12 months.
- Ship a short, targeted training to admins, finance, and power users that shows:
  - An example consent screen with obviously excessive scopes (“has full access to all your mailboxes”).
  - How to verify the publisher and app name.
  - A simple rule: if you don’t recognize the app, or it’s asking for more than basic profile access, stop and contact security.
Awareness matters here: in research on consent phishing, users almost never read the requested scopes, and most assumed that a Microsoft‑looking screen meant the request was safe.
Day 4–5: Build Detection Hooks and a Revocation Playbook
- Add basic detection rules in your SIEM or IdP:
  - Alert on new app registrations with broad scopes.
  - Alert when a high‑privilege consent is granted by a non‑admin.
  - Look for unusual API usage patterns from OAuth apps (large mail or file reads, data‑exfiltration patterns).
- Document a three‑step incident playbook:
  1. Identify and disable the app.
  2. Revoke all grants and tokens for that app in Entra/Workspace.
  3. Triage impact: which mailboxes/files were accessed, what data left, and which accounts need follow‑up.
Microsoft’s guidance on recent redirect‑abuse campaigns explicitly recommends limiting user consent, reviewing app permissions regularly, and removing unused or over‑privileged apps; that’s the baseline.
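The non‑admin high‑privilege consent alert can be expressed as a simple predicate over consent audit events. The event dictionary below is a simplified stand‑in for an Entra or Workspace audit record, and the admin list is a placeholder, not a real log schema.

```python
HIGH_PRIVILEGE = {"Mail.ReadWrite", "Files.ReadWrite.All", "Directory.Read.All"}
ADMINS = {"alice@example.com"}  # placeholder admin list

def alert_on_consent(event: dict) -> bool:
    """Fire when a non-admin grants any high-privilege scope.
    The event shape is a simplified stand-in for a real consent
    audit record from Entra or Workspace."""
    if event["action"] != "consent_granted":
        return False
    granted = set(event["scopes"]) & HIGH_PRIVILEGE
    return bool(granted) and event["actor"] not in ADMINS

event = {"action": "consent_granted", "actor": "bob@example.com",
         "scopes": ["openid", "Mail.ReadWrite"]}
print(alert_on_consent(event))  # True
```

Even this crude rule would have surfaced the consent grant in the attack chain above the moment it happened.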
Treat OAuth Like An Identity Firewall
OAuth was designed to make delegated access easy, not to serve as your security boundary—and attackers have noticed. Consent phishing, redirect abuse, and token‑based attacks all exploit the fact that most orgs treat “Log in with X” as a UX nicety, not as a sensitive control surface.
If you already have SSO and MFA, your next step isn’t more factors; it’s to govern who can grant what to which apps, how long those grants live, and how quickly you can revoke and investigate them. This week, you can narrow user consent, review your top connected apps, and make sure that everyone who can click “Accept” understands that they’re opening a door that passwords and MFA can’t close afterward.
Additional Sources:
- Phishing campaign exploits OAuth redirection to bypass defenses
- Microsoft Warns OAuth Redirect Abuse Delivers Malware to Government Targets