In many ASP.NET Core applications, especially as the project grows, relying on a single authentication scheme quickly stops being enough. Interactive users may sign in with OpenID Connect through Microsoft Entra ID, some external collaborators may use Google OAuth, while server-to-server clients or automation scripts access the system with API keys. As long as these cases remain limited, they tend to be handled separately and somewhat informally; the problem is that, sooner or later, they start overlapping with authorization, claims mapping, access policies, and application logic.
This is where multi-scheme authentication in ASP.NET Core becomes genuinely interesting. The framework already provides almost everything needed, provided the architecture is set up properly: distinct but consistent schemes, runtime selection when necessary, normalized claims, authorization policies that truly reflect the application’s real use cases, and a clear separation between authentication and authorization.
In this article we will look at how to build this kind of setup cleanly, combining OIDC, OAuth, and API keys, while also adding some of the pieces that often make a real difference in production projects: AddPolicyScheme to select the scheme at runtime, claims transformation, authorization requirement handlers, the programmatic use of IAuthorizationService.AuthorizeAsync(), and refresh token flow management.
Authentication vs Authorization
It is worth making this clear right away, because this is one of those misunderstandings that keeps producing confused architectures. Authentication answers the question “who are you?”, while authorization answers the question “what are you allowed to do?”. In a multi-scheme system this distinction matters even more, because the same user or client may arrive through different channels, but must still be evaluated according to a common set of rules.
A well-designed application can delegate identity to external providers, use local cookies to maintain the interactive session, accept API keys for machine-to-machine calls, and then apply consistent authorization policies based on roles, permissions, tenant scope, subscription plan, or other domain-specific attributes.
A typical scenario
A very common model looks like this:
- OpenID Connect as the primary scheme for interactive sign-in, for example through Microsoft Entra ID;
- an OAuth provider as a secondary scheme, such as Google, for external users or collaborators who do not belong to the main tenant;
- API keys for backend integrations, automated jobs, CLI tools, agents, or other non-interactive clients.
This is a sensible setup because it reflects three different needs. Human users benefit from federated sign-in and a cookie-backed session; external users can authenticate with a different provider; automated systems do not need browser redirects, visual consent pages, or interactive sessions, so API keys remain a practical solution, provided they are implemented properly.
Multi-Scheme Auth in ASP.NET Core
ASP.NET Core allows multiple authentication schemes to be registered at the same time. The delicate part is not adding the providers themselves, but deciding which scheme should be the default one and when it makes more sense to select it dynamically.
```csharp
builder.Services
    .AddAuthentication(options =>
    {
        // "smart" is the policy scheme registered with AddPolicyScheme (shown below).
        options.DefaultScheme = "smart";
        options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    })
    .AddCookie(CookieAuthenticationDefaults.AuthenticationScheme, options =>
    {
        options.LoginPath = "/account/login";
        options.AccessDeniedPath = "/account/access-denied";
    })
    .AddOpenIdConnect(OpenIdConnectDefaults.AuthenticationScheme, options =>
    {
        options.Authority = builder.Configuration["EntraId:Authority"];
        options.ClientId = builder.Configuration["EntraId:ClientId"];
        options.ClientSecret = builder.Configuration["EntraId:ClientSecret"];
        options.ResponseType = "code";
        options.SaveTokens = true;
        // Materialize the external identity as a local cookie session.
        options.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    })
    .AddGoogle("Google", options =>
    {
        options.ClientId = builder.Configuration["Google:ClientId"];
        options.ClientSecret = builder.Configuration["Google:ClientSecret"];
    })
    .AddScheme<AuthenticationSchemeOptions, ApiKeyAuthenticationHandler>("ApiKey", _ => { });
```
In this configuration there are already four distinct pieces: cookies for the local session, OIDC for the primary challenge, Google as an alternative scheme, and a custom handler for API keys. At that point, the real question becomes: who decides which scheme should authenticate the current request?
AddPolicyScheme: selecting the scheme at runtime
This is where AddPolicyScheme comes into play, and in hybrid systems it is often the cleanest option. Instead of locking every request to a static authentication scheme, you define a “smart” scheme that forwards the request to the correct handler depending on the context. In practice, the runtime determines whether the request is coming from a browser authenticated with cookies, from a client sending an Authorization: ApiKey ... header, or perhaps from another scheme entirely.
```csharp
builder.Services
    .AddAuthentication(options =>
    {
        options.DefaultScheme = "smart";
        options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    })
    .AddPolicyScheme("smart", "Smart auth scheme", options =>
    {
        options.ForwardDefaultSelector = context =>
        {
            var authorization = context.Request.Headers.Authorization.ToString();

            if (!string.IsNullOrWhiteSpace(authorization) &&
                authorization.StartsWith("ApiKey ", StringComparison.OrdinalIgnoreCase))
            {
                return "ApiKey";
            }

            return CookieAuthenticationDefaults.AuthenticationScheme;
        };
    });
```
This approach avoids controllers filled with conditional logic or duplicated attributes. It is particularly useful when browser users and automated clients coexist within the same application or even on the same HTTP surface. Of course, runtime scheme selection must be designed carefully: the rule should be simple, predictable, and difficult to interpret ambiguously.
If you also expect JWT bearer tokens for external APIs, AddPolicyScheme becomes even more useful, because it can forward to the cookie, API key, or JWT scheme depending on the prefix of the Authorization header.
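As a sketch of that extended selector, assuming a JWT bearer scheme has also been registered with `AddJwtBearer(JwtBearerDefaults.AuthenticationScheme, ...)`, the forwarding rule might look like this:

```csharp
// Sketch: extends the "smart" selector to also recognize JWT bearer tokens.
// Assumes a JWT scheme registered via AddJwtBearer.
options.ForwardDefaultSelector = context =>
{
    var authorization = context.Request.Headers.Authorization.ToString();

    if (authorization.StartsWith("ApiKey ", StringComparison.OrdinalIgnoreCase))
    {
        return "ApiKey";
    }

    if (authorization.StartsWith("Bearer ", StringComparison.OrdinalIgnoreCase))
    {
        return JwtBearerDefaults.AuthenticationScheme;
    }

    // No recognizable Authorization header: assume an interactive browser session.
    return CookieAuthenticationDefaults.AuthenticationScheme;
};
```

The rule stays simple and prefix-based, which keeps the selection predictable and easy to audit.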
OIDC and OAuth: how to federate identity
For interactive users, OpenID Connect remains the most natural choice. In ASP.NET Core it is often used together with a local cookie: the external provider authenticates the user, the middleware validates the returned token, and the application then materializes the session as a cookie. It is a proven model and a very convenient one for MVC applications, Razor Pages, and many hybrid web apps as well.
Using a secondary provider such as Google is just as simple from a technical point of view; the real complexity appears when identity data is not homogeneous. The subject identifier changes, the available claims change, the format of certain attributes changes, and sometimes even the way email, name, and groups are populated differs. This is why it pays to think early about claims normalization.
Claims transformation: normalizing data from different providers
When the application receives principals coming from different providers, one of the most useful things you can do is transform or enrich the claims in a centralized place. Without this step, the risk is ending up with policies and controllers full of ad hoc conditions such as “if it comes from Entra use this claim, if it comes from Google use that one”. That path becomes unmanageable very quickly.
IClaimsTransformation allows you to intercept the authenticated principal and produce a more consistent version for the rest of the application. It is the right place to normalize roles, map custom claims, add tenant context, copy identifiers into standardized claim types, or turn external information into internal application permissions.
```csharp
public sealed class AppClaimsTransformation : IClaimsTransformation
{
    public Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        var identity = principal.Identity as ClaimsIdentity;
        if (identity is null || !identity.IsAuthenticated)
        {
            return Task.FromResult(principal);
        }

        // The transformation can run more than once per request,
        // hence the HasClaim guards before adding anything.
        if (!identity.HasClaim(c => c.Type == ClaimTypes.NameIdentifier))
        {
            var sub = identity.FindFirst("sub")?.Value
                      ?? identity.FindFirst(ClaimTypes.NameIdentifier)?.Value;

            if (!string.IsNullOrWhiteSpace(sub))
            {
                identity.AddClaim(new Claim(ClaimTypes.NameIdentifier, sub));
            }
        }

        var email = identity.FindFirst(ClaimTypes.Email)?.Value
                    ?? identity.FindFirst("email")?.Value;

        if (!string.IsNullOrWhiteSpace(email) &&
            !identity.HasClaim(c => c.Type == "app:email"))
        {
            identity.AddClaim(new Claim("app:email", email));
        }

        return Task.FromResult(principal);
    }
}
```
```csharp
builder.Services.AddTransient<IClaimsTransformation, AppClaimsTransformation>();
```
This phase is also a good place to connect external identities to a local model, for example an allowlist, application roles stored in the database, tenant membership, or granular permissions. In enterprise projects, this is one of those choices that tends to pay off much more than it initially seems.
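As a sketch of that connection, assuming a hypothetical `IAppRoleStore` application service that resolves local roles for an external identity (for example from a database), a second transformation might enrich the principal like this:

```csharp
// Sketch only: IAppRoleStore is a hypothetical application service,
// not part of ASP.NET Core. It maps an external subject id to local roles.
public sealed class LocalRolesClaimsTransformation : IClaimsTransformation
{
    private readonly IAppRoleStore _roles;

    public LocalRolesClaimsTransformation(IAppRoleStore roles) => _roles = roles;

    public async Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        var identity = principal.Identity as ClaimsIdentity;
        var subject = identity?.FindFirst(ClaimTypes.NameIdentifier)?.Value;

        if (identity is null || subject is null)
        {
            return principal;
        }

        foreach (var role in await _roles.GetRolesAsync(subject))
        {
            // Guard against the transformation running more than once per request.
            if (!identity.HasClaim(ClaimTypes.Role, role))
            {
                identity.AddClaim(new Claim(ClaimTypes.Role, role));
            }
        }

        return principal;
    }
}
```

Because the roles come from a local store, the rest of the application can use `RequireRole` and role checks without caring which provider authenticated the user.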
API key authentication
API keys remain a practical solution for non-interactive clients, but only if they are treated with the same rigor we would apply to any other sensitive credential. What should be avoided is the classic lazy approach: a key generated once, stored in plaintext, compared as a raw string, with no scope, no rotation, and no revocation.
A good implementation should at least include:
- robustly generated keys, shown in plaintext only once;
- storage as salted hashes, not plaintext;
- scopes or permissions associated with each key;
- immediate revocation and, even better, expiration or rotation;
- auditing of relevant usage and failure events.
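The storage side of the list above can be sketched with nothing but the BCL: the raw key is returned to the caller exactly once, only a salted SHA-256 hash is persisted, and verification uses a constant-time comparison. This is a minimal sketch; a real implementation would also attach scopes, expiration, and audit data to each stored key.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Sketch: generate, hash, and verify API keys without ever storing plaintext.
public static class ApiKeyHasher
{
    // Generates a new random API key; show it to the caller exactly once.
    public static string GenerateKey() =>
        Convert.ToBase64String(RandomNumberGenerator.GetBytes(32));

    // Produces the salt + hash pair to persist instead of the raw key.
    public static (byte[] Salt, byte[] Hash) HashKey(string rawKey)
    {
        var salt = RandomNumberGenerator.GetBytes(16);
        return (salt, ComputeHash(rawKey, salt));
    }

    // Constant-time comparison avoids leaking information through timing.
    public static bool Verify(string rawKey, byte[] salt, byte[] expectedHash) =>
        CryptographicOperations.FixedTimeEquals(ComputeHash(rawKey, salt), expectedHash);

    private static byte[] ComputeHash(string rawKey, byte[] salt)
    {
        var keyBytes = Encoding.UTF8.GetBytes(rawKey);
        var buffer = new byte[salt.Length + keyBytes.Length];
        salt.CopyTo(buffer, 0);
        keyBytes.CopyTo(buffer, salt.Length);
        return SHA256.HashData(buffer);
    }
}
```

A plain salted hash is acceptable here because API keys are high-entropy random values, unlike passwords, which would require a deliberately slow KDF.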
A custom handler can read the Authorization: ApiKey {key} header, validate the key, and build a ClaimsPrincipal consistent with the rest of the application:
```csharp
public sealed class ApiKeyAuthenticationHandler : AuthenticationHandler<AuthenticationSchemeOptions>
{
    private readonly IApiKeyValidator _validator;

    // Note: the ISystemClock parameter is obsolete as of .NET 8;
    // an equivalent constructor overload without it is available there.
    public ApiKeyAuthenticationHandler(
        IOptionsMonitor<AuthenticationSchemeOptions> options,
        ILoggerFactory logger,
        UrlEncoder encoder,
        ISystemClock clock,
        IApiKeyValidator validator)
        : base(options, logger, encoder, clock)
    {
        _validator = validator;
    }

    protected override async Task<AuthenticateResult> HandleAuthenticateAsync()
    {
        var header = Request.Headers.Authorization.ToString();

        if (string.IsNullOrWhiteSpace(header) ||
            !header.StartsWith("ApiKey ", StringComparison.OrdinalIgnoreCase))
        {
            return AuthenticateResult.NoResult();
        }

        var rawKey = header["ApiKey ".Length..].Trim();
        var result = await _validator.ValidateAsync(rawKey);

        if (!result.Succeeded)
        {
            return AuthenticateResult.Fail("Invalid API key.");
        }

        var claims = new List<Claim>
        {
            new Claim(ClaimTypes.NameIdentifier, result.SubjectId),
            new Claim("auth_type", "api_key")
        };

        foreach (var permission in result.Permissions)
        {
            claims.Add(new Claim("permission", permission));
        }

        var identity = new ClaimsIdentity(claims, Scheme.Name);
        var principal = new ClaimsPrincipal(identity);
        var ticket = new AuthenticationTicket(principal, Scheme.Name);

        return AuthenticateResult.Success(ticket);
    }
}
```
Authorization: Policies and Requirement Handlers
Once the request has been authenticated, the part that matters most to the domain begins: determining whether that principal is allowed to perform the requested operation. For simple cases, roles or direct claim checks inside policies are enough, but as soon as tenant scope, ownership, service plans, composite permissions, or contextual rules enter the picture, requirement handlers become the better option.
A custom requirement lets you express a business rule in a readable and reusable way. It is much better than scattering User.HasClaim(...) and User.IsInRole(...) checks across controllers, page models, and services.
```csharp
public sealed class PermissionRequirement : IAuthorizationRequirement
{
    public PermissionRequirement(string permission)
    {
        Permission = permission;
    }

    public string Permission { get; }
}
```
```csharp
public sealed class PermissionRequirementHandler : AuthorizationHandler<PermissionRequirement>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        PermissionRequirement requirement)
    {
        var hasPermission = context.User.Claims.Any(c =>
            c.Type == "permission" &&
            string.Equals(c.Value, requirement.Permission, StringComparison.OrdinalIgnoreCase));

        if (hasPermission)
        {
            context.Succeed(requirement);
        }

        return Task.CompletedTask;
    }
}
```
```csharp
builder.Services.AddAuthorization(options =>
{
    options.AddPolicy("Documents.Read", policy =>
        policy.RequireAuthenticatedUser()
              .AddRequirements(new PermissionRequirement("documents.read")));

    options.AddPolicy("AdminOnly", policy =>
        policy.RequireRole("Admin"));
});

builder.Services.AddSingleton<IAuthorizationHandler, PermissionRequirementHandler>();
```
This approach scales well even when permissions do not come directly from the token or the API key, but must instead be resolved from application data. In that case the handler can query domain services, databases, or tenant context, provided it does so efficiently and without turning every request into a chain of expensive lookups.
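As a sketch of that pattern, assuming a hypothetical `IPermissionStore` application service, a handler can resolve permissions from application data rather than from claims:

```csharp
// Sketch only: IPermissionStore is a hypothetical application service
// that resolves permissions from the database or tenant context.
public sealed class StoredPermissionHandler : AuthorizationHandler<PermissionRequirement>
{
    private readonly IPermissionStore _store;

    public StoredPermissionHandler(IPermissionStore store) => _store = store;

    protected override async Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        PermissionRequirement requirement)
    {
        var subject = context.User.FindFirst(ClaimTypes.NameIdentifier)?.Value;
        if (subject is null)
        {
            return; // not succeeding leaves the requirement unmet
        }

        // The store should cache per request so repeated policy evaluations
        // do not turn into repeated database round-trips.
        var permissions = await _store.GetPermissionsAsync(subject);
        if (permissions.Contains(requirement.Permission))
        {
            context.Succeed(requirement);
        }
    }
}
```

Note that a handler with scoped dependencies such as a DbContext should be registered with `AddScoped` rather than `AddSingleton`.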
IAuthorizationService.AuthorizeAsync
[Authorize] attributes remain extremely convenient, but they do not cover every scenario. As soon as the decision depends on a concrete resource, a record loaded at runtime, or more articulated application logic, it is usually better to use IAuthorizationService.AuthorizeAsync() programmatically.
This is the classic case for resource-based policies: a user may edit a document only if they belong to the correct tenant, or if they are the owner of the resource, or if they hold a specific administrative permission.
```csharp
public sealed class DocumentAuthorizationHandler : AuthorizationHandler<PermissionRequirement, Document>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        PermissionRequirement requirement,
        Document resource)
    {
        var isOwner =
            context.User.FindFirst(ClaimTypes.NameIdentifier)?.Value == resource.OwnerId;

        var hasPermission = context.User.Claims.Any(c =>
            c.Type == "permission" && c.Value == requirement.Permission);

        if (isOwner || hasPermission)
        {
            context.Succeed(requirement);
        }

        return Task.CompletedTask;
    }
}

// The resource-based handler must be registered as well:
builder.Services.AddSingleton<IAuthorizationHandler, DocumentAuthorizationHandler>();
```
```csharp
public sealed class DocumentsController : Controller
{
    private readonly IAuthorizationService _authorizationService;
    private readonly IDocumentRepository _repository;

    public DocumentsController(
        IAuthorizationService authorizationService,
        IDocumentRepository repository)
    {
        _authorizationService = authorizationService;
        _repository = repository;
    }

    public async Task<IActionResult> Edit(Guid id)
    {
        var document = await _repository.GetByIdAsync(id);
        if (document is null)
        {
            return NotFound();
        }

        var authResult = await _authorizationService.AuthorizeAsync(
            User, document, new PermissionRequirement("documents.write"));

        if (!authResult.Succeeded)
        {
            return Forbid();
        }

        return View(document);
    }
}
```
This programmatic form is often more expressive and more correct than attributes when the resource being protected is not known at compile time.
Refresh token: do you really need it?
The topic of refresh tokens deserves some clarification, because it is often brought up even in scenarios where it is not actually needed. In a classic ASP.NET Core web application with OIDC sign-in and a local cookie, it is often the cookie that represents the application session, while the provider tokens remain behind the scenes. In that case, a refresh token may be useful if the application needs to call external APIs on behalf of the user over time, not simply to keep the local sign-in alive.
If, on the other hand, you are building a SPA or a client that works directly with access tokens and refresh tokens, then the flow becomes central and must be treated very carefully: secure storage, expiration, revocation, rotation, and protection against theft all matter.
In a server-side OIDC scenario, enabling token persistence is often the first step:
```csharp
.AddOpenIdConnect(OpenIdConnectDefaults.AuthenticationScheme, options =>
{
    options.Authority = builder.Configuration["EntraId:Authority"];
    options.ClientId = builder.Configuration["EntraId:ClientId"];
    options.ClientSecret = builder.Configuration["EntraId:ClientSecret"];
    options.ResponseType = "code";
    options.SaveTokens = true;
    options.Scope.Add("offline_access");
});
```
The offline_access scope is typically what enables the issuance of a refresh token, assuming the provider supports it and the IdP configuration allows it. From that point on, however, the issue is not simply “having a refresh token”, but managing it correctly. If it has to be stored server-side, it must be protected like any other sensitive secret; if it is used to obtain new access tokens for downstream APIs, it is wise to centralize the refresh logic and handle failures in a predictable way.
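The shape of that centralized refresh logic can be sketched as follows. This is only a sketch of the standard OAuth 2.0 `refresh_token` grant (RFC 6749, section 6): the token endpoint URL and client credentials are assumed to come from configuration, and production code should generally prefer ready-made token management components over hand-rolled flows.

```csharp
// Sketch only: a minimal, centralized refresh call with predictable failure handling.
public sealed record RefreshResult(bool Succeeded, string? AccessToken, string? RefreshToken);

public sealed class TokenRefresher
{
    private readonly HttpClient _http;
    private readonly string _tokenEndpoint;
    private readonly string _clientId;
    private readonly string _clientSecret;

    public TokenRefresher(HttpClient http, string tokenEndpoint, string clientId, string clientSecret)
        => (_http, _tokenEndpoint, _clientId, _clientSecret) = (http, tokenEndpoint, clientId, clientSecret);

    public async Task<RefreshResult> RefreshAsync(string refreshToken)
    {
        var response = await _http.PostAsync(_tokenEndpoint, new FormUrlEncodedContent(
            new Dictionary<string, string>
            {
                ["grant_type"] = "refresh_token",
                ["refresh_token"] = refreshToken,
                ["client_id"] = _clientId,
                ["client_secret"] = _clientSecret
            }));

        if (!response.IsSuccessStatusCode)
        {
            // Fail predictably: the caller decides whether to force re-authentication.
            return new RefreshResult(false, null, null);
        }

        using var json = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        var root = json.RootElement;

        return new RefreshResult(
            true,
            root.GetProperty("access_token").GetString(),
            // Providers that rotate refresh tokens return a new one; otherwise keep the old.
            root.TryGetProperty("refresh_token", out var rt) ? rt.GetString() : refreshToken);
    }
}
```

Keeping this in one place makes rotation and failure handling uniform instead of being re-implemented next to every downstream API call.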
In many enterprise projects, it is better to avoid overly creative manual implementations and rely instead on provider libraries or components already designed for token caching and renewal. Refresh tokens are one of those areas where a naive simplification can quickly become a serious security problem.
Claims, local roles, and the authorization model
A multi-scheme system works really well when external identities are mapped back to a clear local model. In other words, authenticating with Entra, Google, or an API key should not change the way the domain reasons about roles, permissions, tenant scope, and operational capabilities. What changes is the source of identity, not the access logic.
It is often a good idea to maintain locally at least:
- an allowlist of users or clients actually allowed into the system;
- application roles distinct from those possibly provided by the external identity provider;
- memberships or scopes tied to the current tenant;
- uniform auditing for logins, denied access, key usage, revocations, and sensitive operations.
This separation avoids tying the authorization model too closely to the peculiarities of whichever identity provider happens to be in use.
Sign-out, revocation, and credential lifecycle
In a system with multiple authentication schemes, sign-out and revocation also need to be designed carefully. For browser sessions, this usually means clearing both the local cookie and, when appropriate, the session at the OIDC provider. For API keys, the concept is different: there is no sign-out, only revocation, rotation, expiration, and auditing of subsequent use.
The temptation to treat all channels as if they were equivalent is understandable, but it does not work. Each scheme has a different lifecycle, and the code should reflect that explicitly.
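For the browser case, the two-step sign-out can be sketched like this: first the local cookie session is cleared, then the OIDC end-session flow is triggered at the provider. This is a sketch of the standard ASP.NET Core pattern; whether the provider actually ends its own session depends on its end-session support.

```csharp
// Sketch: clear the local session, then federate the sign-out to the OIDC provider.
public async Task<IActionResult> Logout()
{
    // 1. Remove the local cookie session.
    await HttpContext.SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme);

    // 2. Redirect to the provider's end-session endpoint (if supported),
    //    returning to the application root afterwards.
    return SignOut(
        new AuthenticationProperties { RedirectUri = "/" },
        OpenIdConnectDefaults.AuthenticationScheme);
}
```

API keys, by contrast, have no equivalent of this flow: the validator simply stops accepting a revoked key on the next request.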
Conclusions
When designing a multi-scheme authentication system, one of the most useful things you can do is resist the temptation to invent too much. ASP.NET Core already provides the right building blocks to construct an identity system: external providers for OIDC and OAuth, cookie authentication, custom handlers, policy schemes, claims transformation, authorization handlers, and programmatic authorization. The real work lies in combining them without mixing their responsibilities.
A reasonable setup, in most cases, looks like this: external providers for identity, cookies for the interactive user session, API keys for automated clients; all of it orchestrated by a claims system normalized into a coherent format, together with local application policies that decide who can do what. It is not the only possible model, but it is one of those that scale best as the application grows.
Combining OIDC, OAuth, and API keys in ASP.NET Core is not especially difficult from a purely technical perspective; the real challenge is doing it without introducing inconsistencies between authentication, claims, access policies, and domain logic. AddPolicyScheme helps select the correct scheme at runtime, IClaimsTransformation makes it possible to normalize identities coming from different sources, requirement handlers make authorization more expressive, and IAuthorizationService.AuthorizeAsync() covers the cases where decisions depend on concrete resources rather than simple static attributes.
As is often the case in security, the problem is not so much putting multiple components together, but preventing exceptions, shortcuts, and implicit rules from accumulating over time until they become hard to govern. A well-designed multi-scheme setup, like the one outlined in this article, is meant precisely to prevent that.
References
- Overview of ASP.NET Core authentication - Official overview of the framework’s authentication mechanisms.
- Introduction to authorization in ASP.NET Core - Introduction to policies, requirements, and the authorization models supported by the framework.
- Policy-based authorization in ASP.NET Core - Official documentation on policies, requirement handlers, and resource-based authorization.
- Claims-based authorization in ASP.NET Core - Official guide to claims handling and their integration into authorization policies.
- External provider authentication in ASP.NET Core - Configuration of external providers, including Google and other social/OAuth schemes.
- Configure OpenID Connect Web authentication in ASP.NET Core - Practical setup of OpenID Connect in an ASP.NET Core web application.
- Refresh tokens in the Microsoft identity platform - Details on how refresh tokens work in the Microsoft identity ecosystem.
- OAuth 2.0 Refresh Tokens - Useful reference for the refresh token flow in OAuth 2.0.
- OpenID Connect Core 1.0 - The core specification for understanding the OIDC mechanisms behind federated authentication.

