ASP.NET Core Hardening Guide: Upload Security, Data Protection, CSRF, and HSTS

When it comes to security in ASP.NET Core, the hard part is not remembering to enable HTTPS or configure Security Headers, which we have already covered extensively in the past. The hard part is treating security as a cross-cutting requirement, one that affects file uploads, secret management, antiforgery tokens, HTTP headers, application-level encryption, and protection against abuse. These are all areas where an application that looks solid on the surface can start to fail, often not because of a dramatic vulnerability, but because of a long series of overly permissive choices.

In this article we will go through a set of practical best practices to harden an ASP.NET Core web application, with a deliberately operational focus. The goal is not to assemble yet another generic checklist, but to look more broadly at the strategies and framework features that can be used to protect the most exposed entry points, and to understand which ones are actually worth adopting.

Threat model: mapping the risks

Before talking about code, it makes sense to define the threat model. Without that step, it becomes very easy to spend time protecting the wrong things while overlooking the areas that are actually exposed. In a typical ASP.NET Core application that handles authentication, file uploads, administrative areas, and HTTP APIs, the most common scenarios usually include the following:

  • uploads of untrusted content, including malicious files, MIME-spoofed files, corrupted documents, hostile compressed archives, or payloads specifically designed to break parsers and libraries;
  • cookie theft or improper cookie reuse, along with cross-site requests targeting endpoints that perform state-changing operations;
  • brute-force attacks and abuse of sensitive endpoints, especially login, password reset, public APIs, and computationally expensive endpoints;
  • accidental exposure of application secrets, API keys, connection strings, and cryptographic material;
  • misconfigurations on the reverse proxy side, or insufficient HTTP headers that leave preventable attack surface exposed;
  • legitimate but out-of-scope access, which requires auditing and traceability, not just authentication.

At the same time, it is important to be honest about the limits. If the host is compromised at the operating system level, an attacker with sufficient privileges may gain access to Data Protection keys, secrets stored in memory, and, more generally, anything the process can read. Defending against that level of compromise requires infrastructure controls, not just application-level ones.

Upload security

Uploads are one of the most delicate areas in any web application. The reason is straightforward: we are accepting arbitrary input, often large in size, that will later be stored, analyzed, indexed, or processed by downstream components. Trusting the file name or the content type sent by the browser is a classic mistake.

Extension and MIME type allowlists

The first sensible defense is an explicit check against a list of allowed extensions and accepted content types. An allowlist is almost always preferable to a denylist: supported formats should be few, declared explicitly, and easy to understand. A denylist tends to become incomplete very quickly and usually grows in a messy, inconsistent way.

This validation should happen as early as possible, ideally before the content is persisted in its final location. In many projects I have seen this check moved too far downstream, sometimes after saving the file temporarily to disk. It works, but it increases the operational surface for no real benefit.

This check alone is not enough, because content type can be forged and the extension does not provide strong guarantees either. Even so, it remains a useful first filter, especially to discard content that is clearly outside policy.
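A minimal sketch of such a check is shown below; the names `AllowedExtensions`, `AllowedContentTypes`, and `IsUploadAllowed`, as well as the specific formats in the lists, are illustrative and should be adapted to the application's actual policy:

```csharp
// Illustrative allowlist check for IFormFile uploads.
// Both lists should stay short, explicit, and easy to audit.
private static readonly HashSet<string> AllowedExtensions =
    new(StringComparer.OrdinalIgnoreCase) { ".pdf", ".png", ".jpg" };

private static readonly HashSet<string> AllowedContentTypes =
    new(StringComparer.OrdinalIgnoreCase)
    { "application/pdf", "image/png", "image/jpeg" };

public static bool IsUploadAllowed(IFormFile file)
{
    var extension = Path.GetExtension(file.FileName);
    return !string.IsNullOrEmpty(extension)
        && AllowedExtensions.Contains(extension)
        && AllowedContentTypes.Contains(file.ContentType);
}
```

Remember that `file.ContentType` comes from the client and can be forged, which is exactly why this filter is only the first layer and not the whole defense.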

Content validation and antimalware scanning

If the application accepts documents coming from users, customers, or external integrations, malware scanning should be part of the standard flow. ClamAV is a common choice, especially in Linux and container-based environments, but the principle holds with other engines as well: the file should be analyzed before it enters the normal processing pipeline.

One important choice concerns the behavior when the scanner is unavailable. In production, a fail-closed approach makes sense in most cases: if the scanner does not respond, the upload is rejected. Fail-open can make sense in development or in a few very specific scenarios, but it should always be an explicit decision.

A robust flow usually looks like this:

  • initial validation of extension, MIME type, and size;
  • optional save to a temporary area, or direct streaming to a scanning service;
  • quarantine or immediate rejection in case of a suspicious result;
  • final persistence and downstream processing only after all checks have passed.
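The scanning step can be sketched as follows, assuming the nClam client library is installed and ClamAV is reachable; the host and port are illustrative:

```csharp
// Fail-closed scan step: if the scanner is unreachable or errors out,
// the file is treated as unsafe and the upload is rejected.
public async Task<bool> IsFileCleanAsync(Stream fileStream)
{
    try
    {
        var clam = new ClamClient("localhost", 3310);
        var result = await clam.SendAndScanFileAsync(fileStream);
        return result.Result == ClamScanResults.Clean;
    }
    catch
    {
        // Fail-closed by default; a fail-open variant should be an
        // explicit, documented decision, not an accident.
        return false;
    }
}
```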

Streaming, maximum size, and timeouts

Another common mistake is reading the entire file into memory because “uploads will be small anyway”. That is a bet that is eventually lost. Uploads should be handled as streams whenever possible, with explicit size limits both in ASP.NET Core and at the reverse proxy level.
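In ASP.NET Core, the limits can be made explicit at several levels; the byte values below are purely illustrative:

```csharp
// Global cap at the Kestrel level.
builder.WebHost.ConfigureKestrel(options =>
{
    options.Limits.MaxRequestBodySize = 30_000_000; // ~30 MB
});

// Cap for multipart form bodies.
builder.Services.Configure<FormOptions>(options =>
{
    options.MultipartBodyLengthLimit = 10_000_000; // ~10 MB
});

// Stricter per-endpoint override on a controller action.
[HttpPost("upload")]
[RequestSizeLimit(5_000_000)]
public async Task<IActionResult> Upload(IFormFile file) { /* ... */ }
```

Whatever values you pick, keep them aligned with the equivalent limit on the reverse proxy (for example `client_max_body_size` in Nginx), otherwise one of the two layers silently wins.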

If the application also supports URL-based ingestion, the controls need to be stricter still: tight timeouts, maximum downloadable size, blocking plain HTTP unless there is a very good reason to allow it, and careful handling of redirects and SSRF. This is one of those areas where systems quietly become too permissive over time.
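A defensive download sketch, with illustrative limits, could look like this:

```csharp
// HTTPS-only, no automatic redirects, tight timeout, capped response size.
if (uri.Scheme != Uri.UriSchemeHttps)
    throw new InvalidOperationException("Plain HTTP downloads are not allowed.");

using var client = new HttpClient(new HttpClientHandler { AllowAutoRedirect = false })
{
    Timeout = TimeSpan.FromSeconds(10),
    MaxResponseContentBufferSize = 10_000_000 // throws if the body exceeds the cap
};

var bytes = await client.GetByteArrayAsync(uri);
```

Full SSRF protection also requires validating the resolved IP address against internal ranges, which is out of scope for this sketch.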

File name sanitization and separate storage

The original file name should never be used as the physical identifier in storage. It is better to generate an internal name, use a path that does not derive from user input, and store the original file name only as metadata. It is a simple measure, but it prevents collisions, badly handled traversal issues, and several other avoidable problems.

Whenever possible, uploaded files should be stored outside the web root. Serving user-uploaded content directly as static assets is convenient, but it tends to create more problems than it solves.
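The naming and storage rules above can be condensed into a few lines; the storage path is illustrative and must point outside the web root:

```csharp
// Generate an internal identifier; the original name becomes metadata only.
var safeExtension = Path.GetExtension(file.FileName); // already validated against the allowlist
var storageName = $"{Guid.NewGuid():N}{safeExtension}";
var storageRoot = "/var/app-uploads";                 // outside the web root
var fullPath = Path.Combine(storageRoot, storageName);

await using var target = File.Create(fullPath);
await file.CopyToAsync(target);

// Persist file.FileName separately (e.g. in the database) for display purposes only.
```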

Secret handling

A significant share of application security incidents starts here. API tokens committed to the repository, connection strings copied around in plain text, duplicated config files across environments, keys leaking into logs or telemetry. In ASP.NET Core, the right path is fairly clear: secrets should come from dedicated providers and flow through the configuration pipeline without improvised shortcuts.

Development, CI/CD, and production environments

In local development, User Secrets is often the cleanest option for developers. In containerized or pipeline-driven environments, environment variables are a common choice. In production, it is usually better to rely on a real secret store such as Azure Key Vault. The important part is maintaining a coherent override chain: the configuration keys should stay the same, while only the provider changes.
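The override chain works precisely because the key names never change; only the provider does. A small illustration, where `Storage:ApiKey` is a hypothetical key:

```csharp
// Local development: dotnet user-secrets set "Storage:ApiKey" "dev-value"
// CI/CD:            export Storage__ApiKey=ci-value  (double underscore maps to ":")
// Production:       the same key resolved from a secret store provider.

// Application code stays identical across all environments:
var apiKey = builder.Configuration["Storage:ApiKey"];
```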

In production, if you use Azure Key Vault, the integration follows the same pattern:
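A minimal sketch, assuming the Azure.Identity and Azure.Extensions.AspNetCore.Configuration.Secrets packages are installed; the vault URI is illustrative:

```csharp
// Key Vault secrets are merged into the normal configuration pipeline,
// so the same keys used with User Secrets resolve here as well.
builder.Configuration.AddAzureKeyVault(
    new Uri("https://my-vault.vault.azure.net/"),
    new DefaultAzureCredential());
```

`DefaultAzureCredential` lets the same code work with managed identities in production and developer credentials locally, which keeps the chain coherent.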

Encrypting application secrets

There are cases where a secret is not just runtime configuration, but application data that must be stored: user-provided API keys, third-party credentials, integration tokens for different tenants, and similar cases. In those scenarios, saying “I keep it in configuration” is no longer enough, because the data ends up in the database.

This is where strong application-level encryption, such as AES-GCM, becomes the right tool, with a clear separation between the master key and the encrypted data. Key rotation must also be planned in advance: if the master key changes, you need a strategy to re-encrypt existing data without breaking everything already stored.
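An encryption sketch using the built-in `AesGcm` class (.NET 8 constructor shown); `EncryptSecret` is a hypothetical helper, and the master key is assumed to come from a secret store, never from the database:

```csharp
// AES-GCM with a 256-bit master key kept outside the database.
public static byte[] EncryptSecret(byte[] masterKey, byte[] plaintext)
{
    var nonce = RandomNumberGenerator.GetBytes(AesGcm.NonceByteSizes.MaxSize); // 12 bytes
    var ciphertext = new byte[plaintext.Length];
    var tag = new byte[AesGcm.TagByteSizes.MaxSize]; // 16 bytes

    using var aes = new AesGcm(masterKey, tag.Length);
    aes.Encrypt(nonce, plaintext, ciphertext, tag);

    // Store nonce + tag + ciphertext together; only the key stays external.
    return nonce.Concat(tag).Concat(ciphertext).ToArray();
}
```

For rotation, it helps to prefix each record with a key version identifier, so existing data can be re-encrypted incrementally when the master key changes.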

One detail that deserves more attention than it usually gets: plain-text secrets should never end up in logs, diagnostic serialization, generic audit records, or unfiltered exception payloads. It happens more often than many teams realize.

ASP.NET Core Data Protection

Many developers associate Data Protection only with authentication cookies and antiforgery tokens, but it is also useful in custom scenarios, provided that it is used carefully. The first thing to do is configure the key ring correctly, especially in production and even more so in multi-instance deployments.

Key persistence and application isolation

On a single instance, storing keys on a persistent file system may be enough. In a cluster, the instances need to share the same key ring, otherwise each node will start issuing protected payloads that the others cannot decrypt. It is also good practice to set an explicit application name, so different applications do not accidentally share the same key space.

If the application runs in Azure or in a more structured environment, it makes sense to evaluate centralized storage and dedicated services to protect the keys.
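A typical configuration, with an illustrative application name and key path; the Azure lines assume the corresponding Azure.Extensions.AspNetCore.DataProtection packages:

```csharp
builder.Services.AddDataProtection()
    // Explicit name: prevents different apps from sharing the same key space.
    .SetApplicationName("MyApp")
    // In a cluster, this path must be shared by all instances.
    .PersistKeysToFileSystem(new DirectoryInfo("/var/dpkeys"));

// Azure alternative (package-provided extensions):
// .PersistKeysToAzureBlobStorage(new Uri("<blob-sas-uri>"))
// .ProtectKeysWithAzureKeyVault(new Uri("<key-identifier>"), new DefaultAzureCredential());
```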

IDataProtector.CreateProtector for custom scenarios

When you need to protect an application value that does not require portable encryption across different stacks, Data Protection is often more convenient than a fully custom cryptographic solution. Typical examples include one-time tokens for internal workflows, opaque references exposed in URLs, protected serialization of sensitive identifiers, or small temporary payloads signed and encrypted by the server.

In those scenarios, IDataProtectionProvider.CreateProtector() lets you define a precise purpose string that isolates the protected material by usage context. That detail matters a lot: two protectors with different purposes should never be interchangeable.

For expiring tokens, the time-limited protector is usually the better option:
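A sketch of both variants; the purpose string and payload are illustrative, and the time-limited protector requires the Microsoft.AspNetCore.DataProtection.Extensions package:

```csharp
// The purpose string isolates protected payloads by usage context:
// two protectors with different purposes are never interchangeable.
var protector = dataProtectionProvider.CreateProtector("MyApp.Invites.v1");

// Time-limited variant for expiring tokens.
var limited = protector.ToTimeLimitedDataProtector();
var token = limited.Protect("invite:42", lifetime: TimeSpan.FromHours(2));

// Unprotect throws if the token is expired, tampered with,
// or was produced under a different purpose.
var payload = limited.Unprotect(token);
```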

I would not use Data Protection for everything. If you need interoperability with other systems, complex rotation logic, or very fine-grained control over format and algorithms, explicit encryption may be a better fit. For internal application scenarios, however, it is an extremely useful tool and still somewhat underrated.

CSRF in ASP.NET Core

CSRF protection is still necessary whenever the application uses authentication cookies and accepts state-changing requests. It is not an “old” problem. It is simply less visible than other vulnerabilities, which is why it is sometimes neglected when teams move to hybrid SPAs, admin panels, or less frequently touched MVC areas.

[ValidateAntiForgeryToken] vs AutoValidateAntiforgeryTokenAttribute

A practical approach works best here. [ValidateAntiForgeryToken] validates the antiforgery token only where you explicitly apply it. That is fine when you want precise control, perhaps on selected actions or controllers, but it requires constant discipline. In larger teams or growing codebases, it is very easy to forget it on a POST, PUT, or DELETE.

AutoValidateAntiforgeryTokenAttribute, on the other hand, when applied globally, is almost always the safer choice for MVC applications and Razor Pages that rely on cookies. It automatically validates state-changing requests while ignoring GET, HEAD, OPTIONS, and TRACE. In practical terms, it turns antiforgery protection from an opt-in mechanism into a safer default.

When does it still make sense to use [ValidateAntiForgeryToken]? Mainly when you want to mark particularly sensitive endpoints explicitly, or in codebases where introducing a global policy is not yet feasible. In new projects, a global policy is usually the better choice.

Two practical considerations are worth keeping in mind:

  • if the application uses bearer tokens in the Authorization header instead of cookies, the CSRF risk changes substantially and often does not apply in the same way;
  • in hybrid applications where JavaScript frontends call cookie-protected endpoints, the antiforgery token must also be handled correctly on the client side.

Antiforgery configuration example

The delicate part is not enabling the service, but using it consistently with the type of client that is calling the application.
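A typical setup for an MVC application, where the header name is a common convention rather than a framework default:

```csharp
// Global auto-validation for state-changing requests (POST, PUT, DELETE, PATCH);
// GET, HEAD, OPTIONS, and TRACE are ignored automatically.
builder.Services.AddControllersWithViews(options =>
{
    options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute());
});

// Optional: accept the token from a header so JavaScript clients
// can send it with fetch/XHR requests.
builder.Services.AddAntiforgery(options =>
{
    options.HeaderName = "X-CSRF-TOKEN";
});
```

On the client side, the token still has to be read (for example from a hidden field or a cookie issued for that purpose) and attached to every state-changing request.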

Rate limiting

ASP.NET Core includes a solid rate limiting middleware, and it is worth taking seriously. Limiting traffic is not only useful against pure denial-of-service attempts. It also helps slow down brute-force attacks, enumeration, aggressive scraping, and excessive consumption of expensive endpoints.

A single global policy is rarely enough. In most cases it makes sense to separate at least:

  • authentication endpoints;
  • public or semi-public APIs;
  • particularly expensive operations such as exports, parsing, uploads, or reporting.

Sliding window and token bucket

In practice, the most useful policies are often sliding window and token bucket.

A sliding window policy is useful when you want to avoid the rigid behavior of a fixed window. It distributes limits more evenly over time and reduces the classic burst issue that happens around the edge of adjacent windows.

A token bucket policy is very effective when you want to allow controlled bursts while still enforcing a sustainable average rate. For many public APIs, it is a more natural fit than fixed window because it reflects real traffic patterns more accurately.
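Both policies can be registered with the built-in middleware; the policy names and numeric values below are illustrative:

```csharp
builder.Services.AddRateLimiter(options =>
{
    // Sliding window: smooths out the burst at the edge of adjacent windows.
    options.AddSlidingWindowLimiter("login", o =>
    {
        o.PermitLimit = 10;
        o.Window = TimeSpan.FromMinutes(1);
        o.SegmentsPerWindow = 6;
    });

    // Token bucket: allows short bursts while enforcing a sustainable average.
    options.AddTokenBucketLimiter("api", o =>
    {
        o.TokenLimit = 100;
        o.TokensPerPeriod = 20;
        o.ReplenishmentPeriod = TimeSpan.FromSeconds(10);
    });
});

app.UseRateLimiter();
```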

The policies can then be applied to endpoints or endpoint groups:
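Assuming policies registered under the illustrative names "login" and "api", the attachment looks like this (`LoginHandler` and `ExportController` are hypothetical):

```csharp
// Minimal API endpoint:
app.MapPost("/login", LoginHandler).RequireRateLimiting("login");

// Controller or action via attribute:
[EnableRateLimiting("api")]
public class ExportController : Controller { /* ... */ }
```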

Partitioning and the right key

The quality of the policy depends heavily on how traffic is partitioned. Limiting only by IP is better than nothing, but behind NAT, reverse proxies, or shared corporate networks it can be too coarse. In some cases it makes more sense to combine IP address, user identity, client ID, or tenant ID. The important part is not relying on easily spoofed headers unless they have already been sanitized correctly by the upstream proxy.
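A combined partitioning strategy can be sketched with a global limiter; the limits are illustrative, and the IP is assumed to be trustworthy only because forwarded headers were processed upstream:

```csharp
builder.Services.AddRateLimiter(options =>
{
    // Partition by authenticated user when available, otherwise by client IP.
    options.GlobalLimiter = PartitionedRateLimiter.Create<HttpContext, string>(context =>
    {
        var key = context.User.Identity?.IsAuthenticated == true
            ? context.User.Identity.Name!
            : context.Connection.RemoteIpAddress?.ToString() ?? "unknown";

        return RateLimitPartition.GetFixedWindowLimiter(key, _ =>
            new FixedWindowRateLimiterOptions
            {
                PermitLimit = 100,
                Window = TimeSpan.FromMinutes(1)
            });
    });
});
```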

Consistent responses and Retry-After

When a request is rejected, it is useful to return a clear 429 response and, whenever possible, a Retry-After header. If the application uses Problem Details, this case should be aligned with the rest of the API so that throttling errors do not end up using a completely different response format.
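The rejection behavior can be configured on the same options object used to register the policies:

```csharp
builder.Services.AddRateLimiter(options =>
{
    options.RejectionStatusCode = StatusCodes.Status429TooManyRequests;

    options.OnRejected = (context, cancellationToken) =>
    {
        // Surface the suggested wait time when the limiter provides one.
        if (context.Lease.TryGetMetadata(MetadataName.RetryAfter, out var retryAfter))
        {
            context.HttpContext.Response.Headers.RetryAfter =
                ((int)retryAfter.TotalSeconds).ToString();
        }
        return ValueTask.CompletedTask;
    };
});
```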

HTTP hardening: HSTS, CSP, and Security Headers

HTTP headers do not replace application security, but they help reduce preventable risks and enforce stricter browser behavior. In ASP.NET Core, the ones that deserve the most attention are at least HSTS and CSP.

HSTS

HSTS tells the browser to use HTTPS only for the target domain, preventing accidental downgrade and plain HTTP access attempts after the first visit. In production it makes a lot of sense, provided that the application is genuinely ready to live entirely over HTTPS.

It is wise to be cautious with IncludeSubDomains and especially with preload. An overly aggressive configuration can create more problems than benefits if the infrastructure is not fully consistent.
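A conservative configuration, reflecting exactly that caution:

```csharp
builder.Services.AddHsts(options =>
{
    options.MaxAge = TimeSpan.FromDays(365);
    options.IncludeSubDomains = false; // enable only once every subdomain is HTTPS-ready
    options.Preload = false;           // opt in deliberately: preload is very hard to undo
});

// HSTS should not be active during local development.
if (!app.Environment.IsDevelopment())
{
    app.UseHsts();
}
```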

Content-Security-Policy

CSP is one of the most effective tools for reducing the impact of XSS and unwanted resource loading, but it only works well when it is written carefully. A permissive policy that allows 'unsafe-inline' almost everywhere does not buy you much. Building a good CSP often requires some cleanup work on the frontend, especially in legacy applications or older Razor views.

A minimal example, to be adapted to the specific application, could look like this:
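Here is one possible sketch using inline middleware; the directive values are intentionally strict and will need to be relaxed per application:

```csharp
// Minimal CSP: same-origin by default, no inline scripts, no framing.
app.Use(async (context, next) =>
{
    context.Response.Headers["Content-Security-Policy"] =
        "default-src 'self'; " +
        "script-src 'self'; " +
        "img-src 'self' data:; " +
        "frame-ancestors 'none'";
    await next();
});
```

Starting strict and loosening only where the frontend genuinely requires it is far easier than trying to tighten a permissive policy later.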

Regarding X-Frame-Options, it is worth noting that CSP’s frame-ancestors directive is more modern and more flexible. Keeping both is not usually a problem, but CSP should gradually become the primary control.

Forwarded headers and reverse proxy

If the application runs behind Nginx, Apache, YARP, Azure Front Door, or another reverse proxy, forwarded header configuration becomes critical. A bad configuration can alter the request scheme, client IP, and other values later used by authentication, redirects, logging, and rate limiting.

ASP.NET Core should be configured carefully, restricting trusted proxies and trusted networks. Blindly accepting forwarded header chains is a good way to end up trusting falsified client data.
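A restrictive configuration could look like this, where the proxy address is illustrative:

```csharp
builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders =
        ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;

    // Trust only the known reverse proxy; everything else is ignored.
    options.KnownProxies.Add(IPAddress.Parse("10.0.0.10"));
});

// Must run before authentication and anything that reads scheme or client IP.
app.UseForwardedHeaders();
```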

This also affects IP-based rate limiting: if forwarded headers are not trustworthy, you end up limiting the proxy instead of the real client, or worse, accepting manipulated IP data.

Authentication cookies

When the application relies on authentication cookies, some settings should be treated as sensible defaults: Secure, HttpOnly, and a SameSite policy that matches the actual login flow. This may not be the most glamorous part of security, but it remains one of the most useful.

SameSite=Strict is more restrictive, but it is not always compatible with every login or navigation flow. Lax is often a pragmatic compromise for the authentication cookie, while more sensitive cookies may deserve stricter settings.
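For cookie authentication, those defaults translate into something like the following (the scheme setup assumes plain cookie authentication rather than Identity):

```csharp
builder.Services
    .AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddCookie(options =>
    {
        options.Cookie.HttpOnly = true;                          // not readable from JavaScript
        options.Cookie.SecurePolicy = CookieSecurePolicy.Always; // HTTPS only
        options.Cookie.SameSite = SameSiteMode.Lax;              // pragmatic default for login flows
    });
```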

Audit trail: not just for compliance

Recording security-relevant events consistently helps clarify what really happened when something goes wrong. Successful logins, denied access, accepted or rejected uploads, jobs skipped for security reasons, application errors with trace identifiers: all of this is useful both during incident response and during ordinary maintenance.

The important part is avoiding noisy, low-value audit logs. A small number of meaningful events with useful and consistent metadata is much better than an endless stream of barely readable records. Needless to say, audit data should never contain secret values or unnecessary sensitive content.

Conclusions

In many projects, the topics covered in this article are addressed late, often only after the first incident or after a security review highlights issues very similar to the ones described above. Waiting for that moment can be costly. It is far better to move earlier, starting with restrictive policies, following the approach often described as Security by Default, and relaxing them only where there is a real need. In many cases, a relatively small package of countermeasures is enough: allowlists for uploads, ideally with a scanner integrated into the flow; global antiforgery protection for state-changing requests; properly configured Data Protection; explicit HTTP headers; rate limiting for sensitive endpoints; and, whenever possible, serious secret handling. When implemented correctly, these measures prevent a surprising number of problems and headaches.

The good news is that, as this article has shown, ASP.NET Core already provides most of the necessary tools. As is often the case, the real difference lies in configuration choices and in the discipline required to keep those choices intact as the application grows. Many application vulnerabilities do not come from a single major mistake, but from a series of small security concessions accumulated over time. That is exactly what these best practices are meant to prevent.
