
I have a .NET Core MVC web application that uses Azure AD B2C authentication (via OpenID Connect). It works correctly when I run it against localhost, but when I deploy the solution to Kubernetes and try to log in, I get the following error:

Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware[1]
      An unhandled exception has occurred while executing the request.
System.Exception: An error was encountered while handling the remote login.
 ---> System.Exception: Unable to unprotect the message.State.
   --- End of inner exception stack trace ---
   at Microsoft.AspNetCore.Authentication.RemoteAuthenticationHandler`1.HandleRequestAsync()
   at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
   at Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware.<Invoke>g__Awaited|6_0(ExceptionHandlerMiddleware middleware, HttpContext context, Task task)

I have set up an NGINX Ingress with SSL that forwards traffic to the service in Kubernetes, so it acts as a reverse proxy within the cluster.

To ensure that the request's original scheme and host are retained, I have added the following to Startup.cs:

    // In ConfigureServices:
    services.Configure<ForwardedHeadersOptions>(options =>
    {
        options.ForwardedHeaders =
            ForwardedHeaders.XForwardedProto | ForwardedHeaders.XForwardedHost;
        options.KnownNetworks.Clear();
        options.KnownProxies.Clear();
    });

    // In Configure:
    app.UseForwardedHeaders();
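
UseForwardedHeaders is registered before the authentication middleware; a simplified sketch of the Configure method is below (only UseForwardedHeaders and UseAuthentication reflect my actual setup, the rest is just the standard ASP.NET Core 3.x template shape):

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        // Forwarded headers must be handled before anything that relies on the
        // original scheme/host, in particular the OpenID Connect handler.
        app.UseForwardedHeaders();

        app.UseHttpsRedirection();
        app.UseStaticFiles();
        app.UseRouting();

        app.UseAuthentication();
        app.UseAuthorization();

        app.UseEndpoints(endpoints => endpoints.MapDefaultControllerRoute());
    }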

I have also added the following annotations to my Ingress:

    nginx.ingress.kubernetes.io/proxy_http_version: "1.1"
    nginx.ingress.kubernetes.io/proxy_set_header: "Upgrade $http_upgrade"
    nginx.ingress.kubernetes.io/proxy_set_header: "Connection keep-alive"
    nginx.ingress.kubernetes.io/proxy_set_header: "Host $host"
    nginx.ingress.kubernetes.io/proxy_cache_bypass: "$http_upgrade"
    nginx.ingress.kubernetes.io/proxy_set_header: "X-Forwarded-For $proxy_add_x_forwarded_for"
    nginx.ingress.kubernetes.io/proxy_set_header: "X-Forwarded-Proto $scheme"
    nginx.ingress.kubernetes.io/proxy_buffers: "16 16k"
    nginx.ingress.kubernetes.io/proxy_buffer_size: "32k"

I've also made sure that the reply URLs have been correctly configured in Azure.

Is there a step I am missing when configuring the Ingress (NGINX) that could cause this issue?

JPlatt99
  • Hi @JPlatt99. Does this only occur when you're hosting multiple pods? I believe this might be occurring because the data protection key (which is used to protect and unprotect the state data) isn't shared across pods by default. E.g. see [here](https://coding4dummies.net/load-balanced-asp-net-core-application-with-docker-mongodb-and-redis-pt-6-749deca700b0) – Chris Padgett Mar 02 '20 at 09:15

2 Answers


I had the same problem; adding data protection fixed it:

    // Shares the data protection key ring via Redis and protects it with a
    // certificate from Key Vault, so any pod can unprotect state created by
    // another pod.
    // Note: GetConfiguredSettings, GetX509CertificateAsync, VaultSettings and
    // RedisSettings appear to be custom helpers rather than framework APIs;
    // env and Logger are presumably members of the containing class (not shown).
    private void AddDataProtection(IServiceCollection services, IConfiguration configuration)
    {
        var serviceProvider = services.BuildServiceProvider();
        var kvClient = serviceProvider.GetRequiredService<IKeyVaultClient>();
        var vaultSettings = configuration.GetConfiguredSettings<VaultSettings>();
        var redisSettings = configuration.GetConfiguredSettings<RedisSettings>();
        var redisAccessKey = kvClient.GetSecretAsync(vaultSettings.VaultUrl, redisSettings.AccessKeySecretName)
            .GetAwaiter().GetResult().Value;
        var connectionMultiplexer = ConnectionMultiplexer.Connect(new ConfigurationOptions()
        {
            EndPoints = { redisSettings.Endpoint },
            Password = redisAccessKey,
            Ssl = true,
            AbortOnConnectFail = false
        });

        // Every replica builds the same key name, so they all read and write the same key ring.
        var key = $"{env.ApplicationName}::{env.EnvironmentName}::DataProtection::Keys";
        Logger.LogInformation($"protect data using key={key}, cert='{redisSettings.ProtectionCertSecretName}'");

        var x509 = kvClient.GetX509CertificateAsync(vaultSettings.VaultUrl, redisSettings.ProtectionCertSecretName)
            .GetAwaiter().GetResult();

        services.AddDataProtection()
            .PersistKeysToRedis(connectionMultiplexer, key)
            .ProtectKeysWithCertificate(x509);
    }
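
For context, a sketch of how this helper might be wired up (the call site is not part of the original answer; Configuration is assumed to be the usual Startup property):

    public void ConfigureServices(IServiceCollection services)
    {
        // ... MVC, OpenID Connect authentication, etc. ...

        // Register shared data protection so every replica uses the same key ring.
        AddDataProtection(services, Configuration);
    }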
user2529654
  • Can you add some explanation as to how and why this answer works? See https://stackoverflow.com/help/how-to-answer – Marcello B. Jul 16 '20 at 05:20
  • As explained by JPlatt99, the error only occurs when there are multiple replicas: during authentication, when the user is redirected back to signin-oidc, the state cannot be decrypted by a different instance. Adding distributed data protection with the same key resolved the problem. – user2529654 Jul 17 '20 at 07:30

As Chris Padgett suggested, it was due to running multiple pods; scaling down to one replica fixed the issue. I'll have to investigate sharing the data protection key between pods; one way of doing that is sketched below.
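
For anyone who wants to keep multiple replicas instead, a minimal sketch of sharing the key ring through a volume mounted into every pod (the path and application name below are placeholders, not values from my setup):

    // In ConfigureServices. Requires the Microsoft.AspNetCore.DataProtection package
    // and a volume (e.g. an Azure Files PVC) mounted at the same path in every replica.
    services.AddDataProtection()
        .SetApplicationName("my-app")                                          // must be identical on all replicas
        .PersistKeysToFileSystem(new System.IO.DirectoryInfo("/mnt/dp-keys")); // shared key ring location

The other answer here shows the equivalent idea using Redis and Key Vault.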

A side note for anyone reading: this Ingress will still give you errors with OIDC due to the headers being too large, because the following annotations are incorrect (they use underscores instead of hyphens):

    nginx.ingress.kubernetes.io/proxy_buffers: "16 16k"
    nginx.ingress.kubernetes.io/proxy_buffer_size: "32k"

It should instead be

    nginx.ingress.kubernetes.io/proxy-buffers: "16 16k"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "32k"
JPlatt99
  • It seems you have entered the same before and after values: nginx.ingress.kubernetes.io/proxy_buffers: "16 16k" and nginx.ingress.kubernetes.io/proxy_buffer_size: "32k". Can you provide what the new values of these properties should be? – user2142938 Aug 05 '23 at 08:42