We are having a bunch of problems (read: long response times) with a couple of projects in production and wanted to see exactly what was happening on the server, so I added Application Insights to all of our projects by following this article. The problem is that neither of our WebAPI projects is sending server data to the Azure portal, while all the other projects (MVC 5) are.

This is what is shown when I access the corresponding Application Insights blade on Azure:

[Screenshot: the Application Insights blade in the Azure portal, showing no server data received.]

I tried disabling and re-enabling data collection in the Application Insights Status Monitor on our Azure VMs, and restarted IIS a few times, all while making requests to the API, to no avail. When I enable it on an MVC project, I can see the data almost instantly on the Azure portal when I open pages on the site.

When I saw that data was not being sent from our Azure VMs for these specific projects, I tried to set up the same collection in our dev environment, which is hosted on our own infrastructure, and the exact same situation repeated itself, ruling out the possibility that this is related to the projects being hosted in Azure VMs.

I'm not exactly sure what is preventing these projects from sending data to Azure, but comparing the working projects with the non-working ones, I think it might somehow be related to the fact that our WebAPI projects use the new OWIN pipeline while the MVC ones are standard MVC projects. I checked both the web.config file and the bin folder for both project types, and they seem to be modified correctly by the Insights Monitor (I can see the same new DLLs added to the bin folder and the same HTTP module added to the web.config).

With that in mind, how do I enable server side telemetry using Application Insights for WebAPI projects that rely on the OWIN/Katana pipeline? What could I do to find out what exactly is causing the project to not send data to Azure in this case?

abatishchev
julealgon

3 Answers

This is an old question, but it was still in the top 3 results for searches like "web api application insights owin". After lots of searching, and not finding many answers that didn't require us to write our own middleware or explicitly instrument everything, we came across an extension package that made things super simple:

Here's the GitHub repository for it and the associated NuGet package, ApplicationInsights.OwinExtensions.

For those too lazy to look at the links, all that needed to be added was:

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.UseApplicationInsights();

        // rest of the config here...
    }
}

and add this to your ApplicationInsights.config:

<TelemetryInitializers>
    <!-- other initializers.. -->
    <Add Type="ApplicationInsights.OwinExtensions.OperationIdTelemetryInitializer, ApplicationInsights.OwinExtensions"/>
</TelemetryInitializers>
rfcdejong
Norrec
AI uses an HttpModule to collect information on BeginRequest and send it on EndRequest. As described here, Owin/Katana uses middleware to execute logic at different stages. Since most of the AI auto-collection logic is internal, you cannot reuse it in your middleware, but you can instrument your code yourself: create a TelemetryClient in your code and start sending Requests, Traces, and Exceptions (as described here).
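A minimal sketch of that manual approach (types come from the Microsoft.ApplicationInsights NuGet package; the request name and timings are illustrative):

using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

var telemetry = new TelemetryClient();

// Track a server request by hand (name, start time, duration, response code, success)
telemetry.TrackRequest(new RequestTelemetry(
    "GET /api/values", DateTimeOffset.Now,
    TimeSpan.FromMilliseconds(120), "200", success: true));

// Traces and exceptions
telemetry.TrackTrace("Processing started");
try
{
    // ... your code ...
}
catch (Exception ex)
{
    telemetry.TrackException(ex);
    throw;
}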

Anastasia Black
  • 1,890
  • 9
  • 21
  • So basically you are saying that it's not possible to integrate AI in a WebAPI project that uses OWIN/Katana without code modifications? I'd really like to avoid that, since what we need now is just a temporary profiling to detect and fix the slowdown and then remove AI from the projects again. If code modifications are needed, it means we will have to deploy an entirely new version to production just to diagnose the problem, which is a big hassle at the moment. – julealgon Apr 07 '15 at 14:14
  • 1
    AI is not a profiler. It is an SDK for code instrumentation. There is out-of-the-box support for ASP.NET applications that use the regular IIS stack. If onBegin and onEnd are not called, AI code is not invoked. – Anastasia Black Apr 09 '15 at 06:28
  • I understand that, but our current scenario would greatly benefit from the Application Insights Monitor telemetry on a live environment. If we go the code approach now, it means we will have to come up with a plan on how to manage the keys over all different environments, how to set them, where in the code this will need to be, etc. Unfortunately we don't have time for these tasks right now and we need to solve the problem as quick as possible, so it's very sad that this process does not work. Are you from the dev team? Have you considered supporting katana out of the box somehow? – julealgon Apr 09 '15 at 15:09
  • There are no current plans to support it. I would suggest to add the ask to [link](http://visualstudio.uservoice.com/forums/121579-visual-studio/category/77108-application-insights) so if it gets more votes it will be added to the backlog. Though I doubt that you can register middleware in config so I would expect that support will be added only for greenfield/brownfield applications. – Anastasia Black Apr 10 '15 at 17:46
  • 1
    Biggest problem is the lack of available documentation about this subject. – Thibault D. Oct 30 '15 at 10:54
  • There are several places where you can read about application insights. 1. Getting started and feature overview here: https://azure.microsoft.com/en-us/documentation/articles/app-insights-get-started/ 2. We try to cover all new features on the blog post here: https://azure.microsoft.com/en-us/blog/tag/application-insights/ 3. And even more tips here: http://apmtips.com/. Also we are gradually moving to open source. Core API and channels are already there: https://github.com/Microsoft/ApplicationInsights-dotnet and contributions are welcome. Hope this helps. – Anastasia Black Nov 01 '15 at 04:14
  • 5
    It smells to me like the AI team are expecting the community to pick up the slack around OWIN/Katana and AI integration and I kinda get that. For me, the problem was discovering that AI would 'just work' for my traditional WebAPIs and not for my OWIN ones. – Luke Puplett Nov 17 '15 at 13:47
  • _As described here Owin/Katana uses middelwares to execute logic on a different stages_ — if you run OWIN/Katana on IIS, it uses HttpModules as well, starting from the Authentication stage of the request. The AI HTTP module *Microsoft.ApplicationInsights.Web.ApplicationInsightsHttpModule* uses BeginRequest to track requests, so I think it should all work if you add the module to the web.config. – Ilya Sep 20 '17 at 13:18
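For reference, registering that module in web.config would look roughly like the fragment below. This is a sketch: the Status Monitor normally adds this entry for you, and the exact name and assembly may differ between SDK versions.

<system.webServer>
  <modules>
    <add name="ApplicationInsightsWebTracking"
         type="Microsoft.ApplicationInsights.Web.ApplicationInsightsHttpModule, Microsoft.AI.Web"
         preCondition="managedHandler" />
  </modules>
</system.webServer>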
Below is our implementation of an OWIN middleware for Application Insights.

// Namespaces required by the middleware below; the ApplicationInsights types
// come from the Microsoft.ApplicationInsights NuGet package.
using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.Owin;
using Owin;

/// <summary>
/// Extensions to help adding middleware to the OWIN pipeline
/// </summary>
public static class OwinExtensions
{
    /// <summary>
    /// Add Application Insight Request Tracking to the OWIN pipeline
    /// </summary>
    /// <param name="app"><see cref="IAppBuilder"/></param>
    public static void UseApplicationInsights(this IAppBuilder app) => app.Use(typeof(ApplicationInsights));

}

/// <summary>
/// Allows for tracking requests via Application Insight
/// </summary>
public class ApplicationInsights : OwinMiddleware
{

    /// <summary>
    /// Allows for tracking requests via Application Insight
    /// </summary>
    /// <param name="next"><see cref="OwinMiddleware"/></param>
    public ApplicationInsights(OwinMiddleware next) : base(next)
    {
    }

    /// <summary>
    /// Tracks the request and sends telemetry to application insights
    /// </summary>
    /// <param name="context"><see cref="IOwinContext"/></param>
    /// <returns></returns>
    public override async Task Invoke(IOwinContext context)
    {
        // Start Time Tracking
        var sw = new Stopwatch();
        var startTime = DateTimeOffset.Now;
        sw.Start();

        await Next.Invoke(context);

        // Send tracking to AI on request completion
        sw.Stop();

        var request = new RequestTelemetry(
            name: context.Request.Path.Value,
            startTime: startTime,
            duration: sw.Elapsed,
            responseCode: context.Response.StatusCode.ToString(),
            success: context.Response.StatusCode >= 200 && context.Response.StatusCode < 300
            )
        {
            Url = context.Request.Uri,
            HttpMethod = context.Request.Method
        };

        var client = new TelemetryClient();
        client.TrackRequest(request);

    }
}
Phillsta
  • Looks pretty good, clean and simple. I don't see any key property though; how does the AI server know which application to add the telemetry to? Is it based on that `Name` parameter on the telemetry request? – julealgon Mar 13 '16 at 12:27
  • @julealgon The instrumentation key is read from the ApplicationInsights.config file. I tried this solution, and it worked perfectly! – Tuukka Haapaniemi Apr 08 '16 at 12:51
  • 1
    @julealgon You can use the [TelemetryClient(TelemetryConfiguration)](https://learn.microsoft.com/en-us/dotnet/api/microsoft.applicationinsights.telemetryclient?view=azure-dotnet) constructor OR just set the `InstrumentationKey` property on the TelemetryClient object – GiriB Jul 16 '18 at 16:55
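  • Putting the two suggestions above together, supplying the key in code instead of ApplicationInsights.config might look like this sketch (the GUID is a placeholder for your own instrumentation key):

    using Microsoft.ApplicationInsights;
    using Microsoft.ApplicationInsights.Extensibility;

    // Option 1: set the key on the active configuration (applies to all clients)
    TelemetryConfiguration.Active.InstrumentationKey = "<your-instrumentation-key>";
    var client = new TelemetryClient();

    // Option 2: set the key on a specific client instance
    var client2 = new TelemetryClient { InstrumentationKey = "<your-instrumentation-key>" };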