I know about Redgate's ANTS Profiler, which I hear is awesome, but it's also $400. Are there any cheaper alternatives out there that will at least point me to where I might need to optimize my code?
See http://stackoverflow.com/questions/911932/where-can-i-find-a-profiler-for-c-applications-to-be-used-in-visual-studio-2008 for further discussion and options. – Amal Sirisena Jul 18 '09 at 18:24
4 Answers
EQATEC Profiler is free.
I haven't tried it myself, but it sounds ok and there are some positive testimonials on their site.
I'd be interested to hear the opinion of anyone who has actually used it.

It's simple but gets the job done. And it's free (for personal use), which is great. – Dominic K Aug 29 '10 at 00:22
I quickly gave up on AMD's CodeAnalyst because I couldn't work out how to get an analysis of "the total time spent in each method". So I tried the free version of EQATEC. It works well for me... told me exactly where my problem was in about five minutes... including registering, downloading, installing, configuring, and running my first analysis. Ergo: it's really easy to use. How did we get anything done before Google? – corlettk Sep 09 '12 at 05:05
dotTrace is about half the price of ANTS, and it's really good. It's made by JetBrains, the same people who make ReSharper.
If you're just looking for a one-off optimization of your code, then you should go for Ants anyway, since it has a full-featured 15-day free trial, which should be enough to get a lot of optimization done.

I use dotTrace as well and would definitely second this recommendation. – Amal Sirisena Jul 18 '09 at 18:22
The Visual Studio profiler ships with VS and works pretty well. If you are looking at memory-related issues, then CLRProfiler is your option.

Correct me if I'm wrong, but I believe this is only available with a Team edition of Visual Studio. – womp Jul 18 '09 at 20:03
In general, the method I use is this.
I'm not so much interested in timing pieces of the code as in finding big unnecessary time-takers so I can clean them out and accomplish speedup.
It's really a different process.
ADDED: If I can elaborate, typical performance problems I see are that some activity (which is nearly always a function call) is consuming some fraction of time, like 10%, 50%, 90%, whatever, and it is not really necessary - it can be replaced with something else or not done at all, and that amount of time will be saved.
Suppose for illustration it's 50%.
I take random-time samples of the call stack, 10 for example, and that call has a 50% chance of appearing on each one, so it will be on roughly half of the samples. Thus it will attract my attention, and I will look to see if what it is doing is really necessary, and if not, I will fix it to get the speedup.
Now, was that measuring? If so, it was really poor measurement, because the number of samples was so small. If 5 out of 10 samples showed the call, the fraction of time is probably around 50%, give or take, and it's definitely more than 10%. So I may not know the percent with precision, but I definitely know it is worth fixing, and I definitely know exactly where the problem is.
(Side note: I did not count the number of calls, or estimate the call duration. Rather, I estimated the cost of the call, which is what removing it would save, which is its fractional residence time on the stack. Also notice that I am working at the call level, not the function level. I may care what function calls are above and below the call of interest, but other than that, function-level issues, such as exclusive time, call graphs, and recursion, play no part.)
That's why I say measuring performance, and finding performance problems, while they may be complementary, are really different tasks.
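The statistics behind this can be sketched with a short simulation (Python here rather than .NET, purely for brevity; the numbers are the illustrative ones from the answer, not measurements). If a call sits on the stack for fraction f of the run time, each random-time stack sample shows it with probability f, so the hit count over n samples is binomial, and even 10 samples make a 50% cost very hard to miss:

```python
import random

def sample_stack(cost_fraction, n_samples, rng):
    """Simulate n random-time stack samples; each sample shows the
    costly call with probability equal to its fractional stack
    residence time."""
    return sum(rng.random() < cost_fraction for _ in range(n_samples))

rng = random.Random(42)

# A call responsible for 50% of run time, observed with 10 samples,
# repeated over many simulated profiling sessions:
trials = [sample_stack(0.5, 10, rng) for _ in range(10_000)]

# On average about half the samples show the call...
mean_hits = sum(trials) / len(trials)

# ...and it almost never hides entirely: P(0 hits) = 0.5**10 ~ 0.001,
# so out of 10,000 sessions only a handful show zero hits.
misses = sum(1 for t in trials if t == 0)

print(f"mean hits out of 10: {mean_hits:.2f}")  # close to 5
print(f"sessions with 0 hits: {misses}")
```

This is why the answer says the method is poor *measurement* but excellent *problem location*: the estimate of the percentage is coarse, but the probability that a big time-taker escapes a handful of samples is tiny.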
