3

I am writing an application that needs to run at incredibly low processor speeds. The application creates and destroys memory in creative ways throughout its run, and it works just fine. What compiler optimizations occur, so that I can try to build for them?

One trick offhand is that the CLR handles arrays much faster than lists, so if you need to handle a ton of elements in a List, you may be better off calling ToArray() and working with the array rather than calling ElementAt() again and again.
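To illustrate the trick above, a minimal sketch (the list size and method names are hypothetical; note that `Enumerable.ElementAt` does use the indexer for `IList<T>`, so the difference is the per-call extension-method overhead rather than a full enumeration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class ListVsArray
{
    // Sums by calling ElementAt() for each index: each call re-enters
    // the LINQ extension method machinery.
    public static long SumViaElementAt(List<int> list)
    {
        long sum = 0;
        for (int i = 0; i < list.Count; i++)
            sum += list.ElementAt(i);
        return sum;
    }

    // Copies once with ToArray(), then indexes the array directly.
    public static long SumViaArray(List<int> list)
    {
        int[] array = list.ToArray();
        long sum = 0;
        for (int i = 0; i < array.Length; i++)
            sum += array[i];
        return sum;
    }

    static void Main()
    {
        var list = new List<int>(Enumerable.Range(0, 1000));
        Console.WriteLine(SumViaElementAt(list) == SumViaArray(list)); // prints "True"
    }
}
```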

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Dested
  • 6,294
  • 12
  • 51
  • 73
  • 4
    I'm not quite sure you can *defeat* the CLR by bypassing its optimizations. How's about you explain exactly why you need to run "at low speeds" and what exactly that means. A possible solution might be as simple as suspending the active thread... – Yuval Adam Mar 09 '10 at 11:21
  • I'm not trying to bypass its optimizations, I'm trying to utilize them as much as possible. My problem is non-specific. – Dested Mar 09 '10 at 11:22
  • Um... you need the application to run at **low** speeds? I'm sure there are any number of creative approaches, apart from the blindingly obvious `Thread.Sleep()`. But you apparently meant the opposite... – Pontus Gagge Mar 09 '10 at 11:25
  • 4
    If your problem is non-specific then there is no specific answer to your question. If you want to increase performance - profile and fix bottlenecks. If you want your program to run slowly (?!) - constantly suspend the thread to give other processes CPU time. – Yuval Adam Mar 09 '10 at 11:25
  • 7
    Do you mean high speeds? Low speed means it will run slowly and take forever to do anything (like a car traveling at low speeds will take a while to get from point A to point B), whereas a high speed will process data quickly (it will go from point A to B in a short period of time). – Grant Peters Mar 09 '10 at 11:26
  • 1
    List is implemented using an array, so you get the optimizations even for lists. – Brian Rasmussen Mar 09 '10 at 11:59

7 Answers

8

Build your system, run it, then attach a profiler to see what's slow. Then use Stack Overflow, Google, and common sense to speed those areas up.

The most important thing is to not waste time speeding up things that actually don't matter, so profiling is very important.
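In the same spirit, a quick sketch: `Stopwatch` is no substitute for a real profiler (a profiler tells you *where* the time goes), but it is a handy way to confirm or refute a suspected hotspot once you have found one. The workload here is a made-up example:

```csharp
using System;
using System.Diagnostics;

static class TimingSketch
{
    // A stand-in for the code you suspect is slow.
    public static long SumTo(int n)
    {
        long sum = 0;
        for (int i = 0; i < n; i++)
            sum += i;
        return sum;
    }

    static void Main()
    {
        // Time the suspected hotspot in isolation.
        var sw = Stopwatch.StartNew();
        long result = SumTo(1000000);
        sw.Stop();
        Console.WriteLine("result=" + result + ", elapsed=" + sw.ElapsedMilliseconds + "ms");
    }
}
```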

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Rob Fonseca-Ensor
  • 15,510
  • 44
  • 57
6

You possibly mean HIGH speeds, not low speeds.

Wrong language. For total optimization you need something lower level. Mostly not needed, though.

Note, BTW, that your characterization of arrays and lists is wrong... List is, depending on which one you choose, a linked list, so it has different performance characteristics than an array. But that is not a CLR / runtime thing.

Besides the StringBuilder - my main advice is: use a profiler. Most people try to be smart about speed, but never profile, so they spend a lot of time later on useless optimizations - they get faster, but it's bad bang for the buck. Find out first where the application actually spends its time.

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
TomTom
  • 61,059
  • 10
  • 88
  • 148
2

If there is a lot of string manipulation in your application, use StringBuilder instead of string. The performance of the application will increase considerably.

Also replace string concatenations (+ operator) with StringBuilder.
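A minimal sketch of the replacement being suggested (the counts and method names are made up for illustration; the `+=` version allocates a new string on every iteration, while `StringBuilder` appends into a growable buffer):

```csharp
using System;
using System.Text;

static class ConcatSketch
{
    // Builds "012...99" with the + operator: every += allocates a new string.
    public static string ConcatWithPlus(int count)
    {
        string s = "";
        for (int i = 0; i < count; i++)
            s += i.ToString();
        return s;
    }

    // Same result, appended into StringBuilder's internal buffer instead.
    public static string ConcatWithBuilder(int count)
    {
        var sb = new StringBuilder();
        for (int i = 0; i < count; i++)
            sb.Append(i);
        return sb.ToString();
    }

    static void Main()
    {
        Console.WriteLine(ConcatWithPlus(100) == ConcatWithBuilder(100)); // prints "True"
    }
}
```

As the comments below note, this only pays off for repeated concatenation; profile before and after.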

Specific for Windows Forms .NET, turn off DataGridView.AutoSizeColumnsMode and AutoSizeRowMode.

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
HotTester
  • 5,620
  • 15
  • 63
  • 97
  • 1
    Always profile before and after! Switching to `StringBuilder` will not always speed things up. The compiler is quite good at working with strings. – Mike Two Mar 09 '10 at 11:25
  • I wouldn't say 'always'. See this interesting article for instance: http://www.heikniemi.net/hardcoded/2004/08/net-string-vs-stringbuilder-concatenation-performance/ – Steven Mar 09 '10 at 11:27
2

It is usually pretty rare that you need to go down to this level, but I'm doing something pretty similar at the moment; some thoughts:

  • do you have any buffers in regular use? Have you considered pooling them? i.e. so that instead of creating a new one you ask a pool to get (or create) one?
  • have you removed any reflection? (replacing with typed delegates, DynamicMethod, etc) - or taking it hardcore, Reflection.Emit?
  • have you considered unsafe? (used sparingly, in places where you have a measured need to do so)
  • (again, pretty low level) have you looked for anything silly in the IL? I recently found a lot of cycles (in some specific code) were being spent shuffling things around on the stack to pass parameters appropriately. By changing the code (and, admittedly, doing some custom IL) I've eliminated all the unnecessary (shuffling) stloc/ldloc, and removed almost all of the locals (reducing the stack size in the process)
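The first bullet, buffer pooling, can be sketched roughly like this (the class and sizes are hypothetical; in later framework versions `System.Buffers.ArrayPool<T>` does this for you):

```csharp
using System;
using System.Collections.Concurrent;

// Reuse byte[] buffers instead of allocating a new one each time,
// reducing GC pressure when buffers are rented and returned frequently.
class BufferPool
{
    private readonly ConcurrentBag<byte[]> _pool = new ConcurrentBag<byte[]>();
    private readonly int _bufferSize;

    public BufferPool(int bufferSize)
    {
        _bufferSize = bufferSize;
    }

    // Hand out a pooled buffer if one is available; otherwise create one.
    public byte[] Rent()
    {
        byte[] buffer;
        return _pool.TryTake(out buffer) ? buffer : new byte[_bufferSize];
    }

    // Put a buffer back so the next Rent() can reuse it.
    public void Return(byte[] buffer)
    {
        if (buffer.Length == _bufferSize)
            _pool.Add(buffer);
    }
}

class Program
{
    static void Main()
    {
        var pool = new BufferPool(4096);
        byte[] a = pool.Rent();
        pool.Return(a);
        byte[] b = pool.Rent(); // the same buffer comes back
        Console.WriteLine(ReferenceEquals(a, b)); // prints "True"
    }
}
```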

But honestly, you need to profile here. Focus on things that actually matter, or where you know you have a problem to fix.

Marc Gravell
  • 1,026,079
  • 266
  • 2,566
  • 2,900
  • Are you sure that your custom IL even made a difference? The people who wrote the C# compiler might know that the JIT would handle that specific case. – Jørgen Fogh May 04 '10 at 13:39
  • @Jørgen - well, I need to use `ILGenerator` *anyway* due to the meta-programming nature of the task. Might as well squeeze them in. And in answer; the C# compiler would be *forced* to use extra locals in this scenario, taking *infinitesimally* more stack space. I'd be very surprised if the JIT unpicked it. – Marc Gravell May 04 '10 at 15:37
1

You mention arrays being faster than Lists. The CLR will actually do bounds checking for you when you are accessing an array. So, you may be able to gain a bit more performance by using the unsafe keyword and then accessing the array using pointer arithmetic. Only do this if you actually need it - and you can measure a performance improvement for your specific scenario.
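A minimal sketch of what that looks like (requires compiling with the `/unsafe` option; note that the JIT can already eliminate bounds checks in a simple `for (i < arr.Length)` loop, which is one more reason to measure first):

```csharp
using System;

static class UnsafeSum
{
    // Sums via a pinned pointer, bypassing the per-access bounds check.
    public static unsafe long Sum(int[] data)
    {
        long sum = 0;
        fixed (int* p = data) // pin the array so the GC cannot move it
        {
            for (int i = 0; i < data.Length; i++)
                sum += p[i];
        }
        return sum;
    }

    static void Main()
    {
        Console.WriteLine(Sum(new[] { 1, 2, 3, 4, 5 })); // prints "15"
    }
}
```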

driis
  • 161,458
  • 45
  • 265
  • 341
1

When you ask about low-level optimizations (which nearly everyone does) you are basically guessing that those things will matter. Maybe at some level they will, but nearly everyone underestimates what can be saved by high-level optimization, which cannot be done by guessing.

It's like the proverbial iceberg, where what you can see is a tiny fraction of what's there.

Here's an example. I hesitate to say "use a profiler" because I don't.
Instead, I do this which, in my experience, works much better.

Community
  • 1
  • 1
Mike Dunlavey
  • 40,059
  • 14
  • 91
  • 135
0

Let the garbage collector (GC) do its work and do not interfere by calling GC.Collect directly. Doing so can hinder the built-in GC algorithm from running effectively, making it slower, and the call to GC.Collect itself can add unnecessary overhead.

For GDI+ specifically, call Invalidate to invalidate the client area or a specific rectangle of a control or form, instead of calling Refresh, which calls both Invalidate and Update to repaint the control.

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Fadrian Sudaman
  • 6,405
  • 21
  • 29