
I'm wondering if there is an (almost negligible) performance hit when creating a couple of additional classes to hold my business logic versus having extra methods in the current class.

If I spread the business logic across a couple of classes (let's call them 'sub-classes'), and the 'main' business logic class instantiates these 'sub-classes' when required (with the GC cleaning up memory when finished), the code would be neater, spread out logically, and thus easier to maintain, as opposed to putting everything in one 'main' class.
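To make that concrete, here's roughly the kind of split I mean (the class and method names below are made up purely for illustration):

```csharp
using System;

public class Order
{
    public decimal Subtotal { get; set; }
    public decimal Tax { get; set; }
}

// Small, focused 'sub-classes' (really just helper classes),
// each with a single job.
public class OrderValidator
{
    public void Validate(Order order)
    {
        if (order.Subtotal < 0)
            throw new ArgumentException("Subtotal cannot be negative.");
    }
}

public class TaxCalculator
{
    public decimal Calculate(Order order)
    {
        return order.Subtotal * 0.10m; // flat 10%, purely for illustration
    }
}

// The 'main' business logic class instantiates the helpers only when
// required; they become eligible for GC once Process() returns.
public class OrderProcessor
{
    public void Process(Order order)
    {
        var validator = new OrderValidator();
        validator.Validate(order);

        var calculator = new TaxCalculator();
        order.Tax = calculator.Calculate(order);
    }
}
```

The alternative would be OrderProcessor containing Validate() and CalculateTax() as extra methods in the one 'main' class.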

OpcodePete
  • If you're concerned about performance, you should measure it. – Brian Rasmussen Feb 07 '14 at 22:33
  • In the modern age of computers this is rarely something to worry about. If you were working in an environment with limited memory/processing power you would worry... – Simon Whitehead Feb 07 '14 at 22:39
  • Thanks Brian and Simon. I'm using Visual Studio 2013 and C#. If I instantiate one class, I can then use 1) Task Manager to review memory usage, and 2) the C# Stopwatch library, but I suspect this would be unnoticeable with both these tools. That's why I haven't measured anything yet. And I subscribe to Simon's theory. – OpcodePete Feb 07 '14 at 22:41
  • @BrianRasmussen: Actually, it's more like "If you're *having issues* with performance, you should measure it." It's C#, though: an object oriented language. Use classes. Just make sure to research IoC and DI. Dependencies need to be moved upward as far as possible. – Magus Feb 07 '14 at 22:41
  • Not the best question, since this might generate a lot of discussion. My advice is to avoid a monolithic class (aka God object). Try to adhere to good OOP and create classes that follow the Single Responsibility Principle. – Nathan R Feb 07 '14 at 22:42
  • @Magus the point I was trying to make was don't guess, measure. – Brian Rasmussen Feb 07 '14 at 22:46
  • I'm just saying that while that's true, even measuring it is most likely a waste of time. If it's not broken, you don't need to even consider how it's broken when you proceed to not fix it. – Magus Feb 07 '14 at 22:49
  • One class, one job. Easy to test, easy to understand, easy to extend and easy to reuse. Adding methods just creates 'bucket' classes whose real intent is diluted. – David Osborne Feb 07 '14 at 22:54

1 Answer


Your second paragraph answers your question for you:

If I spread out the business logic across a couple of classes … and the 'main' business logic class instantiates these [classes] when required … the code would be neater, spread out logically, and thus easier to maintain.

The per-instance overhead of an object is 8 bytes (32-bit) or 16 bytes (64-bit), plus whatever instance data the object requires. The minimum allocation is 12 or 24 bytes, per Jon Skeet's answer to the question What is the memory overhead of a .NET Object?
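If you want to sanity-check that figure yourself, here's a rough sketch using GC.GetTotalMemory; the exact number depends on the runtime and on whether you're running 32- or 64-bit:

```csharp
using System;

class Empty { }   // no instance fields: object header plus padding only

class Program
{
    static void Main()
    {
        const int count = 1000000;
        var keep = new Empty[count];   // hold references so nothing is collected

        long before = GC.GetTotalMemory(true);
        for (int i = 0; i < count; i++)
        {
            keep[i] = new Empty();
        }
        long after = GC.GetTotalMemory(true);

        // Expect roughly 12 bytes per object on 32-bit, 24 on 64-bit.
        Console.WriteLine("Approx. bytes per object: {0}",
            (after - before) / (double)count);
        GC.KeepAlive(keep);
    }
}
```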

Don't sweat it. Get your object model right and your notation right, first. If you have a problem, then (and only then), you can start to worry about it.
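If you ever do reach that point, measuring is cheap. A minimal sketch with Stopwatch, comparing a fresh helper object per call against a reused instance (the Helper class is made up for illustration; absolute numbers will vary with your hardware and the JIT):

```csharp
using System;
using System.Diagnostics;

class Helper
{
    public int DoWork(int x) { return x * 2; }
}

class Program
{
    static void Main()
    {
        const int iterations = 10000000;
        long sum = 0;

        // Time instantiating a new helper object on every call.
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            var helper = new Helper();
            sum += helper.DoWork(i);
        }
        sw.Stop();
        Console.WriteLine("New object per call: {0} ms", sw.ElapsedMilliseconds);

        // Time reusing a single instance (roughly what calling a method
        // on the 'main' class would cost).
        var shared = new Helper();
        sw.Restart();
        for (int i = 0; i < iterations; i++)
        {
            sum += shared.DoWork(i);
        }
        sw.Stop();
        Console.WriteLine("Shared instance:     {0} ms", sw.ElapsedMilliseconds);

        Console.WriteLine(sum); // keep the work from being optimized away
    }
}
```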

The average lifespan of a piece of code is something north of 7 years. People are expensive; computers and memory are cheap. Lacking strong arguments to the contrary, clarity and maintainability trump performance: write code with an eye towards the guy 4 years from now who will have to fix your code. Don't fix problems you don't yet have.

Nicholas Carey