For instance, let's say you use fully-qualified namespaces instead of aliases or 'using' statements in an extremely large piece of software. Let's say you type in all kinds of redundant code that doesn't really need to be there, your loops keep iterating over arrays even after they've found what they were looking for, and so on. Would these types of code inefficiencies affect the speed of execution of a piece of software today?
-
Could somebody please explain to me why this question is being downvoted and how I could improve it? – user Feb 13 '14 at 06:04
-
I think code bloat might be the wrong term. I usually think of code bloat as 'unnecessary complexity' in the code that adds little or no value, usually due to bad design and rushed coding. I would put things like array inefficiencies in a different category. – xdhmoore Feb 13 '14 at 06:04
-
I agree. I'm talking about the length of the code in terms of the number of characters used to produce it: overly lengthy code. – user Feb 13 '14 at 06:05
-
This seems better suited for programmers.stackexchange.com. – Sumner Evans Feb 13 '14 at 06:05
2 Answers
If by 'code bloat' you mean code that is less readable and unnecessarily complex, the main cost of 'code bloat' is longer development time, not slower code. That doesn't mean there's never a cost in terms of efficiency, but sometimes the cleaner code is actually the slower code. So I would say that code bloat doesn't necessarily make the code slower or faster, except that unreadable code can keep people from writing performant code, because the hurdle for understanding it and optimizing it is higher.
If by code bloat, you mean algorithmic efficiency, it probably depends on what you are doing. Something that has a performance curve of O(e^n) for large datasets is going to be slow, no matter how fast your processor is. That said, I usually base it on the size of n. If I know my dataset is going to be small (a hard-coded dropdown menu with 7 items), then I won't worry as much if I'm doing a linear search O(n) instead of a binary search in O(log(n)). But I usually tend towards doing it faster if possible.
Big-O notation, in case I'm speaking Greek: https://en.wikipedia.org/wiki/Big_O_notation
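In case a concrete illustration helps, here is a minimal C# sketch (my own, not part of the original answer; the menu data is made up) of the linear-vs-binary search trade-off described above. On a 7-item list the difference is negligible, which is the point; only for large n does the O(log n) version matter:

using System;

class SearchDemo
{
    // O(n): scan every element until a match is found.
    static int LinearSearch(int[] items, int target)
    {
        for (int i = 0; i < items.Length; i++)
            if (items[i] == target)
                return i;
        return -1;
    }

    // O(log n): repeatedly halve a sorted range.
    static int BinarySearch(int[] items, int target)
    {
        int lo = 0, hi = items.Length - 1;
        while (lo <= hi)
        {
            int mid = lo + (hi - lo) / 2;
            if (items[mid] == target) return mid;
            if (items[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }

    static void Main()
    {
        int[] menu = { 1, 3, 5, 7, 9, 11, 13 }; // tiny, sorted "dropdown" dataset
        Console.WriteLine(LinearSearch(menu, 9)); // prints 4
        Console.WriteLine(BinarySearch(menu, 9)); // prints 4
    }
}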

-
I agree with you in terms of Big O and I am familiar with how that affects speed of execution in programs. However, this is more attributed to algorithms as far as I know. I'm speaking about length of code, let's say, if I were to isolate my concern into one simple factor. Putting efficiency of the code aside, assuming two code samples that use the exact same methodologies, would one with triple the length be significantly slower given the amount of RAM/processing power offered to programmers today? – user Feb 13 '14 at 06:50
-
No clue. Probably depends on what language you're using and how it's getting compiled. Maybe that is the crux of your question, and I guess I don't know enough to answer that. Unless you're doing something in a highly constrained environment (embedded systems, realtime systems, or high-performance computing), I would think that algorithmic efficiency and code readability would be more of an issue. – xdhmoore Feb 13 '14 at 06:53
Of course, the number of characters (or even the number of lines of code) in a program doesn't indicate its complexity in general, so we can't say much about its influence on overall throughput, at least not in general terms. To be more accurate, the complexity of your program really does matter, not just today but even more in the future. Consider that our needs grow with our abilities: these days we face big data, meaning thousands of terabytes, yet about 15 years ago that volume of data was unimaginable.
Take a look at these two snippets:
// Block 1 - O(1): thousands of characters, but constant-time work
int abcdef1;
int abcdef2;
//...
int abcdef100000;
//----------------
// Block 2 - O(n^2): a handful of characters, but quadratic work
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        ; //do something
//----------------
It's clear that the number of characters is not a representative measure of complexity. For more details visit:

-
I agree with you fully given your two examples above, but I'm talking about two of the same processes. Let's say you take a large application. In one version you refer to namespaces in their fully qualified manner. So let's say in C#, you refer to the console as System.Console.WriteLine("") throughout the entire application and do this with all references to all static methods and in every spot where you can. Now, same application, another version, you use "using System" and other using statements. Would this reduction in the length of code affect the speed of execution in a large-scale program? – user Feb 13 '14 at 06:54
-
Again, this sort of verbosity is not so important. Languages like C# use a compiler, and with that in mind it's obvious that in either case (using System or not) you are just telling the compiler something; the compiler uses these hints only to locate some classes, and there is no reason for it to generate different low-level code in the end. – Mohsen Kamrani Feb 13 '14 at 07:21
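To make that concrete, here is a small C# sketch (mine, not from the commenter): both calls below resolve to the same System.Console.WriteLine method, and the using directive only changes how the name is looked up at compile time, so the compiled output is the same either way.

using System;

class UsingVsQualified
{
    static void Main()
    {
        System.Console.WriteLine("fully qualified"); // works with or without the using directive
        Console.WriteLine("via 'using System'");     // relies on the directive above
    }
}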
-
user3274830, that is complete news to me and very much answers my question. Thank you. – user Feb 13 '14 at 07:36