I have 3 byte arrays in C# that I need to combine into one. What would be the most efficient method to complete this task?
- What specifically are your requirements? Are you taking the union of the arrays, or are you preserving multiple instances of the same value? Do you want the items sorted, or do you want to preserve the ordering in the initial arrays? Are you looking for efficiency in speed or in lines of code? – jason Jan 06 '09 at 02:59
- Please try to be more clear in your questions. This vague question has caused a lot of confusion amongst those people good enough to take the time to answer you. – Drew Noakes Jan 06 '09 at 10:20
- If you are able to use LINQ, then you can just use the [`Concat`](http://msdn.microsoft.com/en-us/library/bb302894.aspx) method: `IEnumerable<byte> arrays = array1.Concat(array2).Concat(array3);` – casperOne Jan 06 '09 at 03:11
13 Answers
For primitive types (including bytes), use `System.Buffer.BlockCopy` instead of `System.Array.Copy`. It's faster.
I timed each of the suggested methods in a loop executed 1 million times using 3 arrays of 10 bytes each. Here are the results:
- New Byte Array using `System.Array.Copy` - 0.2187556 seconds
- New Byte Array using `System.Buffer.BlockCopy` - 0.1406286 seconds
- `IEnumerable<byte>` using C# yield operator - 0.0781270 seconds
- `IEnumerable<byte>` using LINQ's `Concat<>` - 0.0781270 seconds
I increased the size of each array to 100 elements and re-ran the test:
- New Byte Array using `System.Array.Copy` - 0.2812554 seconds
- New Byte Array using `System.Buffer.BlockCopy` - 0.2500048 seconds
- `IEnumerable<byte>` using C# yield operator - 0.0625012 seconds
- `IEnumerable<byte>` using LINQ's `Concat<>` - 0.0781265 seconds
I increased the size of each array to 1000 elements and re-ran the test:
- New Byte Array using `System.Array.Copy` - 1.0781457 seconds
- New Byte Array using `System.Buffer.BlockCopy` - 1.0156445 seconds
- `IEnumerable<byte>` using C# yield operator - 0.0625012 seconds
- `IEnumerable<byte>` using LINQ's `Concat<>` - 0.0781265 seconds
Finally, I increased the size of each array to 1 million elements and re-ran the test, executing each loop only 4000 times:
- New Byte Array using `System.Array.Copy` - 13.4533833 seconds
- New Byte Array using `System.Buffer.BlockCopy` - 13.1096267 seconds
- `IEnumerable<byte>` using C# yield operator - 0 seconds
- `IEnumerable<byte>` using LINQ's `Concat<>` - 0 seconds
So, if you need a new byte array, use:
byte[] rv = new byte[a1.Length + a2.Length + a3.Length];
System.Buffer.BlockCopy(a1, 0, rv, 0, a1.Length);
System.Buffer.BlockCopy(a2, 0, rv, a1.Length, a2.Length);
System.Buffer.BlockCopy(a3, 0, rv, a1.Length + a2.Length, a3.Length);
But if you can use an `IEnumerable<byte>`, DEFINITELY prefer LINQ's `Concat<>` method. It's only slightly slower than the C# yield operator, but it's more concise and more elegant.
IEnumerable<byte> rv = a1.Concat(a2).Concat(a3);
If you have an arbitrary number of arrays and are using .NET 3.5, you can make the `System.Buffer.BlockCopy` solution more generic, like this:
private byte[] Combine(params byte[][] arrays)
{
    byte[] rv = new byte[arrays.Sum(a => a.Length)];
    int offset = 0;
    foreach (byte[] array in arrays) {
        System.Buffer.BlockCopy(array, 0, rv, offset, array.Length);
        offset += array.Length;
    }
    return rv;
}
*Note: The above block requires adding the following namespace at the top of the file for it to work:*
using System.Linq;
To Jon Skeet's point regarding iteration of the resulting data structures (byte array vs. IEnumerable<byte>), I re-ran the last timing test (1 million elements, 4000 iterations), adding a loop that iterates over the full result with each pass:
- New Byte Array using `System.Array.Copy` - 78.20550510 seconds
- New Byte Array using `System.Buffer.BlockCopy` - 77.89261900 seconds
- `IEnumerable<byte>` using C# yield operator - 551.7150161 seconds
- `IEnumerable<byte>` using LINQ's `Concat<>` - 448.1804799 seconds
The point is, it is VERY important to understand the efficiency of both the creation and the usage of the resulting data structure. Simply focusing on the efficiency of the creation may overlook the inefficiency associated with the usage. Kudos, Jon.
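The timing code itself isn't reproduced above; purely as an illustration, a minimal Stopwatch-based harness along the following lines (hypothetical array sizes, names and iteration counts) could be used to run this kind of comparison, making sure the result is actually consumed in each case:

using System;
using System.Diagnostics;
using System.Linq;

class ConcatTimingSketch
{
    static void Main()
    {
        byte[] a1 = new byte[10], a2 = new byte[10], a3 = new byte[10];
        const int iterations = 1000000;
        long sink = 0;

        // Time the Buffer.BlockCopy approach, consuming the result so the
        // comparison with lazy approaches is apples-to-apples.
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            byte[] rv = new byte[a1.Length + a2.Length + a3.Length];
            Buffer.BlockCopy(a1, 0, rv, 0, a1.Length);
            Buffer.BlockCopy(a2, 0, rv, a1.Length, a2.Length);
            Buffer.BlockCopy(a3, 0, rv, a1.Length + a2.Length, a3.Length);
            foreach (byte b in rv) sink += b;
        }
        sw.Stop();
        Console.WriteLine("BlockCopy: {0}", sw.Elapsed);

        // Time the LINQ approach the same way: build *and* enumerate the sequence.
        sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            foreach (byte b in a1.Concat(a2).Concat(a3)) sink += b;
        }
        sw.Stop();
        Console.WriteLine("Concat:    {0} (checksum {1})", sw.Elapsed, sink);
    }
}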

- Yes. There is no memory being allocated/copied (aside from the iterator anonymous class). – FryGuy Jan 06 '09 at 06:45
- But are you actually converting it into an array at the end, as the question requires? If not, of course it's faster - but it's not fulfilling the requirements. – Jon Skeet Jan 06 '09 at 08:17
- If you argue the IEnumerable<> solution doesn't fulfill the requirement, then I'll argue the requirement is poor because it offers no context. A combined array *may* not be necessary, in which case it'd be foolish to create one just to satisfy a poor requirement. Better to update the requirement. – Matt Davis Jan 06 '09 at 19:49
- Awesome summary of multiple methods with pros and cons. +1. Much appreciated. – Gerald Davis Aug 23 '11 at 12:56
- Re: Matt Davis - It doesn't matter if your "requirements" need to turn the IEnumerable into an array - all that your requirements need is that the result is actually *used in some fashion*. The reason your performance tests on IEnumerable are so low is because *you are not actually doing anything*! LINQ does not perform any of its work until you attempt to use the results. For this reason I find your answer objectively incorrect, and it could lead others to use LINQ when they absolutely should not if they care about performance. – csauve May 16 '13 at 20:09
- As a general rule, if your performance tests show 0 or close to 0 (as your IEnumerable results do), there is probably something wrong with your test. – csauve May 16 '13 at 20:13
- @csauve, Jon Skeet made the same point over 4 years ago, and I addressed it as part of the edit to my answer. I appreciate the clarification, but perhaps commenting after reading the *entire* answer would be more beneficial in the future. – Matt Davis May 16 '13 at 20:24
- I read the entire answer including your update, and my comment stands. I know I'm joining the party late, but the answer is grossly misleading and the first half is patently *false*. – csauve May 16 '13 at 20:48
- So when you say "But, if you can use an `IEnumerable<byte>`, DEFINITELY prefer Linq's Concat<> method," that is a false statement. When you make false statements (and make them in bold all caps, DEFINITELY, no less) you should correct them inline rather than 20 lines down. – csauve May 16 '13 at 20:51
- I am also trying to make it very clear that the "result must be a byte array" requirement is of no consequence, which *it seems* you still cling to. – csauve May 16 '13 at 20:53
- Once you get the required 2000 rep, feel free to edit the answer to your satisfaction. Alternatively, you can post an answer yourself and provide as much clarification as you wish. Until then, your comments will serve to complement my edited answer. – Matt Davis May 16 '13 at 21:49
- If the code that generated these results isn't available for public scrutiny, these figures are as good as fiction. – core May 30 '13 at 23:54
- Why is the answer that contains false and misleading information the top-voted answer, and why was it edited to basically completely **invalidate its original statement** after someone (Jon Skeet) pointed out that it didn't even answer OP's question? – MrCC Jan 27 '14 at 12:26
- @MattDavis, would you please publish your test code too? I cannot get the same result proportions as yours. – AaA Feb 12 '14 at 04:04
- Misleading answer. Even the edited version isn't answering the question. – Serge Profafilecebook Jun 27 '14 at 08:14
- The `IEnumerable<byte>` usage in your answer shows your limited knowledge of Linq, because you only built the expressions but never materialized the result. If you target extreme performance, you must avoid Linq. Thanks for pointing out the faster execution of `System.Buffer.BlockCopy`. – Shenron Mar 24 '17 at 15:09
Many of the answers seem to me to be ignoring the stated requirements:
- The result should be a byte array
- It should be as efficient as possible
These two together rule out a LINQ sequence of bytes - anything with `yield` is going to make it impossible to get the final size without iterating through the whole sequence.
If those aren't the real requirements, of course, LINQ could be a perfectly good solution (or the `IList<T>` implementation). However, I'll assume that Superdumbell knows what he wants.
(EDIT: I've just had another thought. There's a big semantic difference between making a copy of the arrays and reading them lazily. Consider what happens if you change the data in one of the "source" arrays after calling the `Combine` (or whatever) method but before using the result - with lazy evaluation, that change will be visible. With an immediate copy, it won't. Different situations will call for different behaviour - just something to be aware of.)
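For instance (a small sketch, assuming `using System;`, `using System.Collections.Generic;` and `using System.Linq;`, plus the eager `Combine` defined below):

byte[] first = { 1, 2 };
byte[] second = { 3, 4 };

byte[] copied = Combine(first, second);          // eager: the bytes are copied right now
IEnumerable<byte> lazy = first.Concat(second);   // lazy: nothing has been read yet

first[0] = 99;                                   // mutate a source array afterwards

Console.WriteLine(copied[0]);       // 1  - the copy took a snapshot
Console.WriteLine(lazy.First());    // 99 - the lazy sequence reads the live array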
Here are my proposed methods - which are very similar to those contained in some of the other answers, certainly :)
public static byte[] Combine(byte[] first, byte[] second)
{
    byte[] ret = new byte[first.Length + second.Length];
    Buffer.BlockCopy(first, 0, ret, 0, first.Length);
    Buffer.BlockCopy(second, 0, ret, first.Length, second.Length);
    return ret;
}

public static byte[] Combine(byte[] first, byte[] second, byte[] third)
{
    byte[] ret = new byte[first.Length + second.Length + third.Length];
    Buffer.BlockCopy(first, 0, ret, 0, first.Length);
    Buffer.BlockCopy(second, 0, ret, first.Length, second.Length);
    Buffer.BlockCopy(third, 0, ret, first.Length + second.Length, third.Length);
    return ret;
}

public static byte[] Combine(params byte[][] arrays)
{
    byte[] ret = new byte[arrays.Sum(x => x.Length)];
    int offset = 0;
    foreach (byte[] data in arrays)
    {
        Buffer.BlockCopy(data, 0, ret, offset, data.Length);
        offset += data.Length;
    }
    return ret;
}
Of course the "params" version requires creating an array of the byte arrays first, which introduces extra inefficiency.
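A hypothetical usage sketch, just to show which overload gets chosen:

byte[] a1 = { 1 }, a2 = { 2 }, a3 = { 3 };

byte[] three = Combine(a1, a2, a3);      // binds to the dedicated three-array overload; no byte[][] is allocated
byte[] four = Combine(a1, a2, a3, a1);   // no fixed-arity overload matches, so this uses the params version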

- Jon, I understand precisely what you're saying. My only point is that sometimes questions are asked with a particular implementation already in mind without realizing that other solutions exist. Simply providing an answer without offering alternatives seems like a disservice to me. Thoughts? – Matt Davis Jan 06 '09 at 19:58
- @Matt: Yes, offering alternatives is good - but it's worth explaining that they *are* alternatives rather than passing them off as the answer to the question being asked. (I'm not saying that you did that - your answer is very good.) – Jon Skeet Jan 06 '09 at 20:11
- (Although I think your performance benchmark should show the time taken to go through all the results in each case, too, to avoid giving lazy evaluation an unfair advantage.) – Jon Skeet Jan 06 '09 at 20:12
- You can also change `params byte[][]` to `IEnumerable<byte[]>`, thus avoiding the need for an array. – Ohad Schneider Oct 25 '10 at 14:54
- Even without meeting the requirement of "result must be an array", simply meeting a requirement of "result must be used in some fashion" would make LINQ non-optimal. I think that requirement to be able to use the result should be implicit! – csauve May 16 '13 at 20:18
- @andleer: Aside from anything else, Buffer.BlockCopy only works with primitive types. – Jon Skeet Mar 24 '14 at 15:58
I took Matt's LINQ example one step further for code cleanliness:
byte[] rv = a1.Concat(a2).Concat(a3).ToArray();
In my case, the arrays are small, so I'm not concerned about performance.

- This is definitely clear, readable, requires no external libraries/helpers, and, in terms of development time, is quite efficient. Great when run-time performance is not critical. – binki Mar 21 '18 at 15:57
If you simply need a new byte array, then use the following:
byte[] Combine(byte[] a1, byte[] a2, byte[] a3)
{
    byte[] ret = new byte[a1.Length + a2.Length + a3.Length];
    Array.Copy(a1, 0, ret, 0, a1.Length);
    Array.Copy(a2, 0, ret, a1.Length, a2.Length);
    Array.Copy(a3, 0, ret, a1.Length + a2.Length, a3.Length);
    return ret;
}
Alternatively, if you just need a single `IEnumerable<byte>`, consider using the C# 2.0 yield operator:
IEnumerable<byte> Combine(byte[] a1, byte[] a2, byte[] a3)
{
    foreach (byte b in a1)
        yield return b;
    foreach (byte b in a2)
        yield return b;
    foreach (byte b in a3)
        yield return b;
}
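A usage sketch (hypothetical arrays `a1`, `a2`, `a3`; note that in C# 2.0, materializing the sequence means copying it into a `List<byte>` first):

// Stream the bytes without ever allocating a combined array:
foreach (byte b in Combine(a1, a2, a3))
    Console.Write(b);

// Or materialize it if a byte[] really is needed:
byte[] combined = new List<byte>(Combine(a1, a2, a3)).ToArray();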

- I've done something similar to your 2nd option to merge large streams, worked like a charm. :) – Greg D Jan 06 '09 at 03:21
I actually ran into some issues using Concat... (with arrays in the 10-million-element range, it actually crashed).
I found the following to be simple and easy; it works well enough without crashing on me, and it works for ANY number of arrays (not just three). It uses LINQ:
public static byte[] ConcatByteArrays(params byte[][] arrays)
{
    return arrays.SelectMany(x => x).ToArray();
}

- This should be the accepted answer due to its simplicity, versatility and pedagogical value >;-) – smirkingman Aug 15 '23 at 21:02
The MemoryStream class does this job pretty nicely for me. I couldn't get the Buffer class to run as fast as MemoryStream.
using (MemoryStream ms = new MemoryStream())
{
    ms.Write(BitConverter.GetBytes(22), 0, 4);
    ms.Write(BitConverter.GetBytes(44), 0, 4);
    ms.ToArray();
}
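Applied to the original question - three hypothetical arrays `a1`, `a2`, `a3` - the same idea looks like this, presizing the stream so its internal buffer isn't reallocated:

using (MemoryStream ms = new MemoryStream(a1.Length + a2.Length + a3.Length))
{
    ms.Write(a1, 0, a1.Length);
    ms.Write(a2, 0, a2.Length);
    ms.Write(a3, 0, a3.Length);
    byte[] combined = ms.ToArray();
}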

- As qwe stated, I did a test in a loop 10,000,000 times, and MemoryStream came out 290% SLOWER than Buffer.BlockCopy – esac Jan 12 '11 at 17:18
- In some cases you may be iterating over an enumerable of arrays without any foreknowledge of the individual array lengths. This works well in that scenario; BlockCopy relies on having a destination array pre-created. – Sentinel Jun 27 '17 at 14:18
- As @Sentinel said, this answer is perfect for me because I have no knowledge of the size of things I have to write, and it lets me do things very cleanly. It also plays nice with .NET Core 3's `[ReadOnly]Span<byte>`! – Water Oct 09 '19 at 17:55
- If you initialize MemoryStream with the final size, it will not be recreated and it will be faster. @esac – Tono Nam Mar 28 '20 at 19:30
public static byte[] Concat(params byte[][] arrays) {
    using (var mem = new MemoryStream(arrays.Sum(a => a.Length))) {
        foreach (var array in arrays) {
            mem.Write(array, 0, array.Length);
        }
        return mem.ToArray();
    }
}

- Your answer could be better if you posted a little explanation of what this code sample does. – AFract Dec 10 '14 at 17:30
- It concatenates an array of byte arrays into one large byte array, like this: [1,2,3] + [4,5] + [6,7] ==> [1,2,3,4,5,6,7] – Peter Ertl Dec 15 '14 at 16:36
You can use generics to combine arrays. The following code can easily be expanded to three arrays (see the sketch after the code). This way you never need to duplicate code for different array types. Some of the above answers seem overly complex to me.
private static T[] CombineTwoArrays<T>(T[] a1, T[] a2)
{
    T[] arrayCombined = new T[a1.Length + a2.Length];
    Array.Copy(a1, 0, arrayCombined, 0, a1.Length);
    Array.Copy(a2, 0, arrayCombined, a1.Length, a2.Length);
    return arrayCombined;
}
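For example, expanded to three arrays (a sketch with a hypothetical name; it copies each source once into the final array rather than chaining the two-array version):

private static T[] CombineThreeArrays<T>(T[] a1, T[] a2, T[] a3)
{
    T[] arrayCombined = new T[a1.Length + a2.Length + a3.Length];
    Array.Copy(a1, 0, arrayCombined, 0, a1.Length);
    Array.Copy(a2, 0, arrayCombined, a1.Length, a2.Length);
    Array.Copy(a3, 0, arrayCombined, a1.Length + a2.Length, a3.Length);
    return arrayCombined;
}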

public static bool MyConcat<T>(ref T[] base_arr, ref T[] add_arr)
{
    try
    {
        int base_size = base_arr.Length;
        // Marshal.SizeOf gives the element size in bytes (see the comment below for its limitations).
        int size_T = System.Runtime.InteropServices.Marshal.SizeOf(base_arr[0]);
        Array.Resize(ref base_arr, base_size + add_arr.Length);
        Buffer.BlockCopy(add_arr, 0, base_arr, base_size * size_T, add_arr.Length * size_T);
    }
    catch (IndexOutOfRangeException ioor)
    {
        // MessageBox requires a reference to System.Windows.Forms.
        MessageBox.Show(ioor.Message);
        return false;
    }
    return true;
}

- Unfortunately this won't work with all types. Marshal.SizeOf() will be unable to return a size for many types (try using this method with arrays of strings and you'll see an exception: "Type 'System.String' cannot be marshaled as an unmanaged structure; no meaningful size or offset can be computed"). You could try limiting the type parameter to value types only (by adding `where T : struct`), but - not being an expert in the innards of the CLR - I couldn't say whether you might get exceptions on certain structs as well (e.g. if they contain reference type fields). – Daniel Scott Sep 12 '15 at 01:39
/// <summary>
/// Combine two arrays with offset and count
/// </summary>
/// <param name="src1">First source array</param>
/// <param name="offset1">Start index in <paramref name="src1"/></param>
/// <param name="count1">Number of elements to take from <paramref name="src1"/></param>
/// <param name="src2">Second source array</param>
/// <param name="offset2">Start index in <paramref name="src2"/></param>
/// <param name="count2">Number of elements to take from <paramref name="src2"/></param>
/// <returns>A new array containing the selected ranges, concatenated</returns>
public static T[] Combine<T>(this T[] src1, int offset1, int count1, T[] src2, int offset2, int count2)
    => Enumerable.Range(0, count1 + count2)
        .Select(a => (a < count1) ? src1[offset1 + a] : src2[offset2 + a - count1])
        .ToArray();
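A hypothetical usage sketch, taking a slice from each source array:

byte[] a1 = { 1, 2, 3, 4 };
byte[] a2 = { 5, 6, 7, 8 };

// Two elements from a1 starting at index 1, then two from a2 starting at index 2:
byte[] merged = a1.Combine(1, 2, a2, 2, 2);   // { 2, 3, 7, 8 }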

- Thank you for the contribution. Since there are already a number of highly rated answers to this from over a decade ago, it'd be useful to offer an explanation of what distinguishes your approach. Why should someone use this instead of e.g. the accepted answer? – Jeremy Caney May 10 '20 at 04:53
- I like using extension methods because they make for clear, understandable code. This code selects from two arrays with a start index and count and concatenates them. Also, the method is generic and an extension, so it is ready for all array types. – Mehmet ÜNLÜ May 14 '20 at 20:50
- That makes good sense to me! Do you mind editing your answer to include that information? I think it would be valuable to future readers to have that upfront, so they can quickly distinguish your approach from the existing answers. Thank you! – Jeremy Caney May 14 '20 at 21:12
Here's a generalization of the answer provided by @Jon Skeet. It is basically the same, only it is usable for any type of array, not only bytes:
public static T[] Combine<T>(T[] first, T[] second)
{
    T[] ret = new T[first.Length + second.Length];
    Buffer.BlockCopy(first, 0, ret, 0, first.Length);
    Buffer.BlockCopy(second, 0, ret, first.Length, second.Length);
    return ret;
}

public static T[] Combine<T>(T[] first, T[] second, T[] third)
{
    T[] ret = new T[first.Length + second.Length + third.Length];
    Buffer.BlockCopy(first, 0, ret, 0, first.Length);
    Buffer.BlockCopy(second, 0, ret, first.Length, second.Length);
    Buffer.BlockCopy(third, 0, ret, first.Length + second.Length, third.Length);
    return ret;
}

public static T[] Combine<T>(params T[][] arrays)
{
    T[] ret = new T[arrays.Sum(x => x.Length)];
    int offset = 0;
    foreach (T[] data in arrays)
    {
        Buffer.BlockCopy(data, 0, ret, offset, data.Length);
        offset += data.Length;
    }
    return ret;
}

- DANGER! These methods won't work properly with any array type with elements longer than one byte (pretty much everything other than byte arrays). Buffer.BlockCopy() works with quantities of bytes, not numbers of array elements. The reason it can be used easily with a byte array is that every element of the array is a single byte, so the physical length of the array equals the number of elements. To turn Jon's byte[] methods into generic methods you'll need to multiply all the offsets and lengths by the byte-length of a single array element - otherwise you won't copy all of the data. – Daniel Scott Sep 12 '15 at 01:08
- Normally to make this work you'd compute the size of a single element using `sizeof(...)` and multiply that by the number of elements you want to copy, but sizeof can't be used with a generic type. It is possible - for some types - to use `Marshal.SizeOf(typeof(T))`, but you'll get runtime errors with certain types (e.g. strings). Someone with more thorough knowledge of the inner workings of CLR types will be able to point out all the possible traps here. Suffice to say that writing a generic array concatenation method [using BlockCopy] isn't trivial. – Daniel Scott Sep 12 '15 at 01:10
- And finally - you can write a generic array concatenation method like this in almost exactly the way shown above (with slightly lower performance) by using Array.Copy instead. Just replace all the Buffer.BlockCopy calls with Array.Copy calls. – Daniel Scott Sep 12 '15 at 01:11
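A sketch of that last fix, applied to the `params` overload above (Array.Copy counts elements rather than bytes, so the generic version stays correct for any element type):

public static T[] Combine<T>(params T[][] arrays)
{
    T[] ret = new T[arrays.Sum(x => x.Length)];
    int offset = 0;
    foreach (T[] data in arrays)
    {
        // Array.Copy works in element counts, not byte counts.
        Array.Copy(data, 0, ret, offset, data.Length);
        offset += data.Length;
    }
    return ret;
}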
All you need to do is pass a list of byte arrays, and this function will return the merged array of bytes. This is the best solution, I think :).
// Note: requires the iTextSharp library (PdfSmartCopy and PdfReader live in
// iTextSharp.text.pdf). This treats each byte array as a PDF document and
// merges them as PDFs, rather than simply concatenating the raw bytes.
public static byte[] CombineMultipleByteArrays(List<byte[]> lstByteArray)
{
    using (var ms = new MemoryStream())
    {
        using (var doc = new iTextSharp.text.Document())
        {
            using (var copy = new PdfSmartCopy(doc, ms))
            {
                doc.Open();
                foreach (var p in lstByteArray)
                {
                    using (var reader = new PdfReader(p))
                    {
                        copy.AddDocument(reader);
                    }
                }
                doc.Close();
            }
        }
        return ms.ToArray();
    }
}

Concat is the right answer, but for some reason a hand-rolled thing is getting the most votes. If you like that answer, perhaps you'd like this more general solution even more:
IEnumerable<byte> Combine(params byte[][] arrays)
{
    foreach (byte[] a in arrays)
        foreach (byte b in a)
            yield return b;
}
which would let you do things like:
byte[] c = Combine(new byte[] { 0, 1, 2 }, new byte[] { 3, 4, 5 }).ToArray();

- The question specifically asks for the most *efficient* solution. Enumerable.ToArray isn't going to be very efficient, as it can't know the size of the final array to start with - whereas the hand-rolled techniques can. – Jon Skeet Jan 06 '09 at 08:16