Here's a general purpose function that merges an arbitrary number of arrays, preventing duplicate values of the passed-in key.
As it merges, it builds a temporary index of the key values seen so far and only merges elements whose key value is new. This index lookup should be much faster than a linear search through the results, particularly as the arrays get large. As a side benefit of this scheme, it filters all duplicates, even duplicates that occur within a single source array.
If an element does not have the keyName, it is skipped (though that logic could be reversed, depending on what error handling you want for that case; see the variant sketch below):
var array1 = [{ "name": "foo", "age": "22" }, { "name": "bar", "age": "33" }];
var array2 = [{ "name": "foo", "age": "22" }, { "name": "buz", "age": "35" }];
function mergeArrays(keyName /* pass arrays as additional arguments */) {
    // a bare object so key values can't collide with properties
    // inherited from Object.prototype (e.g. "toString")
    var index = Object.create(null), i, len, merge = [], arr, name;
    // arguments[0] is keyName, so the source arrays start at index 1
    for (var j = 1; j < arguments.length; j++) {
        arr = arguments[j];
        for (i = 0, len = arr.length; i < len; i++) {
            name = arr[i][keyName];
            // keep the element only if it has the key and that
            // key value hasn't been seen before
            if (typeof name !== "undefined" && !(name in index)) {
                index[name] = true;
                merge.push(arr[i]);
            }
        }
    }
    return merge;
}
var merged = mergeArrays("name", array1, array2);
// Returns:
// [{"name":"foo","age":"22"},{"name":"bar","age":"33"},{"name":"buz","age":"35"}]
You can see it work here: http://jsfiddle.net/jfriend00/8WfFW/
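If you'd rather treat a missing key as an error instead of silently skipping the element, the inner check can be reversed. A minimal sketch of that variant (the error message is just an example):

// variant of the inner loop body: reject elements missing the key
name = arr[i][keyName];
if (typeof name === "undefined") {
    throw new Error("element has no '" + keyName + "' property");
}
if (!(name in index)) {
    index[name] = true;
    merge.push(arr[i]);
}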
When this algorithm is benchmarked against Matt's algorithm in jsperf with larger arrays, this one runs around 20x faster.
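If you can rely on ES2015 features, the same index idea can be expressed with a Set and rest parameters; a minimal sketch (the name mergeArraysBy is just for illustration):

// ES2015 sketch of the same approach: a Set tracks the key values seen so far
function mergeArraysBy(keyName, ...arrays) {
    const seen = new Set();
    const merged = [];
    for (const arr of arrays) {
        for (const item of arr) {
            const key = item[keyName];
            // keep the element only if it has the key and it's unseen
            if (key !== undefined && !seen.has(key)) {
                seen.add(key);
                merged.push(item);
            }
        }
    }
    return merged;
}

mergeArraysBy("name", array1, array2);
// [{"name":"foo","age":"22"},{"name":"bar","age":"33"},{"name":"buz","age":"35"}]

A Set gives the same constant-time membership test as the object index, without the prototype-chain caveat.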
