Uniques - Dictionary
A very solid (and fast) way of returning a 1D-array of unique values is to use a conventional Dictionary object, as below:
Sub UniquesDictionary()

    Dim lr As Long, x As Long
    Dim arr As Variant
    Dim dict As Object: Set dict = CreateObject("Scripting.Dictionary")

    With Sheet1
        'Find the last used row
        lr = .Cells(.Rows.Count, 1).End(xlUp).Row
        arr = .Range("A2:A" & lr).Value
    End With

    'Loop through memory and fill dictionary
    For x = LBound(arr) To UBound(arr)
        dict(arr(x, 1)) = 1
    Next x

    'Pull unique items into a 1D-array
    arr = dict.Keys

End Sub
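If you then want to write the unique values back to the worksheet, the zero-based 1D-array returned by dict.Keys can be dumped in one go. A minimal sketch (the target of column C is just an illustrative assumption):

```vba
'Assumes dict has been filled as above; column C is only an example target
With Sheet1
    'dict.Keys returns a zero-based 1D-array; Transpose turns it into a column
    .Range("C2").Resize(dict.Count, 1).Value = Application.Transpose(dict.Keys)
End With
```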
Uniques - Evaluate
Whilst the above works, the wish was to avoid any loop. The way to do this is to use .Evaluate, see below:
Sub UniquesEvaluate()

    Dim lr As Long
    Dim arr As Variant

    With Sheet1
        'Find the last used row
        lr = .Cells(.Rows.Count, 1).End(xlUp).Row
        'Get array of unique values
        arr = Filter(.Evaluate("TRANSPOSE(IF(ROW(A2:A" & lr & ")<>MATCH(A2:A" & lr & ",A2:A" & lr & ",0)+1,""|"",A2:A" & lr & "))"), "|", False)
    End With

End Sub
It might seem like a long formula but it really isn't that impressive and boils down to:
=IF(ROW(A2:A8)<>MATCH(A2:A8,A2:A8,0)+1,"|",A2:A8)
The TRANSPOSE is only there to return a 1D-array for FILTER to work with. The initial formula will only return those values that sit on the rows where they are first encountered through MATCH; otherwise it returns a pipe-symbol.
Thus TRANSPOSE(<formula>) returns a 1D-array, e.g.: {A,B,|,|,C,|,|}. Filter then uses this 1D-array as input, returning an array with the pipe-symbols filtered out by passing FALSE to the "include" parameter, e.g.: Filter(<arr>,"|",FALSE) > {A,B,C}.
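The Filter step on its own can be demoed with a hard-coded array (the values here are just an illustration):

```vba
Sub FilterDemo()
    Dim arr As Variant
    arr = Split("A,B,|,|,C,|,|", ",")
    'Include:=False drops every element that contains "|"
    arr = Filter(arr, "|", False)
    Debug.Print Join(arr, ",")    'A,B,C
End Sub
```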
Comparison
This would only have real purpose if this method were roughly as fast as the more conventional Dictionary, so I did a small comparison. As far as my testing goes, there was no really noticeable timing difference (around 0 seconds), but since in essence the Evaluate is a CSE (array) formula, larger datasets will show noticeable timing differences above, let's say, 2000 rows of data.
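For reference, the kind of quick check behind that comparison can be sketched with Timer (a rough sketch only; Timer has limited resolution, which is why small datasets report near-zero times for both methods):

```vba
Sub CompareTimings()
    Dim t As Single

    t = Timer
    UniquesDictionary
    Debug.Print "Dictionary: "; Timer - t; " seconds"

    t = Timer
    UniquesEvaluate
    Debug.Print "Evaluate:   "; Timer - t; " seconds"
End Sub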
Hopefully this is useful to those working with smaller datasets.