Here's a solution for massive strings. I'm doing text searches on 4.5 MB strings and the other solutions grind to a halt. It takes advantage of the fact that Ruby's String#split is very efficient compared to repeated string comparisons.
def indices_of_matches(str, target)
  # The appended suffix stops split from silently dropping a trailing
  # empty field when the target occurs at the very end of the string.
  cuts = (str + (target.hash.to_s.gsub(target, ''))).split(target)[0..-2]
  indices = []
  loc = 0
  cuts.each do |cut|
    loc += cut.size      # loc is now the index of this match
    indices << loc
    loc += target.size   # skip past the match itself
  end
  indices
end
It's basically using the horsepower behind the split method, then using the lengths of the separated parts plus the length of the search target to work out each match location. I've gone from 30 seconds using various other methods to effectively instantaneous on extremely large strings.
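For contrast, one common way to collect match offsets is a regex scan like the sketch below (the method name is mine, and I can't say this is exactly what the slower solutions did):

def indices_by_scan(str, target)
  # Regexp.last_match is repopulated on each iteration that scan drives,
  # so .begin(0) gives the offset of the current match.
  str.enum_for(:scan, Regexp.new(Regexp.escape(target))).map { Regexp.last_match.begin(0) }
end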
I'm sure there's a better way to do it, but:
(str + (target.hash.to_s.gsub(target,'')))
adds something to the end of the string because split drops trailing empty fields, so a match at the very end would otherwise be lost. The gsub makes sure the "random" addition doesn't itself contain the target.
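As an aside, split also takes a second limit argument, and a negative limit keeps trailing empty fields, which (if I'm reading the docs right) would make the sentinel unnecessary. A sketch of the same method using that:

def indices_of_matches(str, target)
  # A negative limit tells split to keep trailing empty fields, so a
  # target at the end of the string still produces a cut; no sentinel needed.
  cuts = str.split(target, -1)[0..-2]
  indices = []
  loc = 0
  cuts.each do |cut|
    loc += cut.size
    indices << loc
    loc += target.size
  end
  indices
end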
indices_of_matches("a#asg#sdfg#d##","#")
=> [1, 5, 10, 12, 13]