If you're talking about two strings, you would have to create a matrix of each with respect to the other, e.g.
1) "abcd
"
2) "cdef
"
Matrix)
ac bc cc dc
ad bd cd dd
ae be ce de
af bf cf df
Then check for doubles that show up in diagonal patterns (i.e., cc and dd in the above example).
This means you will have to iterate through each string once for each char of the other string, so I believe this gives you O(n^2) time complexity. Each diagonal run of matches is a token that the strings share (and there could be more than one). This is not the same thing as the longest common substring, as @lexicore said.
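Here's a minimal sketch of that diagonal scan in Python (the function name `diagonal_matches` and the `min_len` cutoff are my own additions; single-char coincidences are ignored below that length):

    def diagonal_matches(s1, s2, min_len=2):
        """Walk the comparison matrix implicitly, collecting runs of
        matching characters along each diagonal (a simple dot-plot scan)."""
        matches = []
        # Each diagonal is identified by the index offset between s2 and s1.
        for offset in range(-len(s1) + 1, len(s2)):
            run = []
            for i in range(len(s1)):
                j = i + offset
                if 0 <= j < len(s2) and s1[i] == s2[j]:
                    run.append(s1[i])        # extend the current diagonal run
                else:
                    if len(run) >= min_len:  # a finished run = a shared token
                        matches.append("".join(run))
                    run = []
            if len(run) >= min_len:
                matches.append("".join(run))
        return matches

    print(diagonal_matches("abcd", "cdef"))  # ['cd']

The double loop over every (i, j) pair is exactly where the O(n^2) cost comes from.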
If they are really large strings, you probably wouldn't want to go through each char, but instead tokenize them (e.g. by splitting them on spaces) and create a sorted list (or hash table or something) for each, so each lookup costs O(log k) for k tokens. Sorting is O(k log k) and probing every token is another O(k log k), which is still far better than O(n^2) over individual characters when tokens are much fewer than characters.
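A quick sketch of that idea, assuming whitespace tokenization (the name `common_tokens` is mine; a hash set would work just as well as the binary search shown here):

    import bisect

    def common_tokens(s1, s2):
        """Split both strings into tokens, sort one side, then binary-search
        each token of the other side: roughly O(k log k) for k tokens."""
        sorted_tokens = sorted(set(s1.split()))
        hits = set()
        for tok in s2.split():
            i = bisect.bisect_left(sorted_tokens, tok)
            if i < len(sorted_tokens) and sorted_tokens[i] == tok:
                hits.add(tok)
        return sorted(hits)

    print(common_tokens("the quick brown fox", "a quick red fox"))
    # ['fox', 'quick']

Note the trade-off: tokenizing only finds whole shared tokens, not arbitrary shared substrings like the diagonal scan does.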