#include <stdio.h>

int scrambled( int a[], int b[], int len )
{
    int check[len];                     /* check[j] is set to 1 once b[j] has been matched */
    for ( int i = 0; i < len; i++ )
    {
        check[i] = 0;
    }
    for ( int i = 0; i < len; i++ )
    {
        int flag = 0;                   /* records whether a[i] found an unmatched partner in b */
        for ( int j = 0; j < len; j++ )
        {
            if ( b[j] == a[i] && check[j] != 1 && flag == 0 )
            {
                check[j] = 1;
                flag = 1;
            }
        }
        if (flag = 0)
        {
            return 0;
        }
        else
        {
            flag = 0;
        }
    }
    return 1;
}

int main( void )
{
    int a[3] = {10, 15, 20};
    int b[3] = {10, 20, 15};

    if( scrambled( a, b, 3 ) == 1 )
    {
        printf( "b is a scrambled version of a\n" );
    } else {
        printf( "b has different contents to a\n" );
    }

    return 0;
}

This is my program to check whether two arrays are scrambled versions of each other or contain different contents.

I'm not allowed to modify the main function, so I can only tweak the scrambled function at the top.

But for some reason, on some rare occasions my function reports that the arrays are scrambled versions of each other when it shouldn't, and I don't know how to fix that.

Jonathan Leffler
user3880587
  • `if (flag = 0)` is always false; do you mean `if (flag == 0)`? – mch Oct 25 '14 at 17:37 (a corrected sketch of `scrambled` follows after these comments)
  • 1
    Also, what about `{0,0,1}` and `{1,1,0}`. Anyway, the only good and fast algorithm is: 1. Sort both 2. Compare element by element Order: O(n * log n) Yours: O(n*n) – Deduplicator Oct 25 '14 at 17:38
  • As a side-note, consider following the C convention of 0 <==> false and anything else <==> true. – Deduplicator Oct 25 '14 at 17:45
  • Do both really need to be sorted to achieve the fastest solution? Disclaimer: I'm unsure. @Deduplicator – alk Oct 25 '14 at 17:45
  • Sort both with qsort, then just memcmp. – Charlie Burns Oct 25 '14 at 17:47
  • @alk: Otherwise you have a buggy algorithm or an O(n*n) one. Both bad. (At least I think that's the case without restrictions on the input.) – Deduplicator Oct 25 '14 at 17:47
  • 1
    @Deduplicator Neither of those solutions would be optimal. The optimal solution is in O(n), where you iterate through the first array, hashing all of its contents, and then iterate through the second, checking the hash table to match the contents of the 2nd array to those of the first. – fvgs Oct 25 '14 at 17:48
  • Related (not for the language, but for the problem): http://stackoverflow.com/q/7086234/694576 – alk Oct 25 '14 at 17:51
  • @Wolfram: That's still O(n*n), unless you can guarantee a good/perfect hash. – Deduplicator Oct 25 '14 at 17:55
  • Related (not for the language, but for the problem): http://stackoverflow.com/q/4153120/694576 – alk Oct 25 '14 at 17:56
  • @alk: Do not confuse expected and upper bound. – Deduplicator Oct 25 '14 at 17:59
  • 1
    @Deduplicator There would be no point in ever using hash tables if we assumed that the hash was inadequate. A hash table with a reasonable hash and sized accordingly to the scale of the input may be treated as having constant time insertions and lookups. Even if the hashing algorithm isn't absolutely perfect, it will still be a large improvement. i.e. in O(n). Which is to say, even a moderately imperfect hashing algorithm will result in better performance than then O(nlogn) solution. – fvgs Oct 25 '14 at 18:05
  • @Wolfram: Hash collisions (and DoS attacks using them) are a fact of life. If you say the expected time is faster, I concur. If you say the bound is lower, you are just wrong. – Deduplicator Oct 25 '14 at 18:10
  • 1
    @Deduplicator Absolutely, the worst-case bound for the hash solution will be in O(n^2) and is worse than the worst-case of the sorting solution. However, the sorting solution has an average case in O(nlogn) while the hashing solution has an average case in O(n). Given the nature of the original question, it seemed the optimal average-case would be of more use to the OP. But if we consider the entire breadth of potential applications of such an algorithm, then the preference may come down to the the optimal worst-case performance rather than the optimal average-case performance. – fvgs Oct 25 '14 at 18:20
  • If it's only for testing equivalence, and if the arrays are big and access is expensive, a multi-step approach might also be the way to go, for example starting with a Bloom filter: if that fails for one element, you're done in at most O(n). – alk Oct 25 '14 at 18:39
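
The comment from mch above points at the root cause: `if (flag = 0)` assigns zero to `flag`, so the condition is always false and the early `return 0` is unreachable, which is why the function can report "scrambled" for arrays with different contents. A minimal corrected sketch of `scrambled`, keeping the original pairing approach (only the comparison is fixed, the flag test is folded into the inner loop condition, and the dead `else` branch is dropped), might look like this:

int scrambled( int a[], int b[], int len )
{
    int check[len];                     /* check[j] == 1 once b[j] has been claimed */
    for ( int i = 0; i < len; i++ )
        check[i] = 0;

    for ( int i = 0; i < len; i++ )
    {
        int flag = 0;                   /* did a[i] find an unclaimed partner in b? */
        for ( int j = 0; j < len && flag == 0; j++ )
        {
            if ( b[j] == a[i] && check[j] != 1 )
            {
                check[j] = 1;           /* claim b[j] so it cannot be matched twice */
                flag = 1;
            }
        }
        if ( flag == 0 )                /* == here, not = */
            return 0;                   /* a[i] has no partner left in b */
    }
    return 1;                           /* every element of a was paired off */
}

With this version the sample data in `main` still prints "b is a scrambled version of a", while a pair such as `{0,0,1}` and `{1,1,0}` from the comments is rejected, because the second `0` in `a` finds no unclaimed `0` in `b`.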
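
For the sort-then-compare route suggested by Deduplicator and Charlie Burns, one possible sketch is below. The helper names (`cmp_int`, `scrambled_sorted`) and the decision to sort copies rather than the caller's arrays (since `main` must stay untouched) are illustrative assumptions, not anything from the original post; it assumes `len > 0`.

#include <stdlib.h>
#include <string.h>

static int cmp_int( const void *pa, const void *pb )
{
    int x = *(const int *)pa;
    int y = *(const int *)pb;
    return (x > y) - (x < y);           /* avoids the overflow risk of x - y */
}

int scrambled_sorted( const int a[], const int b[], int len )
{
    int result = 0;
    int *ca = malloc( len * sizeof *ca );
    int *cb = malloc( len * sizeof *cb );
    if ( ca != NULL && cb != NULL )
    {
        memcpy( ca, a, len * sizeof *ca );
        memcpy( cb, b, len * sizeof *cb );
        qsort( ca, len, sizeof *ca, cmp_int );
        qsort( cb, len, sizeof *cb, cmp_int );
        result = ( memcmp( ca, cb, len * sizeof *ca ) == 0 );
    }
    free( ca );
    free( cb );
    return result;
}

This runs in O(n log n) overall and, unlike hashing, has no collision-driven worst case; the cost is the two temporary copies.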
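
fvgs's O(n)-average idea needs a hash table keyed on the element value with an occurrence count, and C has no standard one, so the sketch below hand-rolls a tiny open-addressing table. Everything in it (the `entry` struct, the multiplicative `probe` hash, the name `scrambled_hash`, returning -1 on allocation failure) is an illustrative assumption rather than anything from the thread.

#include <stdlib.h>

struct entry { int key; int count; int used; };

static size_t probe( int key, size_t mask )
{
    unsigned int h = (unsigned int)key * 2654435761u;   /* crude hash, fine for a sketch */
    return (size_t)h & mask;
}

int scrambled_hash( const int a[], const int b[], int len )
{
    size_t size = 1;
    while ( size < (size_t)len * 2 )    /* keep the load factor below 0.5 */
        size <<= 1;
    size_t mask = size - 1;
    struct entry *tab = calloc( size, sizeof *tab );
    if ( tab == NULL )
        return -1;                      /* allocation failure */

    for ( int i = 0; i < len; i++ )     /* count the elements of a */
    {
        size_t p = probe( a[i], mask );
        while ( tab[p].used && tab[p].key != a[i] )
            p = (p + 1) & mask;         /* linear probing */
        tab[p].used = 1;
        tab[p].key = a[i];
        tab[p].count++;
    }
    for ( int i = 0; i < len; i++ )     /* subtract the elements of b */
    {
        size_t p = probe( b[i], mask );
        while ( tab[p].used && tab[p].key != b[i] )
            p = (p + 1) & mask;
        if ( !tab[p].used || tab[p].count == 0 )
        {
            free( tab );
            return 0;                   /* b contains something a does not */
        }
        tab[p].count--;
    }
    free( tab );
    return 1;                           /* same multiset of values */
}

Because both arrays have the same length, every successful decrement pairs one element of b with one of a, so reaching the end means the two multisets are equal. On average this is O(n); as the comments point out, heavily colliding or adversarial inputs can push it toward O(n*n).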

0 Answers