While it has been some time since I last worked with OpenMP, your problem most likely comes down to overhead: the work done by each thread is rather small. You have each thread set up to do 1/6 of the mallocs and 1/6 of the zeroing. For a problem like this you should consider how large seq1 and seq2 actually are, and how much of the work is really being executed in parallel. For example, memory allocation through the standard malloc is likely a point of contention; see for instance this question for a more detailed analysis. If the bulk of the work is being done inside malloc, and therefore not to any large extent in parallel, then you won't get much of a speedup in exchange for the overhead of thread initialization. If parallel allocation is truly needed, you may get improvements from using a different allocator. Zeroing regions of memory can be split among the threads, but it is almost certainly very fast compared to the allocation. There may also be a cache-coherency cost to setting scoreMatrix[i] on line 229, since that cache line is shared among the threads.
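One way to sidestep the malloc contention entirely is to cut the number of allocations down to two, regardless of matrix size: one for the row-pointer array and one for all the cells. This is a sketch, not your code; I'm assuming scoreMatrix is an int matrix and the dimensions below are placeholders. calloc also hands back zeroed memory, so the zeroing loop disappears as well:

```c
#include <stdlib.h>

/* Allocate a rows x cols int matrix using one malloc for the row
 * pointers and one calloc for all cells, instead of one malloc per
 * row. Fewer allocator calls means less contention between threads
 * and less overhead even single-threaded; calloc zeroes the cells. */
static int **alloc_matrix(size_t rows, size_t cols)
{
    int **m = malloc(rows * sizeof *m);
    if (!m) return NULL;
    int *cells = calloc(rows * cols, sizeof *cells);
    if (!cells) { free(m); return NULL; }
    for (size_t i = 0; i < rows; i++)
        m[i] = cells + i * cols;   /* each row points into the one block */
    return m;
}

static void free_matrix(int **m)
{
    if (m) { free(m[0]); free(m); }  /* m[0] is the start of the cell block */
}
```

A side benefit: the cells are contiguous, which tends to be friendlier to the cache when you later fill the matrix.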
With OpenMP and MPI it is important to remember that there is overhead in simply starting the parallel parts of a computation, so blocks without much work, even if they are highly parallel, may not be worth parallelizing. When you get to the actual computations on the array you are much more likely to see a benefit.
For zeroing memory in general, your best easy option is likely memset, though your compiler may already be optimizing lines 230 & 231 into something similar.
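For instance, the inner zeroing loop can become one memset per row. This is a sketch against assumed names: len1/len2 stand in for your sequence lengths, and I'm assuming scoreMatrix rows hold ints.

```c
#include <string.h>

/* Zero each row of a rows x cols int matrix with one memset per row.
 * memset is typically compiled to an optimized (often vectorized)
 * routine, so this usually matches or beats an element-by-element
 * loop without any threading at all. */
static void zero_rows(int **m, size_t rows, size_t cols)
{
    for (size_t i = 0; i < rows; i++)
        memset(m[i], 0, cols * sizeof(int));
}
```

In your code that would be something like `zero_rows(scoreMatrix, len1 + 1, len2 + 1)` after the rows are allocated, replacing the nested assignment loops.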