I am assuming that by "redundant" you mean that instead of a loop you manually write out all function calls.
In terms of efficiency: yes, the redundancy is inefficient, but not for the reason you might think. The cost is in maintenance effort and in code/file size bloat.
For example, you have 3 function calls right now with 3 integers. Later you might decide you need to instead call it 20 times, and with the redundant method this involves copying the line 17 extra times with new integers.
Now you decide that the integers you are currently passing in need to be changed. As a result, you have to update all 20 integers.
Next you decide you need to insert some logic between each function call, so you write it once and copy+paste it 20 times.
This scales extremely poorly, increases the size of the file (byte size), and is error-prone. If it turns out the logic you added between the functions had a bug, you now have the same bug in 20 different places to fix.
Any speedups you would gain will likely be negated down the line because your code has become a horrible, un-maintainable monster.
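To make the contrast concrete, here is a minimal sketch. The method `process` is a hypothetical stand-in for whatever call you are repeating; the point is only that the loop form turns "edit 20 lines" into "edit 1 line":

```java
import java.util.ArrayList;
import java.util.List;

public class LoopVsRepetition {
    // Records each argument so we can see what was called.
    static List<Integer> seen = new ArrayList<>();

    // Hypothetical placeholder for the real repeated call.
    static void process(int value) {
        seen.add(value);
    }

    public static void main(String[] args) {
        // Redundant form: every change must be repeated on each line.
        // process(1);
        // process(2);
        // process(3);

        // Loop form: changing the count or the arguments is a one-line edit.
        for (int i = 1; i <= 3; i++) {
            process(i);
        }
        System.out.println(seen.size()); // prints 3
    }
}
```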
What you might want to look into instead is a more limited form of this called loop unrolling. This can be done both by hand and, as pointed out in the comments, by the JIT compiler if it determines that it is actually worth it. It targets the issue you raise: the for loop's counter increment and logical comparison introduce some overhead on every iteration. The idea is to perform as many computations as is reasonable in each iteration, and consequently run fewer iterations.
Simple Example:
for (int i = 1; i <= 300; i += 3) {
    call(i);
    // some logic
    call(i + 1);
    // some logic
    call(i + 2);
    // some logic
}
Now the loop body makes 3 function calls and the loop runs 100 times, so you still get 300 calls in total, but with only a third of the counter increments and comparisons. This is potentially more efficient than making 1 function call per iteration, and it is far more scalable than writing out 300 calls by hand.
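One practical wrinkle worth knowing about: hand-unrolling only works cleanly when the total count is a multiple of the unroll factor. The usual fix is a second "remainder" loop for the leftovers. Here is a hedged sketch, where `call` and `run` are hypothetical names and the real work is replaced by a counter:

```java
public class UnrollWithRemainder {
    // Counts invocations so the example is checkable.
    static int calls = 0;

    // Hypothetical placeholder for the real work.
    static void call(int i) {
        calls++;
    }

    // Calls call(1) .. call(n), unrolled by a factor of 4.
    // n does NOT need to be a multiple of 4.
    static void run(int n) {
        int i = 1;
        // Main unrolled loop: 4 calls per iteration.
        for (; i + 3 <= n; i += 4) {
            call(i);
            call(i + 1);
            call(i + 2);
            call(i + 3);
        }
        // Remainder loop: handles the last n % 4 calls, if any.
        for (; i <= n; i++) {
            call(i);
        }
    }

    public static void main(String[] args) {
        run(302); // 75 unrolled iterations + 2 remainder iterations
        System.out.println(calls); // prints 302
    }
}
```

That said, before unrolling by hand, it is usually worth measuring first: the JIT already applies this transformation where it judges it profitable, and a hand-unrolled loop can be harder for it to optimize further.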