What you need is dynamic programming: memoize the values of f for the combinations of arguments that have already been calculated. It can also be implemented without recursion, bottom-up, like this:
int f(int n, int a, int x)
{
    // q[j][i] = number of ways to write j as an ordered sum of i terms,
    // each term between 0 and x inclusive
    static int q[1000][50]; // to simplify and avoid dynamic allocation, assume a < 50 and n < 1000
                            // (static so the ~200 KB table does not live on the stack)
    q[0][0] = 1;            // the empty sum yields 0 in exactly one way
    for (int i = 1; i < 1000; ++i)
        q[i][0] = 0;        // a positive total cannot be reached with 0 terms
    for (int i = 1; i <= a; ++i)
    {
        for (int j = 0; j <= n; ++j)
        {
            int t = 0;
            for (int l = 0; l <= j && l <= x; ++l) // l is the value of the i-th term
                t += q[j - l][i - 1];
            q[j][i] = t;
        }
    }
    return q[n][a];
}
This is just a demonstration of the basic technique. It can be optimized once more: keep a running prefix sum in t and eliminate the loop over l. Also, you don't have to store the whole table q; only two layers are needed, which reduces the memory usage to O(n). The result looks like this:
int f(int n, int a, int x)
{
    static int q[1000][2]; // to simplify and avoid dynamic allocation, assume n < 1000
    q[0][0] = 1;
    for (int i = 1; i < 1000; ++i)
        q[i][0] = 0;
    int current = 1; // index of the layer being written; 1 - current is the previous layer
    for (int i = 1; i <= a; ++i)
    {
        int t = 0; // running window sum: q[j][prev] + q[j-1][prev] + ... + q[j-x][prev]
        for (int j = 0; j <= n; ++j)
        {
            t += q[j][1 - current];
            if (j > x)
                t -= q[j - x - 1][1 - current]; // value that slid out of the window
            q[j][current] = t;
        }
        current = 1 - current;
    }
    return q[n][1 - current]; // the last layer written, since current was just flipped
}
The first version takes O(a*n*x) time; with the prefix-sum trick the final version takes O(a*n) time and O(n) memory.
PS: Note that the answer can be a huge number that overflows any native integer type, so you may need a big-integer type, or to compute the answer modulo some value if the problem allows it.