I'm writing a game of Pong for an embedded ARM microcontroller (the board is a Keil MCBSTM32C, for anyone interested) and I'm currently trying to implement an AI for player 2. The basic functionality was easy: I simply mapped the player 2 paddle's position to the ball's position. That's obviously not ideal, though, because I'd like the AI to make mistakes so the human player can win.
So I tried to add a randomised amount of error to the AI's movement. However, the AI paddle now "jitters" around the screen a lot, and the amount of jitter is greater than the amount of error. I think I understand why this happens (the randomised difficulty offset changes every time the paddle is redrawn), but I'm not entirely sure how to fix it. I also tried making the paddle move in the direction of the ball rather than mapping it directly onto the ball, but that doesn't seem to have helped much (I've added a note after the code below on why I suspect that is).
The code for drawing the player paddle is:
void draw_player2(enum pMode mode) {
    int adcValue;
    static int lastValue = 0;
    int direction;

    switch (mode)
    {
        default:
            break;

        case DUAL:
            adcValue = (ADC1->DR & 0x0FFF) >> 4;   /* AD value (8 bit) */
            if (lastValue != adcValue) {
                thisGame.p2.y = (unsigned int) adcValue * (HEIGHT-BAR_H)/256;
                LCDupdatePaddle(thisGame.p2);
                lastValue = adcValue;
            }
            break;

        case AI:
            direction = thisGame.ball.y - lastValue;   /* how far the ball has moved since the last redraw */
            adcValue = thisGame.ball.y;                /* track the ball instead of the ADC */
            if (lastValue != adcValue) {
                thisGame.p2.y = (lastValue + direction + selectDiffMod()) * (HEIGHT-BAR_H)/256;
                LCDupdatePaddle(thisGame.p2);
                lastValue = adcValue;
            }
            break;
    }
}
(HEIGHT=240 and BAR_H=48, btw)
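One thing I noticed while writing this up: in the AI case, direction = thisGame.ball.y - lastValue, so lastValue + direction is just thisGame.ball.y again. The target therefore works out to (thisGame.ball.y + selectDiffMod()) * (HEIGHT-BAR_H)/256, i.e. the paddle is still mapped straight onto the ball plus whatever offset happens to be rolled on that particular redraw.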
The code for selectDiffMod() is:
int selectDiffMod() {
    int r = 0;   /* default to no offset if the delay doesn't match any of the speeds below */

    if (thisGame.delay == T_SLOW) {
        r = rand() % 100;
    }
    if (thisGame.delay == T_MEDIUM) {
        r = rand() % 50;
    }
    if (thisGame.delay == T_FAST) {
        r = rand() % 20;
    }
    return r;
}
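To put some numbers on it: at T_SLOW the offset can jump anywhere within 0..99 between two consecutive redraws, and after the *(HEIGHT-BAR_H)/256 scaling that's a swing of up to roughly 74 pixels even if the ball has barely moved, which I assume is why the jitter looks so much bigger than the error I intended.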
My current line of thinking is to generate the difficulty modifier/offset less often and reuse it between redraws, but I'm not sure that would actually solve the problem, so I'm wondering whether anyone has a better solution?
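For reference, this is roughly what I mean by generating it less often (an untested sketch; getDiffMod and REROLL_FRAMES are just names I've made up here):

#define REROLL_FRAMES 30                      /* made-up number: redraws between re-rolls */

int getDiffMod(void) {
    static int cachedMod = 0;
    static int framesSinceRoll = 0;

    if (framesSinceRoll == 0) {
        cachedMod = selectDiffMod();          /* only re-roll the error offset occasionally */
    }
    framesSinceRoll = (framesSinceRoll + 1) % REROLL_FRAMES;
    return cachedMod;                         /* reuse the same offset on the other redraws */
}

The AI case in draw_player2() would then call getDiffMod() in place of selectDiffMod().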