You should create a file for every blocked IP. That way you can block the visitor through .htaccess
as follows:
# if the ip has been banned, deny everything except index.php
# (ErrorDocument 403 / serves the index page, so the visitor still gets an answer)
ErrorDocument 403 /
RewriteCond %{REQUEST_URI} !^/index\.php$
RewriteCond /usr/www/firewall/%{REMOTE_ADDR} -f
RewriteRule . - [F,L]
As you can see, it only allows access to index.php. There you can do a simple file_exists() check in the first line, before any heavy DB requests are made, and present an IP-unlocking captcha to avoid permanently blocking false positives. That gives a better user experience than a simple hardware firewall, which returns no information and has no unlocking mechanism. Of course you could also serve a plain HTML text file (with a PHP file as the form's target) to avoid invoking the PHP parser at all.
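A minimal sketch of that check at the top of index.php (the captcha include is a hypothetical placeholder):
<?php
// first lines of index.php, before any database work
$firewall_file = '/usr/www/firewall/' . $_SERVER['REMOTE_ADDR'];
if (file_exists($firewall_file)) {
    // blocked: show the unlocking captcha instead of the real page
    include 'unlock_captcha.php'; // hypothetical captcha page that removes the file on success
    exit;
}
// ... normal, possibly expensive, request handling follows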
Regarding DoS, I don't think you should rely on IP addresses alone, as it would result in too many false positives. Alternatively, add a second level that whitelists proxy IPs, for example IPs that have been unblocked multiple times. Some ideas to block unwanted requests:
- is it a human or a crawler? (HTTP_USER_AGENT)
- if a crawler, does it respect robots.txt?
- if a human, is he accessing links that humans never visit (like links made invisible through CSS, moved out of the visible range, or hidden form fields ...)? See the honeypot sketch after this list.
- if a crawler, what about a whitelist?
- if a human, is he opening links like a human would? (example: in the footer of stackoverflow you will find tour help blog chat data legal privacy policy work here advertising info mobile contact us feedback. No human will open 5 or more of them, I think, but a bad crawler could: block its IP.)
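Such a honeypot could look like this (the file names and the response are made up for illustration; the URL should also be disallowed in robots.txt so well-behaved crawlers never trip it):
<?php
// honeypot.php - linked from a CSS-hidden anchor that humans never see,
// e.g. <a href="/honeypot.php" style="position:absolute;left:-9999px">archive</a>
// anything requesting it is very likely a misbehaving crawler

// remember that this IP followed an invisible link
touch('honeypot/' . $_SERVER['REMOTE_ADDR']); // hypothetical hit directory

// or block immediately via the firewall directory from above:
// touch('/usr/www/firewall/' . $_SERVER['REMOTE_ADDR']);

http_response_code(404); // pretend the page does not exist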
If you really want to rely on requests per IP per minute, I suggest not using LOCK_EX with only one file, as that creates a bottleneck (as long as the lock exists, all other requests have to wait). You need a fallback file for as long as a lock exists. Example:
$i = 0;
$ip_dir = 'ipcheck/';
if (!file_exists($ip_dir) || !is_writable($ip_dir)) {
    exit('ip cache not writable!');
}
$ip_file = $ip_dir . $_SERVER['REMOTE_ADDR'];
// try the suffixes _0, _1, ... until we get a non-blocking exclusive lock
while (true) {
    $fp = fopen($ip_file . '_' . $i, 'a');
    if ($fp === false) {
        exit('ip cache file not writable!');
    }
    if (flock($fp, LOCK_EX | LOCK_NB, $wouldblock)) {
        break;
    }
    fclose($fp); // locked by another request, fall back to the next file
    $i++;
}
// by now we have an exclusive and race condition safe lock
fwrite($fp, time() . PHP_EOL);
flock($fp, LOCK_UN);
fclose($fp);
This will result in a file called 12.34.56.78_0, and if it hits a bottleneck it will create a new file called 12.34.56.78_1. Finally, you only need to merge those files (respect the locks!) and check for too many requests in a given time period.
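A sketch of that merge step (the time window and the request limit are arbitrary values):
$ip_dir   = 'ipcheck/';
$window   = 60;  // look at the last 60 seconds (assumption)
$max_hits = 100; // allowed requests per window (assumption)
$now  = time();
$hits = 0;
// merge all fallback files for this IP: 12.34.56.78_0, 12.34.56.78_1, ...
foreach (glob($ip_dir . $_SERVER['REMOTE_ADDR'] . '_*') as $file) {
    if (!is_file($file)) {
        continue; // skip anything that is not a plain log file
    }
    $fp = fopen($file, 'r');
    if ($fp === false) {
        continue;
    }
    flock($fp, LOCK_SH); // respect the locks: wait for writers to finish
    while (($line = fgets($fp)) !== false) {
        if ((int)$line >= $now - $window) {
            $hits++;
        }
    }
    flock($fp, LOCK_UN);
    fclose($fp);
}
if ($hits > $max_hits) {
    touch('firewall/' . $_SERVER['REMOTE_ADDR']); // ban, see above
}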
But now you are facing the next problem: you need to start a check for every request, which is not really a good idea. A simple solution would be to use mt_rand(0, 10) == 0 before starting a check. Another solution is to check the filesize(), so we do not need to open the file; this works because the file size grows with every request. Or you check the filemtime() and see whether the last file change happened in the same second or only one second ago. P.S. Both functions are equally fast.
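Combined, those cheap pre-checks could look like this (the size threshold is an assumption):
$ip_file = 'ipcheck/' . $_SERVER['REMOTE_ADDR'] . '_0';
// only run the expensive merge check for roughly 1 in 11 requests ...
$sampled = (mt_rand(0, 10) == 0);
// ... or when the log has grown suspiciously large ...
$large = file_exists($ip_file) && filesize($ip_file) > 10000;
// ... or when the last request was at most one second ago
$rapid = file_exists($ip_file) && filemtime($ip_file) + 1 >= time();
if ($sampled || $large || $rapid) {
    // run the full merge check from the example above
}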
And that brings me to my final suggestion: use only touch() and filemtime():
$ip_dir  = 'ipcheck/';
$ip_file = $ip_dir . $_SERVER['REMOTE_ADDR'];
// check if the last request was made within the last second
if (file_exists($ip_file) && filemtime($ip_file) + 1 >= time()) {
    // the suffix avoids a name clash between the marker file and the folder
    $dos_dir = $ip_file . '_dos/';
    if (!file_exists($dos_dir)) {
        mkdir($dos_dir);
    }
    // log the suspicious request as an empty file named by its microtime
    touch($dos_dir . microtime(true));
}
touch($ip_file);
Now you have a folder for every IP that could be a DoS attack, containing the microtime of each suspicious request, and if you think it contains too many of those requests you could block the IP by using touch('firewall/' . $_SERVER['REMOTE_ADDR']). Of course you should periodically clean up the whole thing.
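The blocking decision and the cleanup could look roughly like this (the limit and the maximum age are arbitrary):
$dos_dir = 'ipcheck/' . $_SERVER['REMOTE_ADDR'] . '_dos/';
// block the IP once it has collected too many suspicious requests
if (is_dir($dos_dir) && count(glob($dos_dir . '*')) > 50) { // limit is an assumption
    touch('firewall/' . $_SERVER['REMOTE_ADDR']);
}
// periodic cleanup (e.g. from a cron job): drop entries older than an hour
foreach (array_merge(glob('ipcheck/*'), glob('ipcheck/*_dos/*')) as $entry) {
    if (is_file($entry) && filemtime($entry) < time() - 3600) {
        unlink($entry);
    }
}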
My experiences (in German) using such a firewall have been very good.