A simple and effective method to repel a 50 Mbit HTTP DDoS with nginx and iptables
Hello, Habr
I would like to offer a simple and at the same time effective method of dealing with HTTP DDoS. A server with Xeon 2.5GHz / 4Gb RAM / SAS can handle an attack of about 300 Mbit/s (the value was obtained by extrapolation).

The approach

Fine-tune the system parameters so that the server can withstand more connections from the botnet than the channel to the server can let through.

Applications

Fighting HTTP DDoS on a dedicated server or a VDS. The maximum power of attack that can be repelled is limited by the physical capabilities of the server and the channel bandwidth.

SEO under DDoS

Your site keeps being indexed correctly during the attack, so it retains its positions in the search engines. This is especially important for sites with large SEO budgets.

Cost and effectiveness

During the attack you may have to disable some services on your site, expand the channel bandwidth, or move the website to a more powerful server. Cost-effectiveness comes from maximizing scalability: the approach allows a rapid build-up of hardware resources as the power of the attack grows.

Description of the method

I will describe the application of the method and the results obtained, based on a real case of repelling an HTTP DDoS attack.

I had two servers at my disposal, Xeon 2.5GHz / 4Gb RAM / SAS: the first one ran PHP, the second one the database. All the tuning was done on the first server. The OS was Debian 4, the site had around 60k visitors, and nginx served as the frontend. The kernel was configured with the defaults. IPs were banned via iptables, which in this case coped with a botnet of up to 7K hosts.
For a more powerful attack you would have to install ipset.

History of the anti-DDoS fight


First day. Network stack overflow

The IP address under DoS stops responding to any requests (ping, http, ssh), while the server's other IP addresses continue to function normally. If the server has several IPs, only the site under DoS goes down; the other sites on the server and ssh access are not affected.
Out of the box, Debian and other operating systems cannot handle the huge number of connections generated by a botnet. You have to change the kernel configuration to harden the TCP/IP stack. I will not dwell on kernel tuning here and will only give an example of such a configuration.

net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.eth0.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.core.rmem_max = 996777216
net.core.wmem_max = 996777216
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_mem = 786432 1048576 996777216
net.ipv4.tcp_wmem = 4096 87380 4194304
net.ipv4.tcp_max_orphans = 2255360
net.core.netdev_max_backlog = 10000
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 494967295
kernel.shmall = 268435456
net.core.somaxconn = 16096

You can read about these parameters in the documentation, for example debian.telenet.ru/doc/sysctl.conf, or better yet, search google.com for recent articles on the topic.
Carefully apply the changes to the kernel configuration and restart the server...
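
A minimal sketch of applying the new values without waiting for a reboot, assuming the settings above were added to /etc/sysctl.conf:

# reload kernel parameters from /etc/sysctl.conf
sysctl -p /etc/sysctl.conf

# spot-check that a couple of the values actually took effect
sysctl net.core.somaxconn net.ipv4.tcp_max_syn_backlog
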
And so, our system can now withstand the onslaught of the bots, but it is far too early to celebrate. Because of the enormous number of connections, the PHP and database processes completely ate up the memory and CPU, and the load average climbed above 100.
We need to cut off the spurious connections.

Disadvantages of searching for bots with the netstat command

The anti-DDoS administrator to whom I first took the problem proposed searching for bots with the netstat command. While applying this method I noticed several significant drawbacks (a typical command of this kind is sketched after the list):
1. Building the blacklist is time-consuming, which does not allow us to update it frequently
2. An effective search for bots is only possible with the web server stopped, during which time the site is unavailable to clients and there is a risk of incorrect indexing of the site by search engines
3. The IPs of search engine crawlers can end up in the blacklist, which is unacceptable
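
For reference, such a netstat-based search usually boils down to a one-liner of the following kind (a sketch of the general technique, not the administrator's exact command):

# list current connections, count them per remote IP and show the most active ones
netstat -ntu | awk 'NR > 2 {print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr | head -n 50

The drawbacks listed above remain: the snapshot is momentary, and a high connection count alone does not tell an attacking bot apart from a search engine crawler.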

Realizing the inefficiency of this method, I set out to create a method of searching for and banning bots that would
1. ensure constant, stable operation of the web server (website)
2. minimize the probability of search engine crawlers ending up in the blacklist

Second day. Capabilities of the server hardware + nginx

A Xeon 2.5GHz / 4Gb RAM / SAS server is being DoS-ed with the request GET / HTTP/1.1.
  1. Experiment A. The web server (in this case nginx) is stopped
    Inbound traffic 6085.2 kbit/s
    Outbound traffic 5342.1 kbit/s
  2. Experiment B. nginx returns an empty response (return 444;)
    Inbound traffic 56 Mbit/s
    Outbound traffic 54 Mbit/s
  3. Experiment C. nginx returns an HTML page of about 2 KB, a page with a short message like "We apologize for the temporary downtime of the site"
    Inbound traffic 57 Mbit/s
    Outbound traffic 353 Mbit/s

<...> *

Based on the experiments, the following conclusions can be drawn:

a) With sufficient channel capacity and no constraints on the inbound/outbound traffic ratio, you could abandon filtering altogether.
Your site would remain available to clients at the cost of enormous parasitic traffic.
Abandoning filtering completely is still a careless decision: the attackers can increase the power of the DoS to the point where even a gigabit channel goes down.

b) If we banned every bot, the parasitic traffic from the botnet would amount to a mere 5 Mbit/s. Banning every bot is not feasible, however: it would take too many resources, and on top of that the risk of banning search engine bots is high.

Note also that in the last experiment outbound traffic exceeded 100 Mbit/s. On a server connected through a 100 Mbit/s port it will therefore be very difficult to get in over ssh, because the channel is fully saturated. To avoid this trouble, I recommend configuring nginx to serve an empty page or return 444 until the bot filtering is fully set up.

Searching for bots with nginx

In this case the server is attacked with the request «GET / HTTP/1.1».
We make the assumption that legitimate clients make no more than 2 simultaneous requests to the home page. Clients that open more than 3 simultaneous connections are considered attacking bots, and we ban their IP addresses on the firewall.

The assumption was confirmed experimentally. An analysis of a day's worth of HTTP request logs from 120,000 IP addresses showed that only 19 IPs made more than 2 simultaneous requests.
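
The article does not show how that log analysis was done. One rough way to approximate it, assuming nginx's standard combined access log format and treating requests arriving within the same second as "simultaneous" (the path /var/log/nginx/access.log is just an example), is:

# print IP and timestamp (one-second precision) for every request to "/",
# then count repetitions and show the IPs with the most same-second requests
awk '$7 == "/" {print $1, $4}' /var/log/nginx/access.log | sort | uniq -c | sort -nr | head -n 20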

To implement the bot search we create special handling for requests of the form «GET / HTTP/1.1» in nginx.
error_log /var/log/nginx/error.log;
<...>
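# note: limit_conn below refers to a connection zone named "one"; the omitted
# part of the config is assumed to define it, e.g. limit_zone one $binary_remote_addr 10m;
# (limit_conn_zone $binary_remote_addr zone=one:10m; in modern nginx)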
location = / {
    limit_conn one 3;
    root /home/www/site.ru;
}

An IP address from which more than 3 simultaneous connections were opened is recorded in error.log with the message «limiting connections by zone». From this error log we can build a blacklist of the attacking botnet's IPs.

Filtering bots in iptables

An important note: iptables is not suited to filtering a large number of addresses. When the number of chains in iptables exceeds 2K, the ksoftirqd process starts consuming 100% CPU, which drives the server load to prohibitive levels. The problem is solved by installing ipset or by reducing the number of iptables rules.
In this case the installation of ipset was kept in reserve for an emergency: the server had no KVM connected, and rebuilding the kernel was risky.
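
For reference, a minimal sketch of the ipset variant mentioned above (shown in current ipset/iptables syntax; Debian 4-era tools used the older "ipset -N blacklist iphash" form, so treat this as illustrative), using the same port 1980 as the ban script below:

# create a hash-based set of banned IPs; a single iptables rule matches the whole
# set, so the rule count no longer grows with the size of the blacklist
ipset create blacklist hash:ip
iptables -A INPUT -p tcp --dport 1980 -m set --match-set blacklist src -j DROP

# banning one more bot is then a single cheap operation
ipset add blacklist 203.0.113.10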

Let us proceed to building the blacklist. We will ban only the most aggressive bots, so as not to overload iptables.

# search for bots
cat /var/log/nginx/error.log | grep "limiting connections by zone" | grep 'request: "GET / HTTP/1.1' | awk '{print $12}' | awk -F"," '{print $1}' | sort | uniq -c | sort -nr > /tmp/botnet.blacklist
# clear the ban script
cat /dev/null > /tmp/iptables_ban.sh
# create a DROP rule for the 50 most aggressive bots
awk '{print "iptables -A INPUT -p tcp --dport 1980 -s " $2 " -j DROP"}' /tmp/botnet.blacklist | head -n 50 >> /tmp/iptables_ban.sh
# load the blacklist
bash /tmp/iptables_ban.sh
# rotate the log
cat /dev/null > /home/www/nginx_log/error.log
[ ! -f /var/run/nginx.pid ] || kill -USR1 `cat /var/run/nginx.pid`

We add the script to cron with a frequency of a few minutes. Choose the frequency empirically; I settled on every 5 minutes.

*/5 * * * * /root/script/ban.sh

As a result, iptables is regularly extended with newly detected bots.
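
Since iptables starts to choke at around 2K rules (see above), it is worth keeping an eye on how many DROP rules have accumulated; a minimal check:

# count how many DROP rules are currently sitting in the INPUT chain
iptables -L INPUT -n | grep -c DROP

Once the count approaches 2K, it is time to switch to ipset.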

How the filtering works


Third day. Summary


This method provided clients with stable access to the site. Correct indexing by the search engines was confirmed: the site kept its positions in the search results. The server load stayed within a reasonable load average of 6-7. Outgoing traffic from the server did not exceed 100 Mbit/s. The capabilities of iptables proved quite sufficient to repel the attack of a botnet of about 7K hosts.

DDoS is like a natural disaster, and it is impossible to avoid damage entirely.
Some of your clients will go to competitors while your service is down.
You will have to incur some costs: overtime for programmers and administrators, or the purchase of additional hardware.
If your resource is actively promoted in the search engines (Yandex, Google), the risk of incorrect indexing and, as a consequence, of losing positions in the search results is critical.
The main objective is to minimize the damage from the DDoS.

In my case the DDoS attack ceased the day after filtering was launched: the customer of the DoS was not willing to spend more money on scaling up the attack.

In most cases DDoS is a weapon of online competition. Clients can instantly switch to your competitors if your resource does not work properly.

I believe that the fight against DDoS lies not in banning bots, but in creating conditions in which your total damage from the attack is comparable to the cost of mounting it. If the customer has to spend, say, 50,000 rubles to cause you 50,000 rubles' worth of harm, then organizing such an attack is not economically worthwhile for competitors.

The method described in this article is not a panacea; it is only one part of a package of measures for repelling DDoS. A development plan for a large service should take these risks into account and provide measures to mitigate the negative effects of attacks.

I hope my article will be useful to the community of web application developers and administrators.

___
* I removed the paragraph about 300 Mbit/s from the text, as it rightly drew criticism.

"More than 300 Mbps, we" upremsya "within ... "- HDD who gives true for video / audio, that is" heavy "files. For HTML files, this is not true.

The text of the removed paragraph:
«From the results of this experiment it is clear that the server can withstand a growth of the attack to roughly 300 Mbit/s. Above 300 Mbit/s we would hit the limit of random reads on the SAS disks. So we have a good margin of safety and a high likelihood of effectively repelling the attack while preserving our clients' access to the web services.»