Website Security – 101

Website security is a big topic of growing importance. We monitor all the sites we manage for intrusion and hacking attempts. Most of these attacks come from bad guys attempting to take over a website for spamming purposes. They will either add hidden links and text to a site or plant code that can be triggered later to convert the web server into a networked component of a zombie system. Computers taken over in this way, hence the name "zombies", are then used to participate in a DoS, or denial-of-service, attack.

Most websites receive tens if not hundreds of probes per day. The image below shows attempted logins on one of our blog sites over a 24-hour period, listing the IP address of each person trying to log on to the site. Who are these people? We can use a reverse IP lookup service to find out where they are. For example, the first entry, 188.143.232.45, is located in the Russian Federation.

[Image: logon log]
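Summarising a log like this takes only a few lines of Python. The log format below is invented for illustration (timestamp, source IP, attempted username), but counting attempts per IP is a quick way to spot the noisiest probes:

```python
from collections import Counter

# Hypothetical failed-login log lines: "timestamp IP username".
log_lines = [
    "2013-09-01T02:14:07 188.143.232.45 admin",
    "2013-09-01T02:14:31 188.143.232.45 admin",
    "2013-09-01T03:02:19 91.207.7.18 root",
    "2013-09-01T04:55:40 188.143.232.45 administrator",
]

# Count failed attempts per source IP.
attempts = Counter(line.split()[1] for line in log_lines)

for ip, count in attempts.most_common():
    print(ip, count)
```

The most persistent IPs bubble to the top, and those are the addresses worth feeding to a reverse IP lookup service or a firewall block list.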


A DoS attack is the next level up the spam ladder: a coordinated attack on a site for political or financial reasons.

The Economist published a timely article on DoS attacks.

Key parts are below:

ON AUGUST 25th a four-hour long denial-of-service (DoS) attack crippled the China Internet Network Information Centre, the group that handles the conversion of readable domain names (like cnnic.cn) into the numeric addresses that underpin the internet. The centre said it was the “largest ever” such attack, apologised and vowed to beef up its infrastructure. The attackers prevented reliable access to the web, e-mail and other internet services. Traffic levels reportedly fell by as much as 32% below normal, according to CloudFlare, an internet-services firm.

But how are such attacks carried out?

A website is technically a “service”, a software-based system that responds in a particular way to incoming requests from client software—in this case a web browser. But a web browser’s requests can be easily faked. A web server can only respond efficiently to a certain number of requests for pages, graphics and other website elements at once. Exceed that number, and it bogs down. Go too far, and the system may become entirely unresponsive. Huge floods of traffic, whether legitimate or not, can thus cripple a server. In recent years beefier hardware and better tools to distribute incoming requests among multiple servers have made things more difficult for attackers. DoS attacks once involved a single computer flooding a webserver. When that became ineffective, distributed DoS (DDoS) onslaughts conscripted thousands of virus-infected computers, known as zombies, to bombard the target system with bogus requests from many locations at once. This used to be impossible to block without severing the server’s internet link altogether. But now specialised hardware can distinguish between real requests and those intended to harm a site, and block them before they form a tsunami of traffic.
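The core mechanism described above, a server that copes fine up to its capacity and then degrades, can be sketched as a toy model. The capacity figure and traffic numbers here are illustrative assumptions, not values from the article:

```python
# Toy model of a server with a fixed capacity for concurrent requests.
CAPACITY = 100  # illustrative: requests the server can handle at once

def server_response(incoming_requests):
    """Return (served, dropped) counts for a burst of incoming requests."""
    served = min(incoming_requests, CAPACITY)
    dropped = incoming_requests - served
    return served, dropped

# Normal traffic: everything is served.
print(server_response(80))    # (80, 0)

# A flood: most requests are dropped, legitimate users among them.
print(server_response(5000))  # (100, 4900)
```

The point of the model is that the attacker does not need to break anything; simply pushing the request count past capacity is enough to deny service to real visitors.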

Attackers in turn have also responded with ever-more sophisticated software. Past attacks were akin to making a telephone call and never replying to the answer on the other end, thus tying up the line. Mike Rothman, a researcher at Securosis, a security firm, explains in a white paper that hardware designed to repel such attacks can be bypassed using encrypted connections (HTTPS sessions), which are typically handled directly by the server. Attackers also turn innocent websites and other internet services elsewhere (such as domain-name servers) into unwitting assailants. This involves forging the sending address on queries to these other servers, which obligingly reply to the systems under attack, adding to the load. Attacks can thus be scaled up to well over 100 gigabits per second (Gbps). But Mr Rothman notes that some attackers prefer precision attacks that exploit weaknesses in a specific function, rather than the entire server. For instance, sending huge numbers of legitimate-seeming search requests to a website, each of which uses up substantial computational power, may be more effective and harder to pinpoint than simply flooding it with bogus page requests.
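The reflection technique described above works because a small forged query can trigger a much larger reply aimed at the victim. The sizes below are illustrative assumptions to show the arithmetic, not measured values:

```python
# Reflection/amplification sketch: a small spoofed query to an open
# server (e.g. a DNS resolver) elicits a much larger reply, which is
# sent to the victim's forged source address.
query_bytes = 64        # assumed size of the attacker's spoofed request
response_bytes = 3000   # assumed size of the server's reply

amplification = response_bytes / query_bytes
print(amplification)  # 46.875: each byte sent becomes ~47 at the victim

# Upstream bandwidth the attacker needs to land 100 Gbps on the victim:
target_gbps = 100
attacker_gbps = target_gbps / amplification
print(round(attacker_gbps, 2))  # 2.13
```

This is why attacks "well over 100 gigabits per second" are feasible for attackers with only a few gigabits of their own capacity.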

What can be done to protect sites against attack? There is no single answer. Content distribution networks (CDNs) such as Akamai deliver website content on behalf of customers from hundreds or thousands of locations around the world. An attack against these networks is vastly more difficult because of both the size of a CDN and the fact that its servers are dispersed. Some security firms now offer a “scrubbing” service that allows a site under attack to redirect traffic through the security firm’s servers, which remove (scrub) the bad requests and send legitimate ones through. And developers of big websites should now be considering web application firewalls (WAFs), which can be tailored to rebuff unwanted requests intended to overload the site’s search, shopping cart or document-uploading features. But history suggests that these new defensive mechanisms will spur attackers to develop more sophisticated tools of their own. Participation in this arms race is, alas, now an unavoidable aspect of doing business on the internet.
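A WAF rule of the kind described, one that treats expensive endpoints such as search or uploads more strictly than cheap static pages, can be sketched as a simple per-IP rate check. The endpoint names and thresholds here are illustrative assumptions:

```python
# Minimal WAF-style sketch: stricter rate limits for endpoints that
# consume substantial computational power per request.
EXPENSIVE_PATHS = {"/search", "/upload"}          # assumed endpoints
LIMITS = {"expensive": 10, "cheap": 100}          # requests per IP per minute

def allow(path, requests_this_minute):
    """Decide whether to serve a request or reject it (e.g. HTTP 429)."""
    tier = "expensive" if path in EXPENSIVE_PATHS else "cheap"
    return requests_this_minute <= LIMITS[tier]

print(allow("/search", 5))       # True: under the expensive-path limit
print(allow("/search", 50))      # False: likely a flood of costly queries
print(allow("/index.html", 50))  # True: static pages tolerate more traffic
```

A real WAF would also inspect payloads and track state across time windows, but the tiering idea is the same: make the cheap-to-send, expensive-to-serve requests the first ones to be refused.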
