A flash-crowd attack (FCA) is a DDoS attack that floods an application at the victim with numerous service requests. Such attacks are extremely hard to detect and filter, because legitimate and attack requests are indistinguishable from each other. The attackers use multiple bots to send requests to the victim at low rates. Flash-crowd attacks are appealing to attackers, because they can be effective at a low volume. Since many DDoS defenses operate at the network level and look for large traffic spikes in network aggregates, flash-crowd attacks often slip by undetected. An attacker can use regular, lightweight requests, such as those for a static page at a Web server, or use costly requests that consume more of the server's resources, such as dynamic requests involving database lookups and updates.
FRADE is a defense scheme that mitigates flash-crowd attacks by distinguishing humans from bots. FRADE's goal is to raise the bar for the number of bots needed for a successful flash-crowd attack. It achieves this through three novel modules that model human behavior to distinguish human users from flash-crowd bots.
The dynamics module models the timing of a user's interaction with a server, i.e., how many requests a human user sends within a given time interval. Because not all requests are generated and processed in the same way, we subdivide this model into three sub-models, DYN-h, DYN-e and DYN-c. DYN-h models the rate of human-action requests, such as clicking on a hyperlink or scrolling to the bottom of a page. DYN-e models the rate of requests for embedded content, such as images, which are usually automatically generated by a Web browser. DYN-c models the rate of a user's demand for server resources, where the demand is represented as the total time the server devoted to the user's requests in a given period.
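The three sub-models can be viewed as per-user sliding-window meters: each accumulates a quantity (human-action requests, embedded-content requests, or server service time) over a recent window and flags the user when a threshold is exceeded. The sketch below illustrates this structure; the window length, the thresholds, and the class names are illustrative assumptions, not FRADE's calibrated parameters.

```python
from collections import deque

class SlidingWindowMeter:
    """Accumulates per-user values over a sliding time window (sketch)."""

    def __init__(self, window_s, threshold):
        self.window_s = window_s    # window length in seconds (assumed value)
        self.threshold = threshold  # max allowed total within the window
        self.events = deque()       # (timestamp, value) pairs

    def _evict(self, now):
        # Drop events that have fallen out of the window.
        while self.events and self.events[0][0] < now - self.window_s:
            self.events.popleft()

    def add(self, value, now):
        self.events.append((now, value))
        self._evict(now)

    def exceeded(self, now):
        self._evict(now)
        return sum(v for _, v in self.events) > self.threshold

class DynamicsModel:
    """Combines DYN-h, DYN-e and DYN-c style meters (illustrative thresholds)."""

    def __init__(self):
        self.dyn_h = SlidingWindowMeter(window_s=10, threshold=5)    # human actions
        self.dyn_e = SlidingWindowMeter(window_s=10, threshold=100)  # embedded content
        self.dyn_c = SlidingWindowMeter(window_s=10, threshold=2.0)  # server seconds

    def on_request(self, kind, service_time_s, now):
        if kind == "human-action":
            self.dyn_h.add(1, now)
        elif kind == "embedded":
            self.dyn_e.add(1, now)
        self.dyn_c.add(service_time_s, now)  # demand on server resources

    def is_bot(self, now):
        return (self.dyn_h.exceeded(now) or
                self.dyn_e.exceeded(now) or
                self.dyn_c.exceeded(now))
```

A user who clicks faster than any plausible human, fetches embedded content at machine speed, or monopolizes server time trips one of the three meters, even though each individual request looks legitimate.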
Web pages have an abundance of links whose content is poorly related to the page's main topic, such as a copyright notice. Humans rarely follow unrelated links, and human interests tend to coincide, making a few links on each page popular. A random-browsing bot cannot repeatedly hit popular links, because they are a minority of all the links on a server's pages, so its request sequences have low probability. Humans, on the other hand, mostly access popular and related links, producing higher-probability request sequences. Bots can hard-code the popular links, but this results in repetitions, which are also detectable.
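One way to sketch this idea is to score a user's click path by the learned popularity of each link transition and flag paths whose cumulative probability falls below what human browsing accumulates. The transition probabilities, URLs, and threshold below are toy values for illustration; a real deployment would learn them from server logs.

```python
import math

# Toy transition probabilities learned from (hypothetical) human browsing logs:
# probability that a human on page `src` next requests page `dst`.
TRANSITIONS = {
    ("/", "/news"): 0.6,
    ("/", "/products"): 0.3,
    ("/", "/copyright"): 0.001,     # unpopular, poorly related link
    ("/news", "/news/today"): 0.7,
}
UNSEEN_PROB = 1e-4  # fallback for transitions never observed from humans

def sequence_log_prob(pages):
    """Sum of log transition probabilities along a click path."""
    total = 0.0
    for src, dst in zip(pages, pages[1:]):
        total += math.log(TRANSITIONS.get((src, dst), UNSEEN_PROB))
    return total

def looks_like_random_bot(pages, threshold=-10.0):
    # A random-browsing bot keeps hitting unpopular links, driving its
    # sequence probability far below typical human paths.
    return sequence_log_prob(pages) < threshold
```

A human path through popular links keeps a high cumulative probability, while a few random hops through unpopular links quickly push the score below the threshold.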
This model detects bots by embedding hyperlinked objects into server replies in a way that is invisible to humans, so that the probability of a human clicking these links is very low while a random-browsing bot is likely to follow them. To make the embedded objects invisible, we employ several techniques: placing them beneath the front layer, embedding very small images at the corner of a larger image with the same background color, and placing the objects in areas where users rarely click (e.g., the bottom-right corner of the screen).
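The server side of this decoy scheme can be sketched as follows: the server injects hidden anchors into its pages and flags any client that requests a decoy URL. The decoy paths, the HTML styling, and the flagging logic are illustrative assumptions, not FRADE's exact implementation.

```python
# Decoy paths injected into pages; any request for them marks the client.
DECOY_PATHS = {"/promo-x7f3", "/archive-9c21"}

def decoy_anchor(path):
    # An anchor styled to be effectively invisible to a human: 1x1 pixels,
    # pushed behind other content with a negative z-index.
    return ('<a href="%s" tabindex="-1" aria-hidden="true" '
            'style="position:absolute; width:1px; height:1px; '
            'overflow:hidden; z-index:-1;">.</a>' % path)

suspected_bots = set()

def handle_request(client_ip, path):
    """Return True if the request trips a decoy trap."""
    if path in DECOY_PATHS:
        # A random-browsing bot extracted and followed a hidden link;
        # a human browsing normally almost never reaches these URLs.
        suspected_bots.add(client_ip)
        return True
    return False
```

A bot that scrapes a page's HTML and follows links at random eventually requests one of the hidden anchors and is flagged, while human visitors never see, and thus never click, the decoys.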