The emergence of CAPTCHA-based authentication was a logical move in the fight against automated brute forcing of login details, automated registrations, and spam in the form of blog comments and splog registrations. Consequently, spammers, phishers and malware authors started figuring out how to achieve their objectives automatically, either by breaking or adapting to a particular CAPTCHA, or, even more pragmatically, by outsourcing the task to a third party.
Sample CAPTCHA breaking project requests:
- "I need a captcha breaker that can break captchas that are of the same style i will upload here.I will want a c++ dll that recieves a file path and returns a char* with the content of the picture (letters and numbers)"
- "The program needs to take a myspace captcha image and determine what the text says in the image. The accuracy needs to be 80%+"
By defeating the CAPTCHAs of legitimate email providers, malicious parties can:
- take advantage of the email service's clean IP reputation in order to improve the chance that their phishing/spam/malware email is successfully received
- set the foundations for large-scale automated spamming/phishing operations by using legitimate email addresses, thus improving their chances of not getting filtered
- automated registration of splogs (spam blogs)
- as search engines are starting to crawl sites submitted to the most popular social networks in real time, spammers and malware authors are naturally interested in abusing this development to quickly attract huge audiences to their splogs, which often have malware embedded within them
What are malicious parties doing to achieve efficiency despite their inability to defeat an advanced CAPTCHA?
- humans enter the CAPTCHAs while a script auto-generates and stores the account details, then automatically logs in using the stored passwords combined with the human-entered CAPTCHA
- adapting rather than putting more effort into rocket science: whenever a CAPTCHA cannot be beaten automatically, as you already saw on the second screenshot, they streamline the process so a human can enter the CAPTCHA faster than an end user browsing the site would
- outsourcing the work while making it sound like a quality assurance project for a CAPTCHA about to be introduced on the market
What can web sites do to prevent this sort of malicious behaviour? Strong CAPTCHAs should be in place by default, but there is another perspective. In the same way I discussed how click fraud could easily be detected by advertising networks syndicating the IPs of hosts already known to be malware-infected, a CAPTCHA system could check whether, for instance, default proxy ports are open on the host attempting to register, and whether that host is part of a botnet. With data like this now a commodity, prioritizing close monitoring of mass registrations from these IPs makes for a pragmatic early warning system.
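A minimal sketch of that screening idea, assuming a feed of known-infected IPs is available (the port list, the sample blocklist entries, and the scoring weights below are all illustrative assumptions, not a production design):

```python
import socket

# Ports commonly associated with open proxies; the exact list is an assumption.
DEFAULT_PROXY_PORTS = (1080, 3128, 8080, 8888)

# Hypothetical feed of IPs already known to be malware-infected; in practice
# this would be loaded from a syndicated commercial or community blocklist.
KNOWN_INFECTED_IPS = {"203.0.113.7", "198.51.100.23"}

def has_open_proxy_port(ip, ports=DEFAULT_PROXY_PORTS, timeout=1.0):
    """Return True if any of the given ports accepts a TCP connection."""
    for port in ports:
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True
        except OSError:
            continue  # closed, filtered, or timed out; try the next port
    return False

def registration_risk(ip):
    """Score a registering host; higher scores deserve closer monitoring."""
    score = 0
    if ip in KNOWN_INFECTED_IPS:
        score += 2  # host is already part of a known botnet
    if has_open_proxy_port(ip):
        score += 1  # reachable open proxy suggests relayed traffic
    return score
```

Mass registrations from hosts with a non-zero score would then be queued for the closer monitoring described above, rather than blocked outright, since open ports and blocklist entries both produce false positives.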