This post is about identifying web back doors. Recently I did some research on the C99 PHP shell malware, and it turns out to be very popular among hacking groups and script kiddies.
C99 PHP Shell
C99Shell is a very well designed shell that practically lets you do almost anything on the server, provided you have the proper access rights. Here is a list with more web back doors; the link given is actually a Google Code project, so it is probably not going to be accessible through corporate web gateways (with malware filtering, URL filtering or content filtering).
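If you administer a web server yourself, a quick first check for this kind of back door is to grep your web root for strings that typically appear inside these shells. This is only a rough sketch; the signature list and the /var/www path are my own assumptions, so adjust them to your setup:

# look for common shell signatures inside PHP files (paths and signatures are examples)
grep -rliE "c99shell|c99madshell|r57shell" /var/www --include="*.php"

Any file this returns should then be inspected by hand, since legitimate code occasionally contains the same strings.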
Google Dorks
To be more specific, a "crawler" is a generic term for any program (such as a robot or spider) used to automatically discover and scan websites by following links from one webpage to another. Google's main crawler is called Googlebot. Google's documentation lists the common Google crawlers you may see in your referrer logs, and how they should be specified in robots.txt, the robots meta tags, and the X-Robots-Tag HTTP directives.
But if you want more fine-grained control, you can get more specific. For example, you might want all your pages to appear in Google Search, but you don't want the images in your personal directory, or hidden links such as a web back door, to be found and crawled. In this case, you can use robots.txt to disallow the user agent Googlebot-Image from crawling the files in your /personal directory (while allowing Googlebot to crawl all files), like this:
User-agent: Googlebot
Disallow:
User-agent: Googlebot-Image
Disallow: /personal
You can also fine-tune how your web site is crawled by adding directives for the different crawlers directly in the page, like this:
<meta name="robots" content="nofollow"><meta name="googlebot" content="noindex">
The truth is that, most of the time, the web site is going to be crawled and easily googled no matter what you do; an adversary will even be able to access non-linked pages.
Web Back-door Google-Dorks using Google Alerts
- intitle:!C99madShell
- intext:!C99madShell
- inurl:backdoor_name.php
The best thing you can do, in every situation, to protect yourself from being hacked without ever finding out about it, is to regularly check your web infrastructure using Google Alerts. This is also a very good thing to do before you begin a penetration test, to check for already compromised web infrastructure (I know, I am brilliant).
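For example, when you create the alert you can scope the dorks to your own domain with the site: operator (example.com below is just a placeholder for your own domain):

intitle:!C99madShell site:example.com
intext:!C99madShell site:example.com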
Expand and automate the search using basic scripting
A good thing to do in order to protect yourself from script kiddies is to similarly identify all the web back doors found in the link mentioned above (the Google Code project). A very good way to automate the whole process is with scripting!!
So first you go to Google and enter intitle:!C99madShell; the Google search will return this:
If you copy the requested URL you will see that it is exactly this one:
https://www.google.co.uk/#hl=en&sugexp=frgbld&gs_nf=1&cp=20&gs_id=4&xhr=t&q=intitle%3A!C99madShell&pf=p&output=search&sclient=psy-ab&oq=intitle:!C99madShell&aq=f&aqi=&aql=&gs_l=&pbx=1&bav=on.2,or.r_gc.r_pw.r_qf.,cf.osb&fp=f711fce0343c3599&biw=580&bih=425
Now you can use curl to search using Google dorks and save your search results to your hard disk, or simply use Firefox and save the search results with a Save As. You can do this with curl in your command prompt by typing:
curl -A Mozilla "http://www.google.com/search?q=C99madShell" | html2text -width 10000000 | grep "Cached - Similar" | grep "www.*\.php"
The following screenshot shows the command (notice the html2text Linux utility I used):
The outcome of this command will be exactly the one shown below (after all the necessary grep-ing is done, of course):
As you can see if you enlarge the picture, the search and filtering performed using curl is redirected into a file (after being properly grepped to obtain only the desirable URLs). The output text file contains the potentially compromised web sites. Of course, some manual filtering will have to be done to remove references to URLs that are not really compromised.
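If you want to go one step further, a small shell loop can run every dork from a text file and append the filtered results to one log. This is only a sketch built around the same curl pipeline shown above; dorks.txt and the output file name are my own examples:

# dorks.txt holds one search term per line, e.g. C99madShell
while read dork; do
  curl -s -A Mozilla "http://www.google.com/search?q=$dork" \
    | html2text -width 10000000 \
    | grep "Cached - Similar" \
    | grep -o "www.*\.php" >> logSearch.txt
done < dorks.txt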
Crontabbing Google Searches
The next best thing to do in order to completely automate the process is to use crontab; a good crontab tutorial can be found at clickmojo. As you can already tell after reading this post, the Internet has become quite toxic.
Here is how to run a google dork search at 6PM every night:
MAILTO=cron@yourusername.yourmailprovider.com
00 18 * * * /usr/bin/curl <google-dork to search> > logSearch.txt
Note: You can grep or sed the obtained data to analyse the results and verify that you logged only interesting URLs.
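A minimal sketch of that post-processing step, assuming the cron job writes to logSearch.txt as above (the file names and the example.com domain are placeholders), could be:

# keep one copy of each result and drop duplicates
sort -u logSearch.txt > uniqueURLs.txt
# optionally keep only the hits that mention your own domain
grep "example\.com" uniqueURLs.txt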
Epilog
Over the last two years the Internet has become more and more toxic. Even users with no significant information to expose, or small online businesses, are starting to have a hard time maintaining their blogs or web sites without taking security seriously. Please feel free to post comments and give me some feedback on how useful you find my posts.
References:
- http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1061943
- http://www.google.com/alerts
- http://clickmojo.com/code/cron-tutorial.html