Anestis Bechtsoudis » reconnaissance
Driven by Passion for Challenges

Pastenum: Enumerating Text Dump Websites
Wed, 08 Jun 2011

Text dump websites are used by programmers and system administrators to share and store pieces of source code and configuration information. Two of the most popular text dump websites are pastebin and pastie. Day by day, more and more programmers, amateur system administrators and regular users are captivated by the attractive features of these web tools and use them to share large amounts of configuration and source code information. Therefore, as happens on every popular web platform, sensitive information sharing is inevitable. Potential attackers use these platforms to gather information about their targets, while on the other side penetration testers search these sites to prevent critical information leakage.

 

Most text dump web platforms offer a search mechanism, so anyone can manually query the database for matching strings. However, an automated script/tool capable of querying all these text dump websites and generating an overall search report would be very useful for the reconnaissance phase of a penetration test. Pen-testers can use such an automated tool to efficiently search for leaked configuration and login credential information that would help an attacker profile the victim system and find a security hole.

Recently I came across such a script on the web: pastenum. Pastenum is a Ruby script written by Nullthreat, a member of the Corelan Team. It can query pastebin, pastie and github for user-defined strings and generate an overall HTML report with the search results.

 

Pastenum can be downloaded from here, while detailed installation information can be found here.

 

Let’s see some screenshots of pastenum in action.

[Screenshots: pastenum search run and the generated HTML report]

A. Bechtsoudis

Knowing is half the battle…
Tue, 10 May 2011

G.I. Joe used to say, “Knowing is half the battle.” The collection of prior information can make the difference between success and failure of a penetration test.

The first phase (reconnaissance) of a penetration test includes information gathering and network mapping procedures. Automated, intelligent reconnaissance tools have been developed extensively in recent years, offering a reliable and fast starting point for the exploitation phase. In this article, I will focus on information gathering tools that collect valid login names, emails, DNS records and WHOIS data. A penetration tester can use the gathered information to profile the target, launch client-side attacks, search social networks for additional knowledge, bruteforce authentication mechanisms, etc.

We can easily gather this information with simple scripts, without following an extensive OSINT (Open Source Intelligence) procedure. However, I should mention that a detailed and extensive OSINT phase will produce better results and may be necessary under certain business needs.

I will analyze Edge-Security’s theHarvester and Metasploit’s Search Email Collector tools.

 

theHarvester

theHarvester (currently at version 2.0) is a Python script that can gather email accounts, usernames and subdomains from public search engines and PGP key servers.

The tool supports the following sources:

  • Google – emails, subdomains/hostnames
  • Google profiles – employee names
  • Bing search – emails, subdomains/hostnames, virtual hosts (requires Bing API key)
  • PGP servers – emails, subdomains/hostnames
  • LinkedIn – employee names
  • Exalead – emails, subdomains/hostnames

The latest version of theHarvester can be downloaded from the GitHub repository here.

Give execute permissions to the script file, and run it in order to see the available options.
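For example, a minimal fetch-and-setup could look like this (the GitHub repository path below is my assumption; use the download link above as the authoritative source):

$ git clone https://github.com/laramies/theHarvester.git
$ cd theHarvester
$ chmod +x theHarvester.py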

$ ./theHarvester.py 
 
*************************************
*TheHarvester Ver. 2.0 (reborn)     *
*Coded by Christian Martorella      *
*Edge-Security Research             *
*cmartorella@edge-security.com      *
*************************************
 
Usage: theharvester options 
 
       -d: Domain to search or company name
       -b: Data source (google,bing,bingapi,pgp,linkedin,google-profiles,exalead,all)
       -s: Start in result number X (default 0)
       -v: Verify host name via dns resolution and search for vhosts(basic)
       -l: Limit the number of results to work with(bing goes from 50 to 50 results,
            google 100 to 100, and pgp does not use this option)
       -f: Save the results into an XML file
 
Examples:./theharvester.py -d microsoft.com -l 500 -b google
         ./theharvester.py -d microsoft.com -b pgp
         ./theharvester.py -d microsoft -l 200 -b linkedin
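For instance, the following run searches Google for addresses under an illustrative target domain and saves the results to an XML file (results.xml is an arbitrary filename), combining only the flags documented above:

$ ./theHarvester.py -d example.com -b google -l 100 -f results.xml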

You can see some execution examples in the following screenshots:

[Screenshots: theHarvester execution examples]

Metasploit Email Collector

Search email collector is a Metasploit module written by Carlos Perez. The module runs under the Metasploit Framework and uses Google, Bing and Yahoo to build a list of valid email addresses for the target domain.

You can view the source code here.

The module options are:

  • DOMAIN – The domain name to locate email addresses for
  • OUTFILE – A filename to store the generated email list
  • SEARCH_BING – Enable Bing as a backend search engine (default: true)
  • SEARCH_GOOGLE – Enable Google as a backend search engine (default: true)
  • SEARCH_YAHOO – Enable Yahoo! as a backend search engine (default: true)
  • PROXY – Proxy server to route the connection through (<host>:<port>)
  • PROXY_PASS – Proxy server password
  • PROXY_USER – Proxy server username
  • WORKSPACE – Specify the workspace for this module

 

Let’s see a running example:

msf >
msf > use auxiliary/gather/search_email_collector
msf auxiliary(search_email_collector) > set DOMAIN example.com
DOMAIN => example.com
msf auxiliary(search_email_collector) > run
 
[*] Harvesting emails .....
[*] Searching Google for email addresses from example.com
[*] Extracting emails from Google search results...
[*] Searching Bing email addresses from example.com
[*] Extracting emails from Bing search results...
[*] Searching Yahoo for email addresses from example.com
[*] Extracting emails from Yahoo search results...
[*] Located 49 email addresses for example.com
[*] 	555-555-0199@example.com
[*] 	a@example.com
[*] 	alle@example.com
[*] 	b@example.com
[*] 	boer_faders@example.com
[*] 	ceo@example.com
[*] 	defaultemail@example.com
[*] 	email@example.com
[*] 	example@example.com
[*] 	foo@example.com
[*] 	fsmythe@example.com
[*] 	info@example.com
[*] 	joe@example.com
[*] 	joesmith@example.com
[*] 	johnnie@example.com
[*] 	johnsmith@example.com
[*] 	myname+spam@example.com
[*] 	myname@example.com
[*] 	name@example.com
[*] 	nobody@example.com
....
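If you also want the harvested list written to a file, set the OUTFILE option before running (the path below is an illustrative example):

msf auxiliary(search_email_collector) > set OUTFILE /tmp/example_emails.txt
OUTFILE => /tmp/example_emails.txt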

 

Useful links:

 

DISCLAIMER: I’m not responsible for what you do with this information. This information is for educational purposes only.

 

 

A. Bechtsoudis

skipfish: Web Security Reconnaissance Tool
Tue, 05 Apr 2011

Skipfish is a fully automated, active web application security reconnaissance tool released by Michal Zalewski (lcamtuf). Web developers and security professionals can use skipfish to run a series of tests against websites that are under their responsibility. Skipfish supports Linux, FreeBSD, MacOS X, and Windows (Cygwin) environments (I made my tests under a Debian distribution). The tool has been released to the public by Google in order to offer an easy-to-use and high-speed solution for making websites safer.

Skipfish classifies the discovered risks as high, medium and low. Some of the higher risk ones include:

  • Server-side SQL injection (including blind vectors, numerical parameters).
  • Explicit SQL-like syntax in GET or POST parameters.
  • Server-side shell command injection (including blind vectors).
  • Server-side XML / XPath injection (including blind vectors).
  • Format string vulnerabilities.
  • Integer overflow vulnerabilities.

Skipfish isn’t the only available solution. There exist many free and commercial web vulnerability scanning tools (like Nikto2 and Nessus), which sometimes produce better analysis results. In any case, it’s about time people started taking security seriously, and using a tool like this is a good initial step in the right direction.

Let’s proceed to the installation steps:

  1. Download skipfish from the official site.
  2. Check the downloaded file’s sha1sum against the one from the official site:
    $ sha1sum skipfish-1.x.tgz
  3. Ensure that your system meets the requirements (if not, install the required packages through your OS package manager):
    • libidn11
    • libidn11-dev
    • libssl-dev
    • zlib1g-dev
    • gcc
    • make
    • libc6
    • libc6-dev
  4. Extract the files.
  5. Run make to compile the sources (a combined example follows this list). In case of problems, read the known-issues wiki.
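For reference, steps 4 and 5 typically boil down to the following commands (the exact archive and extracted directory names depend on the version you downloaded):

$ tar xzf skipfish-1.x.tgz
$ cd skipfish-1.x
$ make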

After compilation has finished, you are strongly advised to read the README-FIRST file in order to choose the appropriate type of dictionary. As a start, if your web application is small, you can use the complete.wl dictionary.

 

Let’s proceed to the running part.

  1. In the skipfish main directory, make a copy of the complete dictionary:
    $ cp dictionaries/complete.wl skipfish.wl
  2. Create a directory for the output reports.
  3. Execute skipfish, giving the website URL:
    $ ./skipfish -o outputresults http://example.com
  4. Hit a key to start the scan.
  5. Wait for the scan to finish. If you terminate the scanning process early, you can still see the risks reported so far.
  6. Open the index.html report with Firefox (see the example after this list).
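For example, assuming the output directory used above:

$ firefox outputresults/index.html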

You should then be able to interpret the results easily; most of the scan results are pretty self-explanatory. It is recommended to pay attention first to the high-risk vulnerabilities detected by the scan. You can expand those results to read more details.

What to do next? Well, you need to educate yourself in understanding and correcting these vulnerabilities. For example, if skipfish reports some MySQL injection vulnerabilities in your website, you should read and learn more about SQL injection. You can use Google to find more details about each vulnerability.

 

Here are some screenshots from the tool:

[Screenshots: skipfish scan in progress and report output]

Useful links:

 

 

A. Bechtsoudis
