
22/02/2020

SSRFing External Service Interaction and Out of Band Resource Load (Hacker's Edition)


In the recent past we encountered two relatively new types of attacks: External Service Interaction (ESI) and Out-of-Band Resource Load (OfBRL).
  1. An ESI [1] occurs only when a Web Application allows interaction with an arbitrary external service. 
  2. An OfBRL [6] arises when it is possible to induce an application to fetch content from an arbitrary external location and incorporate that content into the application's own response(s). 

The Problem with OfBRL

The ability to request and retrieve web content from other systems can allow the application server to be used as a two-way attack proxy (when OfBRL is applicable) or a one way proxy (when ESI is applicable). By submitting suitable payloads, an attacker can cause the application server to attack, or retrieve content from, other systems that it can interact with. This may include public third-party systems, internal systems within the same organization, or services available on the local loopback adapter of the application server itself. Depending on the network architecture, this may expose highly vulnerable internal services that are not otherwise accessible to external attackers.
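
As a quick illustration of the difference between the two behaviours, a minimal probing sketch in Python follows (the target URL and the attacker-monitored hostname are assumptions for illustration): if content served from our own host shows up in the application's response we are looking at an OfBRL, while a hit logged on our server with an unchanged response points to a plain ESI.

# Minimal sketch, assuming a test target and a hostname we monitor out of band.
import requests

TARGET = "http://our_vulnerableapp.com/"            # assumed target URL
ATTACKER_HOST = "attacker-controlled.example.com"   # assumed host we monitor

baseline = requests.get(TARGET, timeout=10)
probe = requests.get(TARGET, headers={"Host": ATTACKER_HOST}, timeout=10)

# A reflected marker suggests OfBRL; a hit on our server without reflection
# suggests ESI only.
print("baseline length :", len(baseline.text))
print("probe length    :", len(probe.text))
print("marker reflected:", ATTACKER_HOST in probe.text)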

The Problem with ESI

External service interaction arises when it is possible to induce an application to interact with an arbitrary external service, such as a web or mail server. The ability to trigger arbitrary external service interactions does not constitute a vulnerability in its own right, and in some cases might even be the intended behavior of the application. However, in many cases, it can indicate a vulnerability with serious consequences.

The Verification

We do not have ESI or OfBRL when:
  1. The source IP recorded in the Collaborator is our browser IP. 
  2. There is a 302 redirect from our host to the Collaborator (i.e. our source IP appears in the Collaborator).
Below we can see the original configuration in the repeater:


Below we can see the modified configuration in the repeater for the test:


The RFC(s)

It is usually a platform issue and not an application one. In some scenarios, for example with a CGI application, the HTTP headers are handled by the application (i.e. the app dynamically manipulates the HTTP headers to run properly). This means that HTTP headers such as Location and Host are handled by the app, and therefore a vulnerability might exist. It is recommended to run HTTP header integrity checks when you own a critical application that is running on your behalf.

For more information on the subject read RFC 2616 [2], where the use of the headers is explained in detail. The Host request-header field specifies the Internet host and port number of the resource being requested, as obtained from the original URI given by the user or referring resource (generally an HTTP URL). The Host field value MUST represent the naming authority of the origin server or gateway given by the original URL. This allows the origin server or gateway to differentiate between internally-ambiguous URLs, such as the root "/" URL of a server hosting multiple host names on a single IP address.

When TLS is enforced throughout the whole application (even for the root path /), an ESI or OfBRL is typically not possible, because the TLS handshake performs server name validation: as soon as a connection is established with the vulnerable server's IP, the name requested by the client must match the certificate presented, otherwise the handshake fails. More specifically, we are going to get an SNI error.

SNI prevents what's known as a "common name mismatch error": when a client (user) device reaches the IP address for a vulnerable app, but the name on the SSL/TLS certificate doesn't match the name of the website. SNI was added to the IETF's Internet RFCs in June 2003 through RFC 3546, Transport Layer Security (TLS) Extensions. The latest version of the standard is RFC 6066.
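
The handshake failure is easy to reproduce. The sketch below (Python, with an assumed front-end IP address) connects to the TLS endpoint while asking for a server name the certificate does not cover; Python's ssl module then raises a verification error, which is the SNI/common-name mismatch described above.

# Minimal sketch, assuming the front end's IP; the requested name is deliberately wrong.
import socket
import ssl

TARGET_IP = "203.0.113.10"    # assumed IP of the TLS-terminating front end
WRONG_NAME = "malicious.com"  # name that is not on the certificate

context = ssl.create_default_context()
try:
    with socket.create_connection((TARGET_IP, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=WRONG_NAME) as tls:
            print("handshake succeeded:", tls.version())
except ssl.SSLCertVerificationError as err:
    # The mismatch surfaces here, which is why strict TLS virtual hosting makes
    # Host header abuse much harder.
    print("certificate/SNI mismatch:", err)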

The option to trigger an arbitrary external service interaction does not constitute a vulnerability in its own right, and in some cases it might be the intended behavior of the application. But we as hackers want to exploit it, right? So what can we do with an ESI or an Out-of-band resource load?

The Infrastructure 

Well, it depends on the overall setup! The juiciest scenarios are the following:
  1. The application is behind a WAF (with restrictive ACLs) 
  2. The application is behind a UTM (with restrictive ACLs) 
  3. The application is running multiple applications in a virtual environment 
  4. The application is running behind a NAT. 
In order to perform the hack we simply have to inject our own host value into the HTTP Host header (hostname including port). Below is a simple diagram explaining the vulnerability.




Below we can see the HTTP requests with injected Host header:

Original request:

GET / HTTP/1.1
Host: our_vulnerableapp.com
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: close

Malicious requests:

GET / HTTP/1.1
Host: malicious.com
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: close

or

GET / HTTP/1.1
Host: 127.0.0.1:8080
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: close

If the application is vulnerable to OfBRL, it means that the reply is going to be processed by the vulnerable application, bounce back to the sender (i.e. the attacker) and potentially load in the context of the application. If the reply does not come back to the sender, then we might only have an ESI, and further investigation is required.

Out-of-band resource load:




ESI:




Below we can see the configuration in the intruder:



We are simply using the sniper mode in the Intruder, and can do the following:
  1. Rotate through different ports, using the vulnapp.com domain name.
  2. Rotate through different ports, using the vulnapp.com external IP.
  3. Rotate through different ports, using the vulnapp.com internal IP, if applicable.
  4. Rotate through different internal IP(s) in the same domain, if applicable.
  5. Rotate through different protocols (although this might not always work, BTW).
  6. Brute force directories on identified DMZ hosts.

The Test

Burp Professional edition has a feature named Collaborator. Burp Collaborator is a network service that Burp Suite uses to help discover vulnerabilities such as ESI and OfBRL [3]. A typical example would be to use Burp Collaborator to test whether an ESI exists. Below we describe such an interaction.


Original request:

GET / HTTP/1.1
Host: our_vulnerableapp.com
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: close

Burp Collaborator request:

GET / HTTP/1.1
Host: edgfsdg2zjqjx5dwcbnngxm62pwykabg24r.burpcollaborator.net
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: keep-alive

Burp Collaborator response:

HTTP/1.1 200 OK
Server: Burp Collaborator https://burpcollaborator.net/
X-Collaborator-Version: 4
Content-Type: text/html
Content-Length: 53

<html><body>drjsze8jr734dsxgsdfl2y18bm1g4zjjgz</body></html>

The Post Exploitation 

OK, now, as hacker artists, we are going to think about how to exploit this. The scenarios are: [7][8]

  1. Attempt to load the local admin panels. 
  2. Attempt to load the admin panels of surrounding applications. 
  3. Attempt to interact with other services in the DMZ. 
  4. Attempt to port scan the localhost. 
  5. Attempt to port scan the DMZ hosts.
  6. Use it to exploit the IP trust and run a DoS attack against other systems. 
A good option for that would be Burp Intruder. Burp Intruder is a tool for automating customized attacks against web applications. It is extremely powerful and configurable, and can be used to perform a huge range of tasks, from simple brute-force guessing of web directories through to active exploitation of complex blind SQL injection vulnerabilities.


Burp Intruder configuration for scanning surrounding hosts:

GET / HTTP/1.1
Host: 192.168.1.§§
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: close

Burp Intruder configuration for port scanning surrounding hosts:

GET / HTTP/1.1
Host: 192.168.1.1:§§
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: close

Burp Intruder configuration for port scanning localhost:

GET / HTTP/1.1
Host: 127.0.0.1:§§
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: close
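
For completeness, the same port-scanning idea can be scripted outside Burp. The sketch below is a rough Python equivalent of the Intruder configurations above (the target URL and internal address are assumptions); differences in status code, response length or timing hint at open ports.

# Minimal sketch, assuming a vulnerable front end and an internal address to probe.
import requests

TARGET = "http://our_vulnerableapp.com/"  # assumed vulnerable application
INTERNAL = "127.0.0.1"                    # or a DMZ address such as 192.168.1.1

for port in (22, 80, 443, 8080, 8443):
    resp = requests.get(
        TARGET,
        headers={"Host": f"{INTERNAL}:{port}"},
        timeout=10,
        allow_redirects=False,
    )
    # Compare status, length and timing across ports to spot open services.
    print(port, resp.status_code, len(resp.content), resp.elapsed.total_seconds())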

What Can you Do

The big hack analysis; this vulnerability can be used in the following ways:
  1. Bypass restrictive UTM ACL(s) 
  2. Bypass restrictive WAF Rule(s) 
  3. Bypass restrictive FW ACL(s) 
  4. Perform cache poisoning
  5. Fingerprint internal infrastructure
  6. Perform DoS exploiting the IP trust
  7. Exploit applications hosted on the same machine (i.e. multiple app loads)
Below we can see a schematic analysis on bypassing ACL(s):



The impact of a maliciously constructed response can be magnified if it is cached either by a web cache used by multiple users or even the browser cache of a single user. If a response is cached in a shared web cache, such as those commonly found in proxy servers, then all users of that cache will continue to receive the malicious content until the cache entry is purged. Similarly, if the response is cached in the browser of an individual user, then that user will continue to receive the malicious content until the cache entry is purged, although only the user of the local browser instance will be affected. [5]

Below follows the schematic analysis:



What Can't You Do
You cannot perform XSS or CSRF by exploiting this vulnerability, unless certain conditions apply.

The fix

If the ability to trigger arbitrary ESI or OfBRL is not intended behavior, then you should implement a whitelist of permitted URLs, and block requests to URLs that do not appear on this whitelist [6]. Running Host header integrity checks is also recommended [6].

We should review the purpose and intended use of the relevant application functionality, and determine whether the ability to trigger arbitrary external service interactions is intended behavior. If so, you should be aware of the types of attacks that can be performed via this behavior and take appropriate measures. These measures might include blocking network access from the application server to other internal systems, and hardening the application server itself to remove any services available on the local loopback adapter. [6]

More specifically we can:

  1. Apply egress filtering on the DMZ
  2. Apply egress filtering on the host
  3. Apply white list IP restrictions in the app
  4. Apply black list restrictions in the app (although not recommended)
References:

28/05/2016

Hacker’s Elusive Thoughts The Web

Introduction

The reason for this blog post is to advertise my book. First of all I would like to thank all the readers of my blog for their support and feedback on making my articles better. After 12+ years in the penetration testing industry, the time has come for me to publish my book and transfer my knowledge to all the interested people who like hacking and want to learn as much as possible. At the end of the post you will also find a sample chapter.



About The Author

Gerasimos is a security consultant holding an MSc in Information Security, a CREST (CRT), a CISSP, an ITILv3, a GIAC GPEN and a GIAC GWAPT accreditation. Working alongside diverse and highly skilled teams, Gerasimos has been involved in countless comprehensive security tests and web application secure development engagements for global web applications and network platforms, counting more than 14 years in web application and application security architecture.

Progressing further in his career, Gerasimos has participated in various projects, providing leadership and accountability for assigned IT security projects, security assurance activities, technical security reviews and assessments, and has conducted validations and technical security testing against pre-production systems as part of overall validations.

Where From You Can Buy The Book

This book can be bought from Leanpub. Leanpub is a publishing platform that provides a way to write, publish and sell in-progress and completed ebooks. Anyone can sign up for free and use Leanpub's writing and publishing tools to produce a book and put it up for sale in the Leanpub bookstore with one click. Authors are paid a royalty of 90% minus 50 cents per transaction with no constraints: they own their work and can sell it elsewhere for any price.

Authors and publishers can also upload books they have created using their own preferred book production processes and then sell them in the Leanpub bookstore, taking advantage of Leanpub's high royalty rates and in-progress publishing features.

For more information about buying the book see: https://leanpub.com/hackerselusivethoughtstheweb

Why I Wrote This Book

I wrote this book to share my knowledge with anyone who wants to learn about Web Application security, understand how to formalize a Web Application penetration test and build a Web Application penetration test team.

The main goal of the book is to: 

Brainstorm with you some interesting ideas and help you build a comprehensive penetration testing framework, which you can easily adapt to your specific needs. Help you understand why you need to write your own tools. Gain a better understanding of some not so well documented attack techniques.
The main goal of the book is not to:
 
Provide you with a tool kit to perform Web Application penetration tests. Provide you with complex attacks that you will not be able to understand. Provide you with up to date information on the latest attacks.

Who This Book Is For 


This book is written to help hacking enthusiasts become better and standardize their hacking methodologies and techniques, so as to know clearly what to do and why when testing Web Applications. This book will also be very helpful to the following professionals:

1. Web Application developers.
2. Professional Penetration Testers.
3. Web Application Security Analysts.
4. Information Security professionals.
5. Hiring Application Security Managers.
6. Managing Information Security Consultants.

How This Book Is Organised  

Almost all chapters are written in such a way that you do not need to read them sequentially in order to understand the concepts presented, although it is recommended to do so. The following section gives you an overview of the book:

Chapter 1: Formalising Web Application Penetration Tests -

This chapter is a gentle introduction to the world of penetration testing and attempts to give a realistic view of the current landscape. More specifically, it attempts to provide you with information on how to compose a Penetration Testing team, make the team as efficient as possible, and why writing tools and choosing the proper tools is important.

Chapter 2: Scanning With Class -

The second chapter focuses on helping you understand the difference between automated and manual scanning from the tester's perspective. It will show you how to write custom scanning tools with the use of Python. This part of the book also contains Python code chunks demonstrating how to write tools and design your own scanner.

Chapter 3: Payload Management -

This chapter focuses on explaining two things: a) what a Web payload is from a security perspective, and b) why it is important to obfuscate your payloads.

Chapter 4: Infiltrating Corporate Networks Using XXE -

This chapter focuses on explaining how to exploit and escalate an XML External Entity (XXE) Injection vulnerability. The main purpose of this chapter is not to show you how to exploit an XXE vulnerability, but to broaden your mind on how you can combine multiple vulnerabilities together to infiltrate your target, using an XXE vulnerability as an example.

Chapter 5: Phishing Like A Boss -

This chapter focuses on explaining how to perform phishing attacks using social engineering and Web vulnerabilities. The main purpose of this chapter is to help you broaden your mind on how to combine multiple security issues, to perform phishing attacks.

Chapter 6: SQL Injection Fuzzing For Fun And Profit -

This chapter focuses on explaining how to perform and automate SQL injection attacks through obfuscation using Python. It also explains why SQL injection attacks happen and what is the risk of having them in your web applications.


Sample Chapter Download
From the following link you will be able to download a sample chapter from my book:

Sample Book Download


15/04/2014

PHP Source Code Chunks of Insanity (Post Pages) Part 3

Intro 

This post is going to talk about source code reviewing PHP and demonstrate how a relatively small chunk of code can cause you lots of problems.

The Code

In this article we are going to analyze the code displayed below. The code might seem innocent to some, but obviously it is not. We are going to assume that it is used by some web site to post user comments securely.
<?php require_once 'common.php'; validateMySession(); ?>
<html>
<head>
  <title>User Posts</title>
</head>
<body>
  <h1>Showing current posts</h1>
  <form action='awsomePosts.php'>
    <p>MySearch: <input type='text' value='<?php if (isset($_GET['search'])) echo htmlentities($_GET['search'])?>'></p>
    <p><input type='submit' value='MySearch'></p>
  </form>
  <?php showAwsomePosts();?>
</body>
</html>
If you look carefully at the code you will see that it is vulnerable to the following issue: Stored XSS!!

If you think this is not accurate, think again.

The Stored XSS

An adversary would need very good knowledge of encoding/XSS attacks to exploit this vulnerability. The vulnerability is based on a well known UTF-7 encoding attack that is considered to be old. Other filter bypassing techniques, such as JavaScript events, can also be used to bypass htmlentities.

Vulnerable Code: 
<p>MySearch: <input type='text' value='<?php if (isset($_GET['search'])) echo htmlentities($_GET['search'])?>'></p> // Vulnerable to XSS UTF-7 attack
Because the page that the potential XSS resides on doesn't provide a page charset header (e.g. header('Content-Type: text/html; charset=UTF-8'); or <HEAD><META HTTP-EQUIV="CONTENT-TYPE" CONTENT="text/html; charset=UTF-8">), any browser that is set to UTF-7 encoding can be exploited with the following XSS input (the charset statement is not needed if the user's browser is set to auto-detect and there is no overriding content type on the page, in Internet Explorer and Netscape rendering engine mode). This does not work in any modern browser without changing the encoding type.

Example 1: UTF-7 Encoding

Input Payload :

<script>alert('XSS')</script>

Output (UTF-7): 
+ADw-script+AD4-alert('XSS')+ADw-/script+AD4APA-/vulnerable+AD4-
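
If you want to generate such payloads yourself rather than copy them by hand, Python's built-in utf_7 codec produces the same kind of output; a minimal sketch:

# Minimal sketch: encode a test payload with the standard utf_7 codec.
payload = "<script>alert('XSS')</script>"
encoded = payload.encode("utf_7").decode("ascii")
print(encoded)  # e.g. +ADw-script+AD4-alert('XSS')+ADw-/script+AD4-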

Example 2: JavaScript Events

Injecting JavaScript events into the value passed to PHP's htmlentities function will also bypass the filter.

The code before injection:
<p>MySearch: <input type='text' value='<?php if (isset($_GET['search'])) echo htmlentities($_GET['search'])?>'></p>

The code after injection:
<p>MySearch: <input type='text' value='' onerror='alert(String.fromCharCode(88, 83, 83))'></p>


Note: This example needs further testing to see if it is applicable.

Remedial Code:

Provide server-side filters for the vulnerability. Make use of regular expressions and HTML-encode the variables, whether they are displayed back to the user or not.

1st Layer of defense 
// XSS filter the value because it might later be printed back to the user.
if (preg_match("/^[a-zA-Z ]+$/", $search)) {
    showPosts();
}
Note: Using regular expressions to replace parts of the input and then proceed with further processing is not recommended; once a malicious input is identified it should be rejected (e.g. use preg_match instead of preg_replace).

2nd Layer of defense 
header('Content-Type: text/html; charset=UTF-8');
// Convert the character encoding of the input to UTF-8 before further processing.
$search = mb_convert_encoding($search, 'UTF-8');


Countermeasures Summarized
  1. Specify the charset clearly (the HTTP header is recommended)
  2. Don't place text the attacker can control before the <meta> charset declaration
  3. Specify a charset name the browser recognizes
  4. Apply regular expressions based on the whitelist mentality.
Note: mb_convert_encoding converts the character encoding of the input string to the desired encoding. 

References:

1. https://www.owasp.org/index.php/XSS_Filter_Evasion_Cheat_Sheet#UTF-7_encoding
2. http://php.net/manual/en/function.mb-convert-encoding.php
3. http://shiflett.org/blog/2005/dec/google-xss-example
4. http://www.motobit.com/util/charset-codepage-conversion.asp
5. http://openmya.hacker.jp/hasegawa/security/utf7cs.html
6. http://wiremask.eu/?p=tutorials&id=10


21/09/2013

The Hackers Guide To Dismantling IPhone (Part 3)

Introduction

On May 7, 2013, a German court ruled that the iPhone maker must alter its company policies for handling customer data, since these policies have been shown to violate Germany's privacy laws.

The news first hit the Web via Bloomberg, which reported that:

"Apple Inc. (AAPL), already facing a U.S. privacy lawsuit over its information-sharing practices, was told by a German court to change its rules for handling customer data.
A Berlin court struck down eight of 15 provisions in Apple’s general data-use terms because they deviate too much from German laws, a consumer group said in a statement on its website today. The court said Apple can’t ask for “global consent” to use customer data or use information on the locations of customers.
While Apple previously requested “global consent” to use customer data, German law requires that customers know in detail exactly what is being requested. Further to this, Apple may no longer ask for permission to access the names, addresses, and phone numbers of users’ contacts."

Finally, the court also prohibited Apple from supplying such data to companies which use the information for advertising. But why does this happen?

More Technical on privacy issues


Every iPhone has an associated unique device identifier (UDID) derived from a set of hardware attributes. The UDID is burned into the device and one cannot remove or change it. However, it can be spoofed with the help of tools like UDID Faker.

UDID of the latest iPhone is computed with the formula given below:

UDID = SHA1(Serial Number + ECID + LOWERCASE (WiFi Address) + LOWERCASE(Bluetooth Address))
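
To make the formula concrete, here is a minimal Python sketch that reproduces the derivation with made-up hardware values (the serial number, ECID and MAC addresses below are assumptions, not real device data):

# Minimal sketch of the UDID formula with assumed, made-up hardware values.
import hashlib

serial = "86004482A4S"                  # assumed serial number
ecid = "000001184B181C2E"               # assumed ECID
wifi_mac = "00:23:df:aa:bb:cc".lower()  # lower-cased, as in the formula
bt_mac = "00:23:df:aa:bb:cd".lower()    # lower-cased Bluetooth address

udid = hashlib.sha1((serial + ecid + wifi_mac + bt_mac).encode()).hexdigest()
print(udid)  # 40 hex character identifier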

The UDID is exposed to application developers through an API which allows them to access the UDID of an iPhone without requiring the device owner's permission. The code snippet shown below is used to collect the UDID of a device, which can later be used to track the user's behavior.

NSString *uniqueIdentifier = [device uniqueIdentifier]

With the help of the UDID, it is possible to observe the user's browsing patterns and trace the user's geo location. As it is possible to locate the user's exact location with the help of a device UDID, it became a big privacy concern. More possible attacks are documented in Eric Smith's whitepaper on iPhone application privacy issues. Eric's research shows that 68% of applications silently send UDIDs to servers on the internet. A perfect example of a serious privacy breach is the social gaming network OpenFeint.

OpenFeint was a social platform for mobile games on Android and iOS. It was developed by Aurora Feint, a company named after a video game by the same developers. The platform consisted of an SDK for use by games, allowing its various social networking features to be integrated into the game's functionality. OpenFeint was discontinued at the end of 2012.

OpenFeint collected device UDIDs and misused them by linking them to real-world user identities (such as email addresses, geo-location latitude & longitude, and Facebook profile pictures) and making them available for public access, resulting in a serious privacy breach.

While penetration testing, observe the network traffic for UDID transmission. UDID in the network traffic indicates that the application is collecting the device identifier or might be sending it to a third party analytic company to track the user’s behavior. In iOS 5, Apple has deprecated the API that gives access to the UDID, and it will probably remove the API completely in future iOS releases. Development best practice is not to use the API that collects the device UDIDs, as it breaches the privacy of the user. If the developers want to keep track of the user’s behaviour, create a unique identifier specific to the application instead of using UDID. The disadvantage with the application specific identifier is that it only identifies an installation instance of the application, and it does not identify the device.

Apart from UDID, applications may transmit personal identifiable information like age, name, address and location details to third party analytic companies. Transmitting personal identifiable information to third party companies without the user’s knowledge also violates the user’s privacy. So, during penetration testing carefully observe the network traffic for the transmission of any important data.
Example: the Pandora application was found to transmit the user's age and zip code to a third party analytics company (doubleclick.net) in clear text. For applications which require the user's geo location (e.g. check-in services) to serve content, it is always recommended to use the least degree of accuracy necessary. This can be achieved with the help of the accuracy constants defined in the Core Location framework (e.g. CLLocationAccuracy kCLLocationAccuracyNearestTenMeters).

Identifying UDID transmission

Identifying whether the UDID of the iPhone is transmitted is easy. It can be done through a man-in-the-middle attack or a sniffer such as Wireshark. For example, by using Wireshark to sniff traffic you can very easily identify whether the UDID is transmitted if you follow the TCP stream.

Local data storage security issues

The iPhone stores data locally on the device to maintain essential information across application executions, for better performance, or for offline access. Developers also use the local device storage to store information such as user preferences and application configurations. As device theft is becoming an increasing concern, especially in the enterprise, insecure local storage is considered to be the top risk among mobile application threats. A recent survey conducted by viaForensics revealed that 76 percent of mobile applications store user information on the device; 10 percent of them even store plain text passwords on the phone.

Sensitive information stored on the iPhone can be obtained by attackers in several ways. A few of the ways are listed below -

From Backups

When an iPhone is connected to iTunes, iTunes automatically takes a backup of everything on the device. Upon backup, sensitive files will also end up on the workstation. So an attacker who gets access to the workstation can read the sensitive information from the stored backup files.

More specifically backed-up information includes purchased music, TV shows, apps, and books; photos and video in the Camera Roll; device settings (for example, Phone Favorites, Wallpaper, and Mail, Contacts, Calendar accounts); app data; Home screen and app organization; Messages (iMessage, SMS, and MMS), ringtones, and more. Media files synced from your computer aren’t backed up, but can be restored by syncing with iTunes.

iCloud automatically backs up the most important data on your device using iOS 5 or later. After you have enabled Backup on your iPhone, iPad, or iPod touch in Settings > iCloud > Backup & Storage, it will run on a daily basis as long as your device is:

  • Connected to the Internet over Wi-Fi
  • Connected to a power source
  • Screen locked

Note:You can also back up manually whenever your device is connected to the Internet over Wi-Fi by choosing Back Up Now from Settings > iCloud > Storage & Backup.

Physical access to the device

People lose their phones and phones get stolen very easily. In both cases, an attacker gets physical access to the device and can read the sensitive information stored on the phone. The passcode set on the device will not protect the information, as it is possible to brute force the iPhone's simple passcode within 20 minutes. To learn more about iPhone passcode bypass, go through the iPhone Forensics article available at http://resources.infosecinstitute.com/iphone-forensics/.

Malware

Leveraging a security weakness in iOS may allow an attacker to design malware which can steal files from the iPhone remotely. Practical attacks are demonstrated by Eric Monti in his presentation on iPhone rootkits.

Directory structure

In iOS, applications are treated as a bundle represented within a directory. The bundle groups all the application resources, binaries and other related files into a directory. On the iPhone, applications are executed within a jailed environment (sandbox or seatbelt) with mobile user privileges. Unlike Android's UID-based segregation, iOS applications all run as the same user. Apple says "The sandbox is a set of fine-grained controls limiting an application's access to files, preferences, network resources, hardware, and so on. Each application has access to the contents of its own sandbox but cannot access other applications' sandboxes. When an application is first installed on a device, the system creates the application's home directory, sets up some key subdirectories, and sets up the security privileges for the sandbox". A sandbox is a restricted environment that prevents applications from accessing unauthorized resources; however, upon iPhone jailbreak, sandbox protection gets disabled.

When an application is installed on the iPhone, it creates a directory with a unique identifier under /var/mobile/Applications directory. Everything that is required for an application to execute will be contained in the created home directory. Typical iPhone application home directory structure is listed below.


Plist files

A property list (Plist) file is a structured, often binary-formatted file which contains the essential configuration of a bundle executable in nested key-value pairs. Plist files are used to store the user preferences and the configuration information of an application. For example, gaming applications usually store game levels and game scores in Plist files. In general, applications store Plist files under the [Application's Home Directory]/documents/preferences folder. A Plist can be either in XML format or in binary format.

As XML files are not the most efficient means of storage, most of the applications use binary formatted Plist files. Binary formatted data stored in the Plist files can be easily viewed or modified using Plist editors (ex: plutil). Plist editors convert the binary formatted data into an XML formatted data, later it can be edited easily. Plist files are primarily designed to store the user preferences & application configuration; however, the applications may use Plist files to store clear text usernames, passwords and session related information.

ICanLocalize

ICanLocalize allows translating plist files online as part of a Software Localization project. A parser will go through the plist file, extract all the texts that need translation and make them available to the translators. Translators will translate only the texts, without worrying about the file format.

When translation is complete, the new plist file is created. It has the exact same structure as the original file and only the right fields translated.

For example, have a look at this plist file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN"
    "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
  <key>Year Of Birth</key>
  <integer>1965</integer>
  <key>Photo</key>
  <data>
    PEKBpYGlmYFCPAfekjf39495265Afgfg0052fj81DG==
  </data>
  <key>Hobby</key>
  <string>Swimming</string>
  <key>Jobs</key>
  <array>
    <string>Software engineer</string>
    <string>Salesperson</string>
  </array>
  </dict>
</plist>

Note: It includes several keys and values. There's a binary Photo entry, an integer field called Year Of Birth and text fields called Hobby and Jobs (which is an array). If we translate this plist manually, we need to carefully watch out for strings we should translate and others that we must not translate.

Of this entire file, we need to translate only the items that appear inside the <string> tags. Other texts must remain unchanged.
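
When reviewing plist files during a penetration test (for example, files pulled off a device), the same parsing can be done with Python's standard plistlib, which reads both the XML and the binary form; a minimal sketch against the example above (the file name is an assumption):

# Minimal sketch: load an XML or binary plist and inspect its values.
import plistlib

with open("example.plist", "rb") as fh:  # assumed file name
    data = plistlib.load(fh)

print(data["Hobby"])          # Swimming
print(data["Jobs"])           # ['Software engineer', 'Salesperson']
print(data["Year Of Birth"])  # 1965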

Translating as plist info

Once you’re logged in to ICanLocalize, click on Translation Projects -> Software Localization and create a new project.

Name it and give a quick description. You don't need to describe the format of plist files; the system knows how to handle it. Instead, explain what the plist file is used for. Tell about your application, target audience and the preferred writing style. Then, upload the plist file. You will see a list of texts which the parser extracted.


Note: Manipulating and altering the plist files can be done through iExplorer. Simply download iExplorer, open the plist files, modify them and then insert them back again.

Keychain Storage 

Keychain is an encrypted container (128 bit AES algorithm) and a centralized SQLite database that holds identities & passwords for multiple applications and network services, with restricted access rights. On the iPhone, keychain SQLite database is used to store the small amounts of sensitive data like usernames, passwords, encryption keys, certificates and private keys. In general, iOS applications store the user’s credentials in the keychain to provide transparent authentication and to not prompt the user every time for login.

iOS applications use the keychain services library/API:

  • SecItemAdd
  • SecItemDelete
  • SecItemCopyMatching & SecItemUpdate

Note: These keywords can be used for source code reviews (identifying the location of the data)

These keywords are used to read and write data to and from the keychain. Developers leverage the keychain services API to have the operating system store sensitive data securely on their behalf, instead of storing it in a property list file or a plaintext configuration file. On the iPhone, the keychain SQLite database file is located at /private/var/Keychains/keychain-2.db.

Keychain contains a number of keychain items and each keychain item will have encrypted data and a set of unencrypted attributes that describes it. Attributes associated with a keychain item depend on the keychain item class (kSecClass). In iOS, keychain items are classified into 5 classes – generic passwords (kSecClassGenericPassword), internet passwords (kSecClassInternetPassword), certificates (kSecClassCertificate), keys (kSecClassKey) and digital identities (kSecClassIdentity, identity=certificate + key). In the iOS keychain, all the keychain items are stored in 4 tables – genp, inet, cert and keys (shown in Figure 1). Genp table contains generic password keychain items, inet table contains Internet password keychain items, and cert & keys tables contain certificates, keys and digital identity keychain items.
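
As a rough illustration, once the database has been copied off a jailbroken device the table layout can be inspected with plain SQLite; the sketch below lists the generic password items (the attribute columns are readable, while the data column remains encrypted on disk):

# Minimal sketch, assuming keychain-2.db has been copied from a jailbroken device.
import sqlite3

conn = sqlite3.connect("keychain-2.db")  # copied from /private/var/Keychains/
for agrp, acct, svce in conn.execute("SELECT agrp, acct, svce FROM genp"):
    # agrp = access group, acct = account, svce = service; data stays encrypted.
    print(agrp, acct, svce)
conn.close()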

Keys hierarchy

Here is the keychain key hierarchy:

  • UID key : hardware key embedded in the application processor AES engine, unique for each device. This key can be used but not read by the CPU. Can be used from bootloader and kernel mode. Can also be used from userland by patching IOAESAccelerator.
  • UIDPlus key : new hardware key referenced by the iOS 5 kernel, does not seem to be available yet, even on newer A5 devices.
  • Key 0x835 : Computed at boot time by the kernel. Only used for keychain encryption in iOS 3 and below. Used as "device key" that protects class keys in iOS 4.
  • key835 = AES(UID, bytes("01010101010101010101010101010101"))
  • Key 0x89B : Computed at boot time by the kernel. Used to encrypt the data partition key stored on Flash memory. Prevents reading the data partition key directly from the NAND chips.
  • key89B = AES(UID, bytes("183e99676bb03c546fa468f51c0cbd49"))
  • EMF key : Data partition encryption key. Also called "media key". Stored encrypted by key 0x89B
  • DKey : NSProtectionNone class key. Used to wrap file keys for "always accessible" files on the data partition in iOS 4. Stored wrapped by key 0x835
  • BAG1 key : System keybag payload key (+initialization vector). Stored unencrypted in effaceable area.
  • Passcode key : Computed from user passcode or escrow keybag BagKey using Apple custom derivation function. Used to unwrap class keys from system/escrow keybags. Erased from memory as soon as the keybag keys are unwrapped.
  • Filesystem key (f65dae950e906c42b254cc58fc78eece) : used to encrypt the partition table and system partition (referred to as "NAND key" on the diagram)
  • Metadata key (92a742ab08c969bf006c9412d3cc79a5) : encrypts NAND metadata




iOS 3 and below

16-byte IV - AES128(key835, IV, data + SHA1(data))

iOS 4

version (0)|protection_class - AESWRAP(class_key, item_key) (40 bytes)|AES256(item_key, data)

iOS 5

version (2) protection_class len_wrapped_key AESWRAP(class_key, item_key) (len_wrapped_key) AES256_GCM(item_key, data) integrity_tag (16 bytes)

Keychain tools


  1. https://github.com/ptoomey3/Keychain-Dumper/blob/master/main.m
  2. https://code.google.com/p/iphone-dataprotection/downloads/detail?name=keychain_dump


Notes

In the recent versions of iOS (4 & 5), by default, the keychain items are stored using the kSecAttrAccessibleWhenUnlocked data protection accessibility constant. However the data protection is effective only with a device passcode, which implies that sensitive data stored in the keychain is secure only when a user sets a complex passcode for the device. But iOS applications cannot enforce the user to set a device passcode. So if iOS applications rely only on the Apple provided security they can be broken if iOS security is broken.

Epilogue

iOS application security can be improved by understanding the shortcomings of the current implementation and writing your own implementation that works better. In the case of the keychain, iOS application security can be improved by using custom encryption (using the built-in crypto API) along with the data protection API when adding keychain entries. If custom encryption is implemented, it is recommended not to store the encryption key on the device.

References:

  1. http://appadvice.com/appnn/tag/privacy-issues
  2. http://resources.infosecinstitute.com/pentesting-iphone-applications-2/
  3. http://cryptocomb.org/Iphone%20UDIDS.pdf
  4. http://en.wikipedia.org/wiki/OpenFeint
  5. http://resources.infosecinstitute.com/iphone-forensics/
  6. http://support.apple.com/kb/HT1766
  7. http://stackoverflow.com/questions/6697247/how-to-create-plist-files-programmatically-in-iphone
  8. http://www.icanlocalize.com/site/tutorials/how-to-translate-plist-files/
  9. http://www.macroplant.com/iexplorer/
  10. http://resources.infosecinstitute.com/iphone-penetration-testing-3/
  11. http://sit.sit.fraunhofer.de/studies/en/sc-iphone-passwords-faq.pdf

22/08/2012

The Teenage Mutant Ninja Turtles project....

Intro
 
Elusive Thoughts is proud to present to you the Teenage Mutant Ninja Turtles project...


What is Teenage Mutant Ninja Turtles?

The Teenage Mutant Ninja Turtles project is three things:
  1. A Web Application payload database (heavily based on fuzzdb project for now).
  2. A Web Application error database.
  3. A Web Application payload mutator.
Nowadays all high profile sites in the financial and telecommunication sectors use filters to filter out all types of attacks such as SQL injection, XSS, XXE, HTTP header injection, etc. In this particular project I am going to provide you with a tool to generate obfuscated fuzzing injection attacks in order to bypass badly implemented Web Application injection filters (e.g. SQL injection filters, XSS filters, etc.).

When you test a Web Application all you need is a fuzzer and ammunition:

"I saw clearly that war was upon us when I learned that my young men had been secretly buying ammunition."

Chief Joseph

Ammunition is what you use for fuzzing and the weapon is the fuzzer itself. The project called teenage-mutant-ninja-turtles is an open source payload mutator, nothing more, nothing less. With teenage-mutant-ninja-turtles you will be able to generate obfuscated payloads for testing all sorts of attacks, such as XSS, SQL injection, etc. The project is at version 1.1 and currently supports only SQL injection fuzzing. Later on I will add support for fuzzdb and all types of attacks. Maybe later it will become a complete Web Application scanner, who knows. If you are interested, please contact me to participate.

Download link: http://code.google.com/p/teenage-mutant-ninja-turtles/downloads/list

The Teenage Mutant Ninja Turtles in action

The following screenshot shows the tool banner (yes it has a banner!!):


The Teenage Mutant Ninja Turtles project is a Web application payload database for performing black box Web Application penetration tests (it also supports banner displaying!!!); more specifically it is:
  1. A collection of known attack patterns focused on Web Application input validation attacks (e.g. SQL injections, XSS attacks, etc.)
  2. A collection of error messages produced by malicious and malformed user inputs, which you can use with Burp Intruder or other grep-like utilities to identify and verify vulnerabilities when fuzzing.
  3. An easy to use Python script that helps you obfuscate payloads for bypassing custom Web Application filters.
It is designed to be used by people with a wide range of security experience and as such is ideal for developers and functional testers who are new to penetration testing, as well as being a useful addition to an experienced pen tester's toolkit.

 The Teenage Mutant Ninja Turtles features

Currently Teenage Mutant Ninja Turtles (tmnt) supports the following features (a small sketch of these mutations follows the list):
  1. Generic payload URL encoding.
  2. Generic payload Base64 encoding.
  3. SQL keyword case variation (e.g. converts SELECT to SeLeCt, etc.).
  4. Generic payload de-duplication (e.g. removing duplicate payload lines).
  5. SQL Injection prefix adder (e.g. adding EXEC to the beginning of the payload, etc.).
  6. SQL Injection postfix adder (e.g. adding ); -- to the end of the payload, etc.).
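
The sketch below shows roughly what these mutations look like in plain Python (the payload is just an example); the tool automates the same transformations over whole payload lists.

# Minimal sketch of the mutations above: URL encoding, Base64 encoding and
# SQL keyword case variation applied to an example payload.
import base64
import urllib.parse

def case_variation(payload):
    # SeLeCt-style alternating case for every character.
    return "".join(c.upper() if i % 2 == 0 else c.lower()
                   for i, c in enumerate(payload))

payload = "' UNION SELECT username, password FROM users --"
print(urllib.parse.quote(payload))
print(base64.b64encode(payload.encode()).decode())
print(case_variation(payload))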
The following screenshot shows the help message of the tool:


Epilogue 

There are more features to come...





  


29/07/2012

Hacking the Session for fun and profit

Intro

This post describes, from the security perspective, how the life cycle of a Web Application session should look. By life cycle I mean all the stages a session goes through and the steps to be taken in order to properly test the session. Very recently I had a discussion about session management with a colleague of mine and he seemed confused about what session management is and how it should be handled. Now, if you look up the OWASP session management cheat sheet you are going to find lots of interesting information overlapping with the information presented here, but there is no complete and easy-to-understand guide on the internet about how to test a session.

What is a Session and how should it behave

A web application session acts as a "representative" of the user's credentials for as long as the user is logged in (well, not always). In simpler words, after a successful log-in the user credentials should be translated into one or more cryptographically secure variables that (see also the sketch after the note below):
  1. Should not leak (e.g. no session passed in web application URLs) while valid.
  2. Should not be predictable (that is what cryptographically secure means).
  3. Should expire under certain conditions (e.g. user log out).
  4. Should not be recyclable (e.g. do not save in a database). 
  5. Should be masked (e.g. do not use the default .NET framework name).
  6. Should be tamper-proof (e.g. run regular integrity checks).
  7. Should not be cloned (e.g. concurrent log-ins should not be supported).
  8. Should be audited (e.g. track the user id and session pair and log events).
  9. Should have a 1-1 relationship with the username or user id (e.g. one session variable should translate to one username).
Note: If one or more of the conditions described above is not met then your session is open to be attacked.
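
As a minimal sketch of points 2 and 5 (unpredictability and masking), the standard secrets module gives a token with plenty of entropy, paired here with an assumed non-default cookie name:

# Minimal sketch: cryptographically secure session identifier with a custom name.
import secrets

SESSION_COOKIE_NAME = "app_token"       # assumed custom name, not the framework default
session_id = secrets.token_urlsafe(32)  # 256 bits of randomness
print(f"{SESSION_COOKIE_NAME}={session_id}")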

Session life cycle conceptual representation

The following diagram shows the session life cycle and how it should look in my view. The purple color shows what should be checked, the green shows how to perform hardening, and everything else in the diagram is about attacks that can be performed at that specific session stage. The attacks in red are the most critical and should be characterized as high impact, the attacks in yellow should be characterized as medium impact, and the attacks in green have an information leakage impact on the web application.
  
  

Note: See how the session attacks performed after session authorization are all red. Note also that almost all attacks concerning the session are red.

The following diagram explains the signs used above:


Note: The image above is self explanatory.

The Session Checklist

The following part of the article attempts to specify the exact characteristics a session should have and what you should check in more detail:
  • Session Properties:
  1. Session ID Name Fingerprinting
    1. Change Session Default Name
    2. Use additional proper custom secure tokens
  2. Session ID Length
    1. Choose a big length (e.g. the session ID length must be at least 128 bits / 16 bytes)
  3. Session ID Entropy
    1. Use a true random number generator for the session
    2. Refresh after login
  4. Session ID Content (or Value)
    1. Don't save critical Web Application Data in session
    2. Use meaningless data in the session 
  • Session Attributes:
  1. Secure Attribute
    1. Set secure flag (or simply enforce SSLv3/TLSv1)
  2. HttpOnly Attribute
    1. Set HttpOnly flag
    2. Disable TRACE http method
  3. Domain and Path Attributes
    1. Restrict Path
    2. Do not include cross domain scripts from non-trusted third parties 
  4. Expire and Max-Age Attributes 
    1. Make sure cookies expire after log-off 
  • Session Input Validation Defense Mechanisms:
  1. Manage Session ID as Any Other User Input
  2. Renew the Session ID After Any Privilege Level Change
  3. Renew Session after authorizing the session
  • Session Leakage
  1. Do not pass session into Http referrer header field. 
  2. Do not pass session into web application urls.
  • Session Termination:
  1. Idle Timeout (e.g. after 15 minutes of user inactivity)
  2. Absolute Timeout (e.g. force expiration after 3 hours)
  3. Manual Session Expiration (e.g. increase user security awareness by giving them de-validation options)
  4. Logout Button 
  5. Force Session Logout On Web Browser Window Close Events
  6. Automatic Client Logout
  • Session Attacks Detection:
  1. Session ID Guessing and Brute Force Detection
  2. Detecting Session ID Anomalies
  3. Binding the Session ID to Other User Properties 
  • Session Event Auditing:
  1. Monitor Session Creation
  2. Monitor Destruction of Session IDs 
  3. Monitor simultaneous Session Log-ons
  4. Export session auditing into syslog format and feed it to an event correlation engine
  5. Implement custom e-mail alerts (e.g. for multiple access denied events). 
Cookie/Session token reverse engineering

Questions to answer about cookie reverse engineering:

  1. Unpredictability: a cookie must contain some amount of hard-to-guess data. The harder it is to forge a valid cookie, the harder it is to break into a legitimate user's session. If an attacker can guess the cookie used in an active session of a legitimate user, he/she will be able to fully impersonate that user (session hijacking). In order to make a cookie unpredictable, random values and/or cryptography can be used.
  2. Tamper resistance: a cookie must resist malicious attempts of modification. If we receive a cookie like IsAdmin=No, it is trivial to modify it to get administrative rights, unless the application performs a double check (for instance, appending to the cookie an encrypted hash of its value)
  3. Expiration: a critical cookie must be valid only for an appropriate period of time and must be deleted from disk/memory afterwards, in order to avoid the risk of being replayed. This does not apply to cookies that store non-critical data that needs to be remembered across sessions (e.g., site look-and-feel).
  4. "Secure" flag: a cookie whose value is critical for the integrity of the session should have this flag enabled in order to allow its transmission only over an encrypted channel, to deter eavesdropping (a small parsing sketch follows below).
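
A quick way to check points 3 and 4 during a review is to parse the Set-Cookie header with the standard library; a minimal sketch (the header value is an example):

# Minimal sketch: inspect the Secure, HttpOnly and expiration attributes of a cookie.
from http.cookies import SimpleCookie

header = "SESSIONID=abc123; Path=/; Secure; HttpOnly; Max-Age=900"  # example header
jar = SimpleCookie()
jar.load(header)

for name, morsel in jar.items():
    print(name,
          "Secure" if morsel["secure"] else "NOT Secure",
          "HttpOnly" if morsel["httponly"] else "NOT HttpOnly",
          "Max-Age:", morsel["max-age"] or "session")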
Epilogue 

The diagram shown above should be enough to give you good information and a new perspective on how sessions should be handled. Hope this helped...

Reference:

https://www.owasp.org/index.php/Session_Management_Cheat_Sheet
https://www.owasp.org/index.php/Testing_for_Session_Management_Schema_(OWASP-SM-001)

23/04/2012

Defending against XSS with .NET

Intro 

This is an older post from my previous blog that now does not exist. 

Use the HttpOnly Cookie Option

Internet Explorer 6 Service Pack 1 and later supports the HttpOnly cookie attribute, which prevents client-side scripts from accessing a cookie through the DOM object document.cookie. If a script uses that particular DOM object, it will get back an empty string. The cookie is still sent to the server whenever the user browses to a Web site in the current domain. Now, if you use .NET to set the HttpOnly attribute to true, what practically happens is that the HTTP response header field Set-Cookie gets one more attribute (on top of the ones it is already supposed to have) at the end of the line, called HttpOnly. It looks something like this:

Set-Cookie: USER=123; expires=Wednesday, 09-Nov-99 23:12:40 GMT; HttpOnly


Now, if the Web browser is IE 6 with SP1 or above, it won't allow the JavaScript DOM object to access the cookie, but if any other browser is used then it does not provide any protection. The thing is that Set-Cookie is actually used when the web server decides for the first time to track your activity as a web user, meaning for example that after a successful authentication your cookie is probably going to be used as a security token. The following picture shows how someone can use social engineering to make you execute malicious JavaScript and steal your cookie [5].


Picture : HttpOnly option in action [1].

Note: Web browsers that do not support the HttpOnly cookie attribute either ignore the cookie or ignore the attribute, which means that it is still subject to cross-site scripting attacks [5].

It is important for the developer to understand that this property is already set by default for authentication and session cookies in ASP.NET 2.0, but not for manually issued cookies. Therefore, you should consider enabling this option for your manually issued cookies as well. The option can be enabled in web.config by modifying the httpCookies element as in the example below [4]: 

<httpCookies httpOnlyCookies="true" /> 

The System.Net.Cookie class

The System.Net.Cookie class in Microsoft .NET Framework version 2.0 supports the HttpOnly property. The HttpOnly property is always set to true when Forms authentication is used. Earlier versions of the .NET Framework (versions 1.0 and 1.1) require you to add code to the Application_EndRequest event handler in your application's Global.asax file to explicitly set the HttpOnly attribute. The code that actually enables you to use an HttpOnly cookie is:

Visual Basic (Usage):

Dim instance As Cookie
Dim value As Boolean

value = instance.HttpOnly
instance.HttpOnly = value

Code Example: HttpOnly option set using code[3].

In ASP.NET 1.1 the System.Net.Cookie class does not support the HttpOnly property. Therefore, to add an HttpOnly attribute to the cookie you must add the following code to your application’s Application_EndRequest event handler in Global.asax [4]:

protected void Application_EndRequest(Object sender, EventArgs e)
{
string authCookie = FormsAuthentication.FormsCookieName;

      foreach (string sCookie in Response.Cookies)
      {
            if (sCookie.Equals(authCookie))
            {
                  Response.Cookies[sCookie].Path += ";HttpOnly";
            }
      }
}

Code Example: HttpOnly option set in Global.asax [4]. 

Do Not Rely only in the HttpOnly flag for XSS issues

The HttpOnly protection mechanism is useful only in cases where the attacker is not skillful enough to use other means of attacking the remote application and subsequently the user. Although session hijacking is still considered the only thing you can do when you have XSS, this is far from what is actually possible. The truth is that session hijacking is probably one of the last things an attacker will do, for a number of reasons. The most obvious reason is that XSS attacks, although they can be targeted, are not instant, unlike traditional overrun attacks where the attacker points the exploit at a remote location and gains access right away. For an XSS attack to be successful, a certain period of time is sometimes required. It is highly unlikely that the attacker will wait all that time just to get a session which could become invalid a couple of moments later when the user clicks the logout button. Remember, session hijacking is possible because concurrent sessions are possible [2].

The most effective way to attack when you have an XSS hole is to launch the attack in place, at the moment the payload is evaluated. If the attacker needs to transfer funds or obtain sensitive information, they will most probably use the XMLHttpRequest object in the background to automate the entire process. Once the operation is completed, the attacker could leave the user to continue with their normal work, or maybe gain full control of the account by resetting the password and destroying the session by performing a logout operation [2]. 

What to do besides using HttpOnly flag (which is a lot)

Evaluate your specific situation to determine which techniques will work best for you. It is important to note that in all techniques you are validating data that you receive as input, not your trusted script (you must check every single field). Essentially, prevention means that you follow good coding practice by running sanity checks on the input to your routines [6].

The following list outlines the general approaches to prevent cross-site scripting attacks:
  1. Encode output based on input parameters
  2. Filter input parameters for special characters.
  3. Filter output based on input parameters for special characters.
When you filter or encode, you must specify a character set for your Web pages to ensure that your filter is checking for the appropriate special characters. The data that is inserted into your Web pages should filter out byte sequences that are considered special based on the specific character set. A popular charset is ISO 8859-1, which was the default in early versions of HTML and HTTP. You must take into account localization issues when you change these parameters [6].


Code Example: HtmlEncode used to sanitize web fields [8].

Anti-XSS tools for .NET

So what was wrong with using System.Web.HttpUtility.HtmlEncode? The problem with the HttpUtility class is that it was based on a deny-list (i.e. blacklisting) approach (the downfall of which I mentioned in an earlier blog) versus an accept-only approach. As a result of the deny-list approach, HttpUtility.HtmlEncode is only good against the following characters:

1. <
2. >
3. &
4. “
5. Characters with values 160-255 inclusive

The Microsoft Anti-XSS tool follows an accept-only approach (i.e. a whitelisting approach) in which the tool looks for a finite set of valid input and everything else is considered invalid. This approach provides more comprehensive protection against XSS and reduces the ability to trick HttpUtility.HtmlEncode with canonical representation attacks [7].

You will find that the Anti-XSS tool works much like HttpUtility.HtmlEncode:

AntiXSSLibrary.HtmlEncode(string)

AntiXSSLibrary.URLEncode(string)


Now all characters will be encoded except for [7]:

1. a-z (lower case)
2. A-Z (upper case)
3. 0-9 (Numeric values)
4. , (Comma)
5. . (Period)
6. _ (Underscore)
7. - (Dash)
8. (Space), except for URLEncode 

Do Not Rely Only on Input Filtering, Also Filter on Output

A common practice is for code to attempt to sanitize input by filtering out known unsafe characters (i.e. blacklisting known malicious input). Do not rely on this approach, because malicious users can usually find an alternative means of bypassing your validation. At the time of writing only IE supports HttpOnly, but there is a Firefox plugin called HttpOnly5.0. It provides support for the HttpOnly option in Firefox by encrypting cookies marked as HttpOnly on the browser side, so that JavaScript cannot read them. HttpOnly makes XSS exploitation much harder to achieve, and Firefox 3 is probably going to support the HttpOnly option...

Reference:
  1. http://msdn2.microsoft.com/en-us/library/ms533046.aspx
  2. http://www.gnucitizen.org/blog/why-httponly-wont-protect-you/
  3. http://msdn.microsoft.com/en-us/library/system.net.cookie.httponly(VS.80).aspx
  4. http://blogs.msdn.com/dansellers/archive/2006/03/13/550947.aspx
  5. http://www.microsoft.com/technet/archive/security/news/crssite.mspx?mfr=true
  6. http://support.microsoft.com/default.aspx?scid=kb;en-us;252985&sd=tech
  7. http://blogs.msdn.com/dansellers/archive/2006/02/23/538187.aspx
  8. http://www.java2s.com/Code/ASP/Server/ServerHtmlEncodeVBnet.htm