SSRFing External Service Interaction and Out of Band Resource Load (Hacker's Edition)

In the recent past we encountered two relatively new types of attacks: External Service Interaction (ESI) and Out-of-Band Resource Load (OfBRL).
  1. An ESI [1] occurs when a web application allows interaction with an arbitrary external service. 
  2. An OfBRL [6] arises when it is possible to induce an application to fetch content from an arbitrary external location and incorporate that content into the application's own response(s). 

The Problem with OfBRL

The ability to request and retrieve web content from other systems can allow the application server to be used as a two-way attack proxy (when OfBRL is applicable) or a one-way proxy (when ESI is applicable). By submitting suitable payloads, an attacker can cause the application server to attack, or retrieve content from, other systems that it can interact with. This may include public third-party systems, internal systems within the same organization, or services available on the local loopback adapter of the application server itself. Depending on the network architecture, this may expose highly vulnerable internal services that are not otherwise accessible to external attackers.
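To make the anti-pattern concrete, here is a minimal sketch of an OfBRL-style handler (the function and its naming are my own, not from any particular application): it fetches whatever URL the user supplies and embeds the body in the app's own response, with no allowlist check at all.

```python
import pathlib
import tempfile
from urllib.request import urlopen

def fetch_and_embed(user_supplied_url: str) -> str:
    """Hypothetical vulnerable handler: fetch an attacker-controlled URL
    and embed the body in the application's own response (the OfBRL
    anti-pattern -- no validation of scheme, host, or port)."""
    with urlopen(user_supplied_url) as resp:       # attacker controls the target
        body = resp.read().decode("utf-8", errors="replace")
    return "<div class='preview'>" + body + "</div>"

# Because any scheme urlopen understands is accepted, even a file:// URL
# leaks local content straight into the page:
secret = pathlib.Path(tempfile.mkdtemp()) / "secret.txt"
secret.write_text("internal data")
page = fetch_and_embed(secret.as_uri())
```

The file:// demo stands in for the loopback and internal services mentioned above: anything the server itself can reach, the attacker can now reach through it.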

The Problem with ESI

External service interaction arises when it is possible to induce an application to interact with an arbitrary external service, such as a web or mail server. The ability to trigger arbitrary external service interactions does not constitute a vulnerability in its own right, and in some cases might even be the intended behavior of the application. However, in many cases, it can indicate a vulnerability with serious consequences.

The Verification

We do not have ESI or OfBRL when:
  1. The source IP seen in the Collaborator is our browser's IP 
  2. There is a 302 redirect from our host to the Collaborator (i.e. our own source IP appears in the Collaborator logs)
Below we can see the original configuration in the repeater:

Below we can see the modified configuration in the repeater for the test:

The RFC(s)

This is usually a platform issue and not an application one. In some scenarios, for example a CGI application, the HTTP headers are handled by the application itself (i.e. the app dynamically manipulates the HTTP headers to run properly). This means that headers such as Location and Host are handled by the app, and therefore a vulnerability might exist. It is recommended to run HTTP header integrity checks when you own a critical application that is running on your behalf.

For more information on the subject read RFC 2616 [2], where the use of these headers is explained in detail. The Host request-header field specifies the Internet host and port number of the resource being requested, as obtained from the original URI given by the user or referring resource (generally an HTTP URL). The Host field value MUST represent the naming authority of the origin server or gateway given by the original URL. This allows the origin server or gateway to differentiate between internally-ambiguous URLs, such as the root "/" URL of a server hosting multiple host names on a single IP address.

When TLS is enforced throughout the whole application (even for the root path /), an ESI or OfBRL via Host header injection becomes much harder, because the server can validate the name the client asks for: if the injected hostname does not match a configured virtual host or certificate, the handshake or the request fails. More specifically, we are going to get an SNI mismatch error.

SNI prevents what's known as a "common name mismatch error": when a client (user) device reaches the IP address for a vulnerable app, but the name on the SSL/TLS certificate doesn't match the name of the website. SNI was added to the IETF's Internet RFCs in June 2003 through RFC 3546, Transport Layer Security (TLS) Extensions. The latest version of the standard is RFC 6066.

The option to trigger an arbitrary external service interaction does not constitute a vulnerability in its own right, and in some cases it might be the intended behavior of the application. But we as hackers want to exploit it, correct? So what can we do with an ESI or an Out-of-Band Resource Load?

The Infrastructure 

Well, it depends on the overall setup! The juiciest scenarios are the following:
  1. The application is behind a WAF (with restrictive ACLs) 
  2. The application is behind a UTM (with restrictive ACLs) 
  3. The server is running multiple applications in a virtual environment 
  4. The application is running behind a NAT. 
In order to perform the hack we simply have to inject our host value into the HTTP Host header (hostname including port). Below is a simple diagram explaining the vulnerability.

Below we can see the HTTP requests with injected Host header:

Original request:

GET / HTTP/1.1
Host: our_vulnerableapp.com
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: close

Malicious requests:

GET / HTTP/1.1
Host: malicious.com
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: close


GET / HTTP/1.1
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: close
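The malicious requests above can also be produced outside a proxy. Below is a minimal raw-socket sketch (the helper names, and the assumption of plaintext HTTP, are my own):

```python
import socket

def forge_request(injected_host: str) -> bytes:
    """Build the malicious request shown above, with an arbitrary value
    (e.g. a Collaborator domain) dropped into the Host header."""
    return (
        "GET / HTTP/1.1\r\n"
        f"Host: {injected_host}\r\n"
        "Pragma: no-cache\r\n"
        "Cache-Control: no-cache, no-transform\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode()

def send_raw(target_ip: str, target_port: int, payload: bytes) -> bytes:
    """Deliver the forged bytes to the real target over a plain socket."""
    with socket.create_connection((target_ip, target_port), timeout=10) as s:
        s.sendall(payload)
        chunks = []
        while chunk := s.recv(4096):
            chunks.append(chunk)
        return b"".join(chunks)

req = forge_request("malicious.com")
```

The point of going to raw sockets is that nothing normalizes or "fixes" the Host header on the way out, so the server sees exactly the injected value.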

If the application is vulnerable to OfBRL, then the reply is going to be processed by the vulnerable application, bounce back to the sender (aka the hacker) and potentially load in the context of the application. If the reply does not come back to the sender (aka the hacker) then we might still have an ESI, and further investigation is required.
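That triage logic can be captured in a tiny helper (the function and its labels are my own shorthand for the distinction just described):

```python
def classify(collaborator_hit: bool, content_reflected: bool) -> str:
    """Rough triage of an out-of-band test result:
    - interaction seen AND fetched content appears in the app's response -> OfBRL
    - interaction seen but nothing comes back to us                      -> ESI
    - no interaction at all                                              -> neither"""
    if collaborator_hit and content_reflected:
        return "OfBRL"
    if collaborator_hit:
        return "ESI (investigate further)"
    return "not vulnerable"
```
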

Out-of-band resource load:


Below we can see the configuration in the intruder:

We are simply using the sniper mode in the intruder, and can do the following:
  1. Rotate through different ports, using the vulnapp.com domain name.
  2. Rotate through different ports, using the vulnapp.com external IP.
  3. Rotate through different ports, using the vulnapp.com internal IP, if applicable.
  4. Rotate through different internal IP(s) in the same domain, if applicable.
  5. Rotate through different protocols (this might not work, BTW).
  6. Brute force directories on identified DMZ hosts.
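The rotation steps above amount to generating a payload list for the Host header. Here is a sketch of how one might produce such a list outside of Intruder (the port list and helper names are my own choices; vulnapp.com is the article's example name):

```python
from itertools import product

# A small set of commonly exposed HTTP(S) ports -- my own selection.
COMMON_PORTS = [80, 443, 3000, 5000, 8000, 8080, 8443, 9090]

def host_header_payloads(names_or_ips, ports=COMMON_PORTS):
    """Yield 'host:port' strings to drop into the Host header,
    mimicking rotation steps 1-4 above."""
    for name, port in product(names_or_ips, ports):
        yield f"{name}:{port}"

payloads = list(host_header_payloads(["vulnapp.com", "10.0.0.5"], [80, 443, 8080]))
```

Each generated value would be injected into the Host header of an otherwise identical request, exactly as the sniper position markers do.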

The Test

Burp Professional edition has a feature named Collaborator. Burp Collaborator is a network service that Burp Suite uses to help discover vulnerabilities such as ESI and OfBRL [3]. A typical example would be to use Burp Collaborator to test whether an ESI exists. Below we describe such an interaction.

Original request:

GET / HTTP/1.1
Host: our_vulnerableapp.com
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: close

Burp Collaborator request:

GET / HTTP/1.1
Host: edgfsdg2zjqjx5dwcbnngxm62pwykabg24r.burpcollaborator.net
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: keep-alive

Burp Collaborator response:

HTTP/1.1 200 OK
Server: Burp Collaborator https://burpcollaborator.net/
X-Collaborator-Version: 4
Content-Type: text/html
Content-Length: 53


The Post Exploitation 

OK, now as hacker artists we are going to think about how to exploit this. The scenarios are: [7][8]

  1. Attempt to load the local admin panels. 
  2. Attempt to load the admin panels of surrounding applications. 
  3. Attempt to interact with other services in the DMZ. 
  4. Attempt to port scan the localhost. 
  5. Attempt to port scan the DMZ hosts. 
  6. Use it to exploit the IP trust and run a DoS attack against other systems. 
A good option for that would be Burp Intruder. Burp Intruder is a tool for automating customized attacks against web applications. It is extremely powerful and configurable, and can be used to perform a huge range of tasks, from simple brute-force guessing of web directories through to active exploitation of complex blind SQL injection vulnerabilities.

Burp Intruder configuration for scanning surrounding hosts:

GET / HTTP/1.1
Host: 192.168.1.§§
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: close

Burp Intruder configuration for port scanning surrounding hosts:

GET / HTTP/1.1
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: close

Burp Intruder configuration for port scanning localhost:

GET / HTTP/1.1
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: close

What Can You Do

The big hack analysis; this vulnerability can be used in the following ways:
  1. Bypass restrictive UTM ACL(s) 
  2. Bypass restrictive WAF rule(s) 
  3. Bypass restrictive FW ACL(s) 
  4. Perform cache poisoning
  5. Fingerprint internal infrastructure
  6. Perform DoS exploiting the IP trust
  7. Exploit applications hosted on the same machine (aka multiple app loads)
Below we can see a schematic analysis of bypassing ACL(s):

The impact of a maliciously constructed response can be magnified if it is cached either by a web cache used by multiple users or even the browser cache of a single user. If a response is cached in a shared web cache, such as those commonly found in proxy servers, then all users of that cache will continue to receive the malicious content until the cache entry is purged. Similarly, if the response is cached in the browser of an individual user, then that user will continue to receive the malicious content until the cache entry is purged, although only the user of the local browser instance will be affected. [5]

Below follows the schematic analysis:

What Can't You Do
You cannot perform XSS or CSRF exploiting this vulnerability, unless certain conditions apply.

The fix

If the ability to trigger arbitrary ESI or OfBRL is not intended behavior, then you should implement a whitelist of permitted URLs, and block requests to URLs that do not appear on this whitelist. [6] Also, running host integrity checks is recommended. [6]

We should review the purpose and intended use of the relevant application functionality, and determine whether the ability to trigger arbitrary external service interactions is intended behavior. If so, you should be aware of the types of attacks that can be performed via this behavior and take appropriate measures. These measures might include blocking network access from the application server to other internal systems, and hardening the application server itself to remove any services available on the local loopback adapter. [6]

More specifically we can:

  1. Apply egress filtering on the DMZ
  2. Apply egress filtering on the host
  3. Apply whitelist IP restrictions in the app
  4. Apply blacklist restrictions in the app (although not recommended)
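As a sketch of the in-app whitelist idea, a Host header allowlist check might look like the following (the hostnames and helper name are illustrative assumptions; a real deployment must also handle IPv6 literals and internationalized names):

```python
# Example names only -- replace with the hosts your app actually serves.
ALLOWED_HOSTS = {"our_vulnerableapp.com", "www.our_vulnerableapp.com"}

def host_is_allowed(host_header: str) -> bool:
    """Whitelist check for an incoming Host header: strip any :port
    suffix, normalize case, and accept only exact allowlist matches."""
    hostname = host_header.strip().rsplit(":", 1)[0].lower()
    return hostname in ALLOWED_HOSTS
```

Requests failing this check should be rejected outright (e.g. with a 400) before any header-driven logic such as redirects or resource loads runs.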


Web DDoSPedia a million requests

Web Application Denial of Service Next Level

In this tutorial we are going to talk about how to cause maximum downtime (including triggering operational recovery processes) in anything that uses the word Web; this is also known as a Denial of Service attack. Using this knowledge for malicious purposes is not something I recommend or approve of, and I have zero accountability for how you use it. This is the reason I am also providing countermeasures at the end of the post.

What Is The Landscape

In the past we have seen many Denial of Service attacks, but most of them were not very sophisticated. A very good example would be the Low Orbit Ion Cannon (LOIC). LOIC performs a DoS attack (or when used by multiple individuals, a DDoS attack) on a target site by flooding the server with TCP or UDP packets with the intention of disrupting the service of a particular host. People have used LOIC to join voluntary botnets.[2]

All these attacks, as stated in a previous post, do not really take advantage of the 7th-layer complexity of the Web and therefore are not as effective as they could be. A very good post exists on the Cloudflare blog named Famous DDoS Attacks [3].

A few of the famous attacks are:
  • The 2016 Dyn attack
  • The 2015 GitHub attack
  • The 2013 Spamhaus attack
  • The 2000 Mafiaboy attack
  • The 2007 Estonia attack
Improving DoS and DDoS attacks

In order to improve, or understand better what is possible while conducting a DoS attack, we have to think like a web server. Be a web server. Breathe like a web server!!

Well, what does a server breathe? But of course HTTP. So what if we make the web server start breathing a lot of HTTP/S? That would be amazing.

This is how we can overdose a web server with HTTP:
  1. HTTP connection reuse
  2. HTTP pipelining
  3. Single SSL/TLS handshake  
But let's go a step further and expand on that: what else can we do to increase the impact? But of course profile the server and adjust the traffic to something that can be processed, e.g. abuse vulnerable file upload functionality, SQLi attacks with DROP statements, etc.

HTTP connection reuse

HTTP persistent connection, also called HTTP keep-alive, or HTTP connection reuse, is the idea of using a single TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new connection for every single request/response pair.

The newer HTTP/2 protocol uses the same idea and takes it further to allow multiple concurrent requests/responses to be multiplexed over a single connection.

In HTTP/1.0, connections are not considered persistent unless a keep-alive header is included, although there is no official specification for how keep-alive operates. It was, in essence, added to an existing protocol. If the client supports keep-alive, it adds an additional header to the request:
Connection: keep-alive
Then, when the server receives this request and generates a response, it also adds a header to the response:
Connection: keep-alive
Following this, the connection is not dropped, but is instead kept open. When the client sends another request, it uses the same connection. This will continue until either the client or the server decides that the conversation is over, and one of them drops the connection.

In HTTP/1.1, all connections are considered persistent unless declared otherwise. HTTP persistent connections do not use separate keep-alive messages; they just allow multiple requests to use a single connection.

If the client does not close the connection when all of the data it needs has been received, the resources needed to keep the connection open on the server will be unavailable for other clients. How much this affects the server's availability and how long the resources are unavailable depend on the server's architecture and configuration.
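To see connection reuse in action without touching anyone else's server, the sketch below spins up a throwaway local HTTP/1.1 server and sends three requests down one socket (stdlib only; all names are my own):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class OkHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"          # HTTP/1.1 => persistent by default
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):          # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), OkHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One TCP connection, several request/response pairs over it.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
statuses, sockets = [], []
for _ in range(3):
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()                            # drain the body before reusing
    statuses.append(resp.status)
    sockets.append(conn.sock)              # same object iff the socket was reused
conn.close()
server.shutdown()
```

Note that the body must be fully read before the next request; http.client will otherwise refuse to reuse the connection.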

Yes dear reader I know what are you thinking, how can you the humble hacker, the humble whitehat reader can use this knowledge to bring down your home web server for fun? Well there are good news. 

In Python, the urllib3 library provides various helpers for instantiating HTTP keep-alive connections, such as connection pools.

Here is a code chunk to look through:

from urllib3 import HTTPConnectionPool, make_headers

# make_headers builds a dict of request headers; keep_alive=True adds
# 'connection: keep-alive' to it.
headers = make_headers(keep_alive=True, user_agent="python-urllib3/0.6")
# A pool keeps sockets to the same host open and reuses them across requests.
pool = HTTPConnectionPool("example.com", maxsize=1, headers=headers)

The parameters of make_headers are:
  • keep_alive – If True, adds 'connection: keep-alive' header.
  • accept_encoding – Can be a boolean, list, or string. True translates to 'gzip,deflate'. A list will get joined by comma. A string will be used as provided.
  • user_agent – String representing the user-agent you want, such as "python-urllib3/0.6"
  • basic_auth – Colon-separated username:password string for 'authorization: basic ...' auth header.
Note: If you are a proxy person, you can use the Match and Replace functionality in Burp Suite Pro to add or replace the keep-alive header. But then your client (aka the browser) would have to know how to handle the received content. Better to write a Python template to handle the interaction.

HTTP Pipelining

HTTP pipelining is a technique in which multiple HTTP requests are sent on a single TCP (transmission control protocol) connection without waiting for the corresponding responses. The technique was superseded by multiplexing via HTTP/2, which is supported by most modern browsers.

See the following diagram for pipelining:

HTTP pipelining requires both the client and the server to support it. HTTP/1.1-conforming servers are required to support pipelining (pipelining was introduced in HTTP/1.1 and was not present in HTTP/1.0). This does not mean that servers are required to pipeline responses, but that they are required not to fail if a client chooses to pipeline requests. Interesting behavior!

Note: Most servers execute requests from pipelining clients in the same fashion they would from non-pipelining clients. They don't try to optimize it.
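Pipelining itself needs nothing more than a socket. The sketch below writes three requests back-to-back to a throwaway local HTTP/1.1 server before reading any reply (stdlib only; names are mine):

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class OkHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"      # HTTP/1.0 would close after one reply
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):      # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), OkHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Three requests written back-to-back, before any response is read.
request = b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n"
with socket.create_connection(server.server_address) as s:
    s.settimeout(5)
    s.sendall(request * 3)
    data = b""
    while data.count(b"HTTP/1.1 200 OK") < 3:
        chunk = s.recv(4096)
        if not chunk:
            break
        data += chunk
server.shutdown()
```

The server answers the queued requests one after another on the same connection, which is exactly the "required not to fail" behavior described above.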

Again, yes, dear reader, I know what you are thinking: how can you, the humble blackhat hacker, the humble hacktivist reader, use this knowledge to bring down your home web server for fun? Well, there is more good news. 

Some Python frameworks do support HTTP/2 (whose multiplexing supersedes HTTP/1.1 pipelining). Muahaha. As of late 2017 there are two Python frameworks that directly support HTTP/2, namely Twisted and Quart, with only the latter supporting server push.

Quart can be installed via pipenv or pip:

$ pipenv install quart
$ pip install quart

This requires Python 3.7.0 or higher (see python version support for reasoning).

A minimal Quart example is:

from quart import make_response, Quart, render_template, url_for

app = Quart(__name__)

@app.route('/')
async def index():
    result = await render_template('index.html')
    response = await make_response(result)
    # Ask the server to push these static assets to the client over HTTP/2.
    response.push_promises.update([
        url_for('static', filename='css/bootstrap.min.css'),
        url_for('static', filename='js/bootstrap.min.js'),
        url_for('static', filename='js/jquery.min.js'),
    ])
    return response

if __name__ == '__main__':
    app.run()
Also, another library that supports HTTP/2 connectivity from Python is hyper. hyper is a Python HTTP/2 library, as well as a very serviceable HTTP/1.1 library.

To begin, you will need to install hyper. This can be done like so:

$ pip install hyper

From the terminal you can launch a request by typing:

>>> from hyper import HTTPConnection
>>> c = HTTPConnection('http2bin.org')
>>> c.request('GET', '/')
>>> resp = c.get_response()

Used in this way, hyper behaves exactly like the classic Python http.client. You can make sequential requests using the exact same API you're accustomed to. The only difference is that HTTPConnection.request() may return a value, unlike the equivalent http.client method. If present, the return value is the HTTP/2 stream identifier.

In HTTP/2, connections are divided into multiple streams (this is the multiplexing that replaces pipelining). Each stream carries a single request-response pair. You may start multiple requests before reading the response to any of them, and switch between them using their stream IDs.

Note: Be warned: hyper is in a very early alpha. You will encounter bugs when using it. If you use the library, provide feedback about potential issues to the creator.

Making Sense

By dramatically increasing the number of payloads per second sent to the server, we increase the chance to crash the system for the following reasons:
  • Multiple HTTP/2 connections sending requests such as the following would cause significant resource allocation, both in the server and the database:
    • File upload requests, with large files to be uploaded.
    • File download requests, with large files to be downloaded.
    • POST and GET requests containing exotic Unicode encoding, e.g. %2e%2e%5c etc.
    • POST and GET requests while performing intelligent fuzzing.  
  • Enforcement of a single SSL/TLS handshake: 
    • Not much to be said here. Simply enforce a single TLS handshake when the malicious payloads are going to consume more resources than the handshake itself, so the server spends its resources on the payloads rather than on crypto.
Note: This type of attack can also be used as a diversion to hide other types of attacks, such as SQLi etc.
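Putting connection reuse and concurrency together, here is a sketch of the amplification idea against a throwaway local server (worker counts and names are my own assumptions; do not point this at systems you do not own):

```python
import http.client
import threading
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class OkHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"            # keep each worker's socket open
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), OkHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def worker(n_requests: int) -> int:
    """One persistent connection per worker, many requests down each socket."""
    conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
    ok = 0
    for _ in range(n_requests):
        conn.request("GET", "/")
        resp = conn.getresponse()
        resp.read()
        ok += resp.status == 200
    conn.close()
    return ok

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(worker, [10] * 4))
server.shutdown()
```

Four workers times ten requests each, with no per-request TCP or TLS setup cost: the request rate is limited almost entirely by what the server can process, which is the whole point of the technique.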

The diagram below demonstrates where the system is potentially going to crash first:

Other Uses of This Tech

We can use this knowledge to perform the following tasks:
  • Optimize Web App Scans
  • Optimize directory enumeration
  • Optimize online password cracking on Web Forms
  • Optimize manual SQLi attacks 

Useful Tools 

There are some tools out there that make use of some of the principles mentioned here:
  • Turbo Intruder - https://github.com/PortSwigger/turbo-intruder - Turbo Intruder is a Burp Suite extension for sending large numbers of HTTP requests and analyzing the results. It's intended to complement Burp Intruder by handling attacks that require exceptional speed, duration, or complexity.
  • Skipfish - https://code.google.com/archive/p/skipfish/ - Skipfish is an active web application security reconnaissance tool. It prepares an interactive sitemap for the targeted site by carrying out a recursive crawl and dictionary-based probes.

Things to do to avoid these types of attacks:
  • Firewall HTTP state filtering rules 
  • Firewall HTTPS state filtering rules  
  • Firewall HTTP/2 blockage - although not recommended
  • WAF that checks the following things:
    • User agent - check for spoofing of the agent 
    • Request parameters - check for fuzzing 
    • Request size checks.
That is it folks, have fun.......

  1. https://stackoverflow.com/questions/25239650/python-requests-speed-up-using-keep-alive
  2. https://en.wikipedia.org/wiki/Low_Orbit_Ion_Cannon
  3. https://www.cloudflare.com/learning/ddos/famous-ddos-attacks/
  4. https://en.wikipedia.org/wiki/HTTP_persistent_connection
  5. https://2.python-requests.org/en/master/user/advanced/#keep-alive
  6. https://urllib3.readthedocs.io/en/1.0.2/pools.html
  7. https://stackoverflow.com/questions/19312545/python-http-client-with-request-pipelining
  8. https://www.freecodecamp.org/news/million-requests-per-second-with-python-95c137af319/
  9. https://www.python.org/downloads/
  10. https://txzone.net/2010/02/python-and-http-pipelining/
  11. https://gitlab.com/pgjones/quart
  12. https://gitlab.com/pgjones/quart/blob/master/docs/http2_tutorial.rst
  13. https://hyper.readthedocs.io/en/latest/