
(CTF) Plaid CTF 2023 Treasure Map, CSS writeup

Date: April 16, 2023 | Team: ST4RT

Plaid CTF had two JavaScript & CSS reversing challenges. I had a headache, so I just solved these two easy ones.

Treasure Map - REV

import { go } from "./0.js";
import { go as fail } from "./fail.js";

const clear = () => {
    const frame = document.querySelector(".frame");
    frame.classList.remove("success")
    frame.classList.remove("fail")
}

window.check = async () => {
    clear();
    let flag = document.querySelector("#input").value;
    if (!flag.startsWith("PCTF{") || !flag.endsWith("}")) {
        await fail();
        return;
    }

    flag = flag.slice(5, -1);
    if (flag.length != 25) {
        await fail();
        return;
    }

    window.buffer = flag.split("");
    go();
}

On input, check() verifies the flag format PCTF{...} and that the string inside is exactly 25 characters long, copies it into window.buffer, and calls go() from 0.js.

There are files named 0.js ~ 199.js, plus fail.js and success.js. Each of 0.js ~ 199.js is (probably) the same kind of file: it shifts one character off window.buffer, builds a table mapping characters to filenames (X.js, fail.js, success.js), dynamically imports the file matching the input character, and calls its go() function.
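For reference, a hypothetical sketch of what one of these N.js modules might look like (the lookup table below is invented; only the shift / dynamic-import / go() structure reflects the description above):

// Hypothetical sketch of an N.js module; the real per-file tables differ.
const table = { "N": "17.js", "3": "fail.js" /* , ... */ };   // invented entries

export const go = async () => {
    const c = window.buffer.shift();          // consume one flag character
    const next = table[c] ?? "fail.js";       // unmapped characters dead-end in fail.js
    const mod = await import("./" + next);    // jump to the next module
    return mod.go();
};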

The goal is to find the path of inputs that ends up importing success.js.

...
41: success.js,!
...

When I generated each file's map, only the 41.js file contained the key success.js, so we can recover the flag by backtracking from 41.js to 0.js.

However, when I wrote the code to trace a single path backwards, there weren't that many candidate branches. So I simply excluded 0.js as a predecessor while i was less than 23 (allowing it only as the final backward step, i.e. the first flag character), and that was enough to find the correct path.
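Conceptually the backward walk boils down to the sketch below; edges is an assumed, precomputed table (target filename -> list of { from, ch } pairs recovered from the maps), and the full script follows:

// Simplified sketch of the backtrace over an assumed `edges` table:
// edges[target] = [{ from, ch }, ...] : file index `from` jumps to `target` on character `ch`.
let current = "41.js";   // the only file whose map contains success.js
let flag = "!";          // the character that takes 41.js to success.js
for (let step = 0; step < 24; step++) {
    const last = step === 23;                                    // only the final hop may come from 0.js
    const { from, ch } = edges[current].find(e => last || e.from !== 0);
    flag = ch + flag;
    current = from + ".js";
}
console.log("PCTF{" + flag + "}");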

const fs = require('fs');
const b64 = `
A
B
...
9
+
/
=`;
// map each base64 character to its 6-bit value, for VLQ decoding
const bti = b64.trim().split("\n").reduce((acc, x, i) => (acc.set(x, i), acc), new Map());
const map = [];
for(let i=0;i<200;i++){
    // read the compiled module and its source map
    const data = fs.readFileSync(i+'.js.map').toString();
    const moi = fs.readFileSync(i+'.js').toString();
    const tg = JSON.parse(data);
    // decode the base64-VLQ "mappings" into [original source file, covered snippet] pairs;
    // the covered snippet is the character that jumps to that file
    const fl = tg.mappings.split(";").flatMap((v, l) =>v.split(",").filter((x) => !!x).map((input) => input.split("").map((x) => bti.get(x)).reduce((acc, i) => (i & 32 ? [...acc.slice(0, -1), [...acc.slice(-1)[0], (i & 31)]] : [...acc.slice(0, -1), [[...acc.slice(-1)[0], i].reverse().reduce((acc, i) => (acc << 5) + i, 0)]].map((x) => typeof x === "number" ? x : x[0] & 0x1 ? (x[0] >>> 1) === 0 ? -0x80000000 : -(x[0] >>> 1) : (x[0] >>> 1)).concat([[]])), [[]]).slice(0, -1)).map(([c, s, ol, oc, n]) => [l,c,s??0,ol??0,oc??0,n??0]).reduce((acc, e, i) => [...acc, [l, e[1] + (acc[i - 1]?.[1]??0), ...e.slice(2)]], [])).reduce((acc, e, i) => [...acc, [...e.slice(0, 2), ...e.slice(2).map((x, c) => x + (acc[i - 1]?.[c + 2] ?? 0))]], []).map(([l, c, s, ol, oc, n], i, ls) => [tg.sources[s],moi.split("\n").slice(l, ls[i+1] ? ls[i+1]?.[0] + 1 : undefined).map((x, ix, nl) => ix === 0 ? l === ls[i+1]?.[0] ? x.slice(c, ls[i+1]?.[1]) : x.slice(c) : ix === nl.length - 1 ? x.slice(0, ls[i+1]?.[1]) : x).join("\n").trim()]);
    map.push(fl);
}
let last = "41.js";
let next;
let flag = "!";
for(let i=0;i<24;i++){
    // console.log(`${i} .. `);
    for(let j=0;j<map.length;j++){
        for(let k=0;k<map[j].length;k++){
            if(map[j][k][0] === last){
                // 0.js may only appear at the very last backward step
                if(j==0&&i!=23){
                    // console.log("!!!!!!!!!!!!!!!!",map[j][k][1]);
                    continue;
                }
                next = `${j}.js`
                // console.log(next);
                flag = map[j][k][1]+flag;
                if(j==0&&i==23){
                    // console.log("??????",map[j][k][1]);
                    j=200;
                }
                break
            }
        }
    }
    last = next;
}

console.log("PCTF{"+flag+"}");

PCTF{Need+a+map/How+about+200!}

CSS - REV

https://i.imgur.com/F3Buy5o.png

You can move the red arrows to display a character from the range [a-z_] at each position.

https://i.imgur.com/MlcEPi5.png

The "Correct" message is obscured by something.

https://i.imgur.com/2ancJmO.png

Three letters are grouped into one div, and there are 14 such divs, for a total of 42 characters.

https://i.imgur.com/N1CzT3o.png

Each group div defines six arrows (a down and an up arrow for each of the three letters) plus one more inner div. In that inner div, each character is wrapped in its own tag, and each letter position has a different height step: from the first letter onward, 729, 27, and 1 px. Pressing the down arrow increases the height. (Ex: aaa -> 0+0+0, bab -> 729+0+1, cab -> 729*2+0+1)
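In other words, each 3-letter group encodes a base-27 number via its summed height. A quick sanity check of the arithmetic (a sketch, using the same [a-z_] alphabet as the solver below):

// Each group of three letters encodes a base-27 value as a total height in px.
const alphabet = "abcdefghijklmnopqrstuvwxyz_";
const heightOf = (triple) =>
    729 * alphabet.indexOf(triple[0]) +
     27 * alphabet.indexOf(triple[1]) +
      1 * alphabet.indexOf(triple[2]);

console.log(heightOf("aaa"));   // 0
console.log(heightOf("bab"));   // 729 + 0 + 1 = 730
console.log(heightOf("cab"));   // 729*2 + 0 + 1 = 1459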

The accumulated height determines the height of the four div boxes covering the "Correct" message, and through that the top of the svg inside each of those boxes.

https://i.imgur.com/dqtS6IX.png

The first div determines the height.

https://i.imgur.com/1l73RKs.png

The second div then derives its top from that height, as shown above. All the SVGs are white, but if you change the background color you can see that each one has a hole somewhere in the middle. We need to line the "Correct" box up with that hole (top: 60).

https://i.imgur.com/G1hblSm.png

If you look at the background url used by each svg, the path's d attribute contains something like 62V78..., where 62 is the y-coordinate of the punched-out square. Subtracting that value from 60 gives the top value we need to hit.
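As a rough sketch of that calculation (the -2 offset and the position of the token inside the decoded path data are taken from the solver below, not from anything stated above):

// Sketch: recover the required `top` for one covering box from its SVG data URI.
// Assumes, like the solver below, that the 8th space-separated token of the
// decoded SVG looks like "62V78...", 62 being the y of the punched-out square.
const targetTop = (dataUri) => {
    const svg = Buffer.from(dataUri.split("base64,")[1], "base64").toString();
    const y = parseInt(svg.split(" ")[7].split("V")[0], 10) - 2;   // same -2 offset as the solver
    return 60 - y;                                                 // the hole must land on top: 60
};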

from bs4 import BeautifulSoup
from base64 import b64decode as atob, b64encode as btoa
import re

with open("css.html") as file:
    html = file.read()

soup = BeautifulSoup(html, "html.parser")

# Extract all CSS selectors that contain a background:url property
selectors = soup.select('[style*="background:url("]')

# Extract the URL from each background:url property
urls = []
for selector in selectors:
    style = selector["style"]
    url_start = style.index("url(") + len("url(")
    url_end = style.index(")", url_start)
    url = style[url_start+1:url_end-1]
    urls.append(url)

# Process the extracted URLs (optional)
for i, url in enumerate(urls):
    if "?" in url:
        urls[i] = url[:url.index("?")]
    if "#" in url:
        urls[i] = url[:url.index("#")]

urls.sort()
# required svg top for each background image: 60 minus (the hole's y-coordinate - 2)
tops={}
for url in urls:
    calc = int(atob(url.split('base64,')[1]).decode().split(" ")[7].split('V')[0])-2
    calc = 60 - calc
    tops[url] = calc

def clear(calc):
    calc = calc.split("calc")[1]
    calc = calc.split(";")[0]
    calc = calc.replace("px", "")
    return calc

def calculate(calc,x):
    calc = calc.replace("100%", str(x))
    return eval(calc)

s = "abcdefghijklmnopqrstuvwxyz_"
height_pattern = r"height:calc\([^;]*\);"
top_pattern = r"top:calc\([^;]*\);"
image_pattern = r"background:url\('([^']+)'\)"
flag="PCTF{"

# the 14 groups of three letters sit at left:200, 320, ... (120px apart)
for left in range(200,200+120*14,120):
    tags = soup.select('[style*="position:absolute;top:0;left:%d"] > div:nth-of-type(7) > div'%left)
    # all 27^3 candidate (i,j,k) letter-index triples for this group
    init_list = [[i,j,k] for i in range(27) for j in range(27) for k in range(27)]
    for tag in tags:
        height_calc = clear(re.findall(height_pattern,tag.decode())[0])
        top_calc = clear(re.findall(top_pattern,tag.decode())[0])
        image_url = re.findall(image_pattern,tag.decode())[0]
        top = tops[image_url]
        next_list=[]
        for init in init_list:
            i,j,k = init
            v = 729*i+27*j+k
            if calculate(top_calc,calculate(height_calc,v)) == top:
                next_list.append([i,j,k])
        init_list=next_list
    flag+="".join([s[i] for i in init_list[0]])
flag+="}"
print(flag)

The CSS parsing part was written by ChatGPT. For each group of three letters, it takes the four covering divs, extracts their height and top calc() expressions, and brute-forces the group three characters at a time (27^3 = 19,683 candidates, filtered by the four constraints).

I could have optimized this further, but I didn’t want to give myself a headache :<

PCTF{youre_lucky_this_wasnt_a_threesat_instance}

This post is licensed under CC BY 4.0 by the author.