MushcatShiro's Blog

AOC 2025

This is a page about »AOC 2025«.

Advent of Code is something the author had heard about for some time but never tried until 2025. It was purely a coincidence that the author visited on 12/1, the day itself. This year the format is slightly different: 12 days instead of 25, which is probably a sweet spot for a working adult who sometimes struggles with LeetCode. The goal is to provide solutions that are trivial to understand, unless there is a runtime concern (the ~minutes range is considered acceptable here). Much of the solution will seem like brute force. A general takeaway throughout the event is that parsing and figuring out the best representation for the problem is basically 80% of the task.

Day 1

The question was straightforward: a dial that can rotate in either direction, but with a catch. Counting how many times the dial stops on the origin position (p1) and counting how many times the dial passes by the origin position (p2) can be very different depending on how the question is approached. A mathematical approach was taken here, as it feels natural for the first part. The idea was,

def process_q1(l: List[str]):
  ret = 0
  s = 50 # always start at 50
  for combi in l:
    assert 0 <= s <= 99, s
    combi = combi.strip()
    direction, move = combi[:1], int(combi[1:])
    # print(f"{s} {direction} {move}")
    if direction == "L":
      d = -1
    else:
      d = 1

    s += move*d
    while s < 0 or s > 99:
      s += 100*d*-1
    if s == 0 or s == 100:
      ret += 1
      s = 0
  return ret

It was simple, as the problem can be phrased as checking whether the dial stops at the two possible origin values \(0, 100\). However, when counting how many times the dial passes by the origin, the same trick doesn't work, at least not obviously. The solution preferred on the day itself looks something like this,

def process_q2_mathematical(l: List[str]):
  # answer assisted
  ret = 0
  s = 50
  for combi in l:
    assert 0 <= s <= 99, s
    combi = combi.strip()
    direction, move = combi[:1], int(combi[1:])
    if direction == "L":
      d = -1
      steps_to_origin = s
    else:
      d = 1
      steps_to_origin = 100 - s
    # TODO the concept land on and step over
    if steps_to_origin > move:
      if s == 0:
        ret += 1
      s += move * d
      if s < 0:
        s += 100
    elif steps_to_origin == move:
      s = 0
    else:
      ret += 1
      s = 0
      remaining = move - steps_to_origin
      if remaining == 0:
        ret -= 1
        continue
      ret += remaining // 100
      s = (remaining % 100) * d
      if s < 0:
        s += 100
  return ret

It is by no means clean, concise or interpretable (at least at first glance?). The key is knowing the difference between landing on the origin and stepping out of the origin. The solution was derived with hints from Gemini (the difference between a simulation and a mathematical solution) and from actually knowing the answer to test against. Here's the Gemini mathematical solution, which is also not trivial.

def process_q2(l: List[str]):
  # assisted
  ret = 0
  s = 50 # always start at 50
  for combi in l:
    assert 0 <= s <= 99, s
    combi = combi.strip()
    direction, move = combi[:1], int(combi[1:])
    # print(f"{s} {direction} {move}")
    # this approach is to always hit 100 or -100
    if direction == "L":
      d = -1
      # 0 is the starting point - to be excluded hence -1
      hits = (s - 1) // 100 - (s - move - 1) // 100
    else:
      d = 1
      hits = (s + move) // 100 - s // 100

    ret += hits
    s = (s + move * d) % 100

  return ret

Here the full range is \(0, 100\); note that 100 is a separate entity, hence the dial can stop at 100. When the dial rotates to the right, landing on 100 counts as a click and leaving doesn't count as one (the reverse of the previous solution). This is achieved by the floor division of two terms: if the dial stops on 100 the first term counts, and if the dial moves away from 0 the second term counts. To prevent actually moving from 99 - 100 - 0 - 1, s is updated using modulus.

As long as the solution counts the sequence 99 (- 100) - 0 - 1 as one click, it should work.
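For cross-checking, the step-by-step simulation that Gemini contrasted with the mathematical approach can be sketched like this (an assumed reconstruction, not the author's code — it walks the dial one notch at a time and counts every visit to 0):

```python
from typing import List

def process_q2_simulation(l: List[str]) -> int:
    ret = 0
    s = 50  # always start at 50
    for combi in l:
        combi = combi.strip()
        direction, move = combi[:1], int(combi[1:])
        d = -1 if direction == "L" else 1
        for _ in range(move):
            s = (s + d) % 100  # one notch at a time
            if s == 0:  # every visit to the origin is one click
                ret += 1
    return ret
```

It runs in time proportional to the total number of notches rather than the number of instructions, but it is unambiguous, which makes it a handy oracle to test the mathematical version against.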

Day 2

This question feels like a LeetCode question, which the author solved with brute force. The idea was simple: iterate through the ranges, and for each value check if the length is even. If it is even, ensure the first half equals the second half. Part two relaxed the criteria to allow 2 - 10 repeats. It should be apparent that if a number repeats 10 times, it also repeats 2 times. This reduces the checks to steps of 1 (for odd lengths), 2, 3 and 5, together with part one's criteria. Part two's solution can be very explicit and messy; a much more elegant solution can be achieved through regex (beware of catastrophic backtracking though).

def process_q1(data: str) -> int:
  list_of_ranges = process_data(data)
  ret = 0
  for r in list_of_ranges:
    s, e = r
    for v in range(int(s), int(e)+1):
      sv = str(v)
      l = len(sv)
      midpoint = int(l / 2)
      if l % 2 == 1:
        continue

      if sv[:midpoint] == sv[midpoint:]:
        ret += v

  return ret

def process_q2(data: str) -> int:
  list_of_ranges = process_data(data)
  ret = 0
  for r in list_of_ranges:
    s, e = r
    for v in range(int(s), int(e)+1):
      sv = str(v)
      l = len(sv)
      if l % 2 == 0: # for 2, 4, 6, 8
        # step of 2, 3, 4, 5, skip if 2*(2/3/4/5) > l
        midpoint = int(l / 2)
        if sv[:midpoint] == sv[midpoint:]: # technically this covers step=1, 4
          # print(f"half {v}")
          ret += v
          continue
        if len(set([sv[x: x+2] for x in range(0, l, 2)])) == 1 and l != 2:
          # print(f"step 2 {v}")
          ret += v
          continue
      if l % 3 == 0 and l != 3:
        if len(set([sv[x: x+3] for x in range(0, l, 3)])) == 1:
          # print(f"step 3 {v}")
          ret += v
          continue
      if l % 5 == 0 and l != 5:
        if len(set([sv[x: x+5] for x in range(0, l, 5)])) == 1:
          # print(f"step 5 {v}")
          ret += v
          continue
      if len(set([sv[x: x+1] for x in range(0, l, 1)])) == 1 and l != 1:
        # print(sv)
        ret += v
        continue
  return ret
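The regex route mentioned above might look like the following sketch (an assumption about the rule — a number qualifies when its digits are some block repeated at least twice; this is not the author's code, and the name is hypothetical):

```python
import re

# a captured block of digits followed by one or more repeats of itself
REPEATED = re.compile(r"(\d+)\1+")

def is_repeated(sv: str) -> bool:
    # fullmatch forces the repeats to cover the whole string
    return REPEATED.fullmatch(sv) is not None
```

On short numeric strings the backtracking stays tame, but this pattern shape is exactly the kind that can blow up on long inputs, hence the warning above.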

Day 3

Part one was easy: two for loops, one finds the largest value's index and the other finds the largest value after that index. There should be a way to drop one for loop and obtain the largest two-digit value directly. Part two is basically that, except instead of two digits it is now 12. The idea is similar: keep the largest possible digits by dropping smaller ones, until there is no budget left to drop digits.

def process_q1(data: List[str]):
  ret = 0
  for d in data:
    d = d.strip()
    first = -1
    f_pos = -1
    # for loop to find largest value
    for idx, c in enumerate(d[:-1]):
      if int(c) > first:
        first = int(c)
        f_pos = idx
    # for loop to find right to largest value
    second = -1
    for idx, c in enumerate(d[f_pos+1:]):
      if int(c) > second:
        second = int(c)
    ret += int(f"{first}{second}")

  return ret

def process_q2(data: List[str]):
  # assisted
  ret: List[int] = []
  for d in data:
    d = d.strip()
    budget = len(d) - 12
    stack: List[str] = []  # always append, then check and pop -1
    for c in d:
      # print(f"s: {stack}")
      while stack and int(stack[-1]) < int(c) and budget > 0:
        stack.pop()
        budget -= 1
      stack.append(c)
      # print(f"e: {stack}")
    ret.append(int("".join(stack[:12])))
  return sum(ret)
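The greedy stack in part two can be isolated into a generic "largest k-digit subsequence" helper (a sketch of the underlying technique with a hypothetical name, not the puzzle code):

```python
from typing import List

def largest_subsequence(digits: str, k: int) -> str:
    budget = len(digits) - k  # how many digits we are allowed to drop
    stack: List[str] = []
    for c in digits:
        # pop smaller digits while dropping them is still affordable
        while stack and stack[-1] < c and budget > 0:
            stack.pop()
            budget -= 1
        stack.append(c)
    return "".join(stack[:k])
```

Comparing characters directly works here because every character is a single digit, so string order equals numeric order.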

Day 4

Despite being somewhat familiar with the conv. op in deep learning, due to the self-imposed constraint of not using extra dependencies, the solution turns into a brute-force approach that iterates over each position and does the necessary operations. There should be a way to reduce duplicated calculation, since most cells overlap each time the 3x3 box shifts by 1 to the right. Part two was basically part one with a while loop. This was good breathing room after D1P2 and D3P2 posed some difficulties.

def process_q1(data: List[List[str]]):
  rmax = len(data)
  cmax = len(data[0])
  cnt = 0
  for ridx, row in enumerate(data):
    for cidx, col in enumerate(row):
      if col == "@":
        eight_pos = [
          (ridx - 1, cidx - 1), (ridx, cidx - 1), (ridx + 1, cidx - 1),
          (ridx - 1, cidx), (ridx + 1, cidx),
          (ridx - 1, cidx + 1), (ridx, cidx + 1), (ridx + 1, cidx + 1),
        ]
        cummulate = 0
        for pos in eight_pos:
          if (0 <= pos[0] < rmax) and (0 <= pos[1] < cmax):
            if data[pos[0]][pos[1]] == "@":
              cummulate += 1
        if cummulate < 4:
          cnt += 1
  return cnt

def process_q2(data: List[List[str]]) -> int:
  remove_cnt = 0
  rmax = len(data)
  cmax = len(data[0])
  while True:
    prev = remove_cnt
    for ridx, row in enumerate(data):
      for cidx, col in enumerate(row):
        if col == "@":
          eight_pos = [
            (ridx - 1, cidx - 1), (ridx, cidx - 1), (ridx + 1, cidx - 1),
            (ridx - 1, cidx), (ridx + 1, cidx),
            (ridx - 1, cidx + 1), (ridx, cidx + 1), (ridx + 1, cidx + 1),
          ]
          cummulate = 0
          for pos in eight_pos:
            if (0 <= pos[0] < rmax) and (0 <= pos[1] < cmax):
              if data[pos[0]][pos[1]] == "@":
                cummulate += 1
          if cummulate < 4:
            data[ridx][cidx] = "."
            remove_cnt += 1
    if remove_cnt == prev:
      break
  return remove_cnt
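One way to avoid the duplicated work hinted at above is a 2D prefix sum: build cumulative counts of "@" once, then read off any 3x3 neighbourhood in constant time. A sketch (the function name is hypothetical, and it assumes the grid is a list of strings):

```python
from typing import List

def neighbour_counts(data: List[str]) -> List[List[int]]:
    # pre[r][c] = number of "@" in the rectangle data[:r][:c]
    rmax, cmax = len(data), len(data[0])
    pre = [[0] * (cmax + 1) for _ in range(rmax + 1)]
    for r in range(rmax):
        for c in range(cmax):
            pre[r + 1][c + 1] = (pre[r][c + 1] + pre[r + 1][c] - pre[r][c]
                                 + (data[r][c] == "@"))
    counts = [[0] * cmax for _ in range(rmax)]
    for r in range(rmax):
        for c in range(cmax):
            # clamp the 3x3 box at the grid borders
            r0, c0 = max(r - 1, 0), max(c - 1, 0)
            r1, c1 = min(r + 2, rmax), min(c + 2, cmax)
            box = pre[r1][c1] - pre[r0][c1] - pre[r1][c0] + pre[r0][c0]
            counts[r][c] = box - (data[r][c] == "@")  # exclude the cell itself
    return counts
```

Each cell's neighbour count then costs O(1) instead of eight bounds-checked lookups, which matters more for part two's repeated sweeps.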

Day 5

Another LeetCode-like overlapping schedule question. First merge the ranges to ensure the list of ranges has no overlaps, then just brute-force check (breaking early once the value is past the current range end helps, assuming the list of ranges is sorted). Part two is straightforward, especially once the ranges are merged.

def merge_ranges(list_of_ranges: List[List[int]]):
  list_of_ranges = sorted(list_of_ranges, key=lambda x: x[0])
  out: List[List[int]] = []
  while list_of_ranges:
    cur = list_of_ranges.pop(0)
    if not out:
      out.append(cur)
      continue
    if cur[0] > out[-1][1]:
      out.append(cur)
      continue
    for idx, i in enumerate(out):
      if cur[0] <= i[1]:
        out[idx][1] = max(cur[1], i[1])
  return out

def process_q1(lor: List[List[int]], loi: List[int]):
  ret = 0
  lor = merge_ranges(lor)
  for i in loi:
    for r in lor:
      if i >= r[0] and i <= r[1]:
        ret += 1
        break
  return ret

def process_q2(lor: List[List[int]], loi: List[int]):
  ret = 0
  lor = merge_ranges(lor)
  for r in lor:
    ret += (r[1] - r[0] + 1)
  return ret

Day 6

Neovim gave some problems when pasting the example data, as it always removes trailing spaces; Notepad is here to save the day. Also, the line saying text alignment can be ignored should have been a clear warning sign of what was coming in part two. Part one is easy, as long as each value can be extracted correctly with respect to its index. Part two was similar but the requirement is much more stringent: it needs to be correct at the character-level index. After checking r/adventofcode, it seems easier to just read each character cell by cell and handle newlines, spaces between columns and math operators.

def process_q1(data: List[List[Union[str, int]]]) -> int:
  width = len(data[0])
  height = len(data) - 1
  op_pos = len(data) - 1
  ret = 0
  for w in range(width):
    # print(f"{w} {op_pos}")
    acc = 0
    op = data[op_pos][w]
    for h in range(height):
      if op == "+":
        acc += data[h][w]
      else:
        if acc == 0:
          acc = 1
        acc *= data[h][w]
    ret += acc
  return ret

def process_q2(data: List[str]):
  # reddit
  ret = 0
  idx = 0
  op = None
  c: List[int] = []
  while True:
    buf = "".join([x[idx: idx+1] for x in data])
    # print(f"{c} {buf}")
    if "\n" in buf:
      # if buf == "\n"*(len(data) - 1):
      if op == "*":
        ret += math.prod(c)
      else:
        ret += sum(c)
      break
    if buf == " "*len(data):
      if op == "*":
        ret += math.prod(c)
      else:
        ret += sum(c)
      c = []
      op = None
      idx += 1
      continue
    if buf[-1] == "*" or buf[-1] == "+":
      op = buf[-1]
      buf = buf[:-1]
    c.append(int(buf))
    idx += 1

Day 7

The example took the author a while to fully understand how the splitter works. Still, it took some hints from reddit to handle the beams correctly, as in the first few iterations some of the beams were not present. The reason was the omission of all rows without "^" from the solution. Keeping them makes the solution straightforward: when a row has "^", one can focus on splitting, and the next row serves as input to the next set of splitters. However, none of this is necessary, as the key is to keep track of all the indices with beams.

def process_q1(data: List[str]):
  # reddit
  ctr = 0
  beam_pos = set()
  for ridx, row in enumerate(data):
    for cidx, c in enumerate(row):
      if c == "S":
        beam_pos.add(cidx)
        continue
      if c == "^" and (cidx in beam_pos):
        beam_pos.remove(cidx)
        ctr += 1
        # there is 1 row of space between
        beam_pos.add(cidx - 1)
        beam_pos.add(cidx + 1)
  return ctr

Part two was slightly challenging. The immediate solution in mind was to do a depth-first search and somehow count each distinct path. Implementing DFS is easy, but nothing came to mind on how to count the distinct paths.

As the author browsed through reddit, he quickly recalled this video and also took a look at Pascal's triangle, but they did not help much. It was not apparent to the author because both examples clearly expect a perfect binary tree-like structure, while the question allows merging of beams, i.e. "|^|^|" where the beam in the middle is shared between two splitters. The eventual solution was to first draw the beams out, then create a grid of 0s of the same shape and count from the last row. The idea: if the last row has "|.||." and the second last row has "|^||.", the first three characters imply a Y junction, and adding each path gives two. The fourth "|" is merely a continuation of the incoming beam, hence one is passed upwards. The total number of paths is obtained by passing and adding the values up.

def process_q2(data: List[str]):
  # indirect ai help
  def draw_beams(data: List[str]):
    beam_pos = set()
    for ridx, row in enumerate(data):
      for cidx, c in enumerate(row):
        if c == "S":
          data[ridx+1] = data[ridx+1][:cidx] + "|" + data[ridx+1][cidx+1:]
          beam_pos.add(cidx)
          continue
        if c == "^" and (cidx in beam_pos):
          beam_pos.remove(cidx)
          # there is 1 row of space between
          beam_pos.add(cidx - 1)
          beam_pos.add(cidx + 1)
          data[ridx] = data[ridx][:cidx-1] + "|" + data[ridx][cidx:]
          data[ridx] = data[ridx][:cidx+1] + "|" + data[ridx][cidx+2:]
        if c == "." and (cidx in beam_pos):
          data[ridx] = data[ridx][:cidx] + "|" + data[ridx][cidx+1:]
    return data
  data = draw_beams(data)
  # print("".join(data))
  # data = [row for ridx, row in enumerate(data) if (ridx+1) % 2 == 0]
  # print("".join(data))

  s_pos = [idx for idx, c in enumerate(data[0]) if c == "S"][0]
  max_cols = len(data[0])
  new_arr = [
    [0]*max_cols for _ in range(len(data))
  ]
  for ridx in range(len(data)-1, -1, -1):
    for cidx, c in enumerate(data[ridx]):
      if ridx == len(data)-1 and c == "|":
        new_arr[ridx][cidx] = 1
      elif ridx % 2 == 0:  # w/ ^
        val = 0
        if c == "^":
          # check l/r
          if cidx-1 >= 0:
            val += new_arr[ridx][cidx-1]
          if cidx+1 < max_cols:
            val += new_arr[ridx][cidx+1]
        if c == "|":
          val = new_arr[ridx][cidx]
          # check below
        new_arr[ridx-1][cidx] = val
      else: # no ^
        new_arr[ridx-1] = new_arr[ridx]
  return new_arr[0][s_pos]

Day 8

The question felt like it was asking to hand-roll a kmeans-clustering-esque or knn-esque algorithm, but the working solution is something else: the Disjoint Set Union structure. The core concept was to first calculate all pairwise distances (skipping the square root speeds things up) and sort them in ascending order, then look up each node of the pair. If their parents are not the same, assign the right's parent to the left node and keep track of the component size by maintaining a list of integers (default 1).
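The snippets below call a `find` helper that is not shown in the post; a standard DSU find with path compression might look like this (a sketch matching the `find(i, parent)` call shape — the author's actual helper may differ):

```python
from typing import List

def find(i: int, parent: List[int]) -> int:
    # walk up to the root, compressing the path along the way
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # point to grandparent
        i = parent[i]
    return i
```

Path compression keeps later lookups nearly constant time, which matters since every sorted edge triggers two finds.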

def process_q1(data: List[str], cnx):
  # gemini
  l = []
  for i in range(len(data)):
    coor = data[i]
    cx, cy, cz = coor.split(",")
    for j in range(i+1, len(data)):
      ref_coor = data[j]
      rx, ry, rz = ref_coor.split(",")
      d = (int(rx)-int(cx))**2 +\
        (int(ry)-int(cy))**2 +\
        (int(rz)-int(cz))**2
      l.append([f"{i}-{j}", d])
  l = sorted(l, key=lambda x: x[1]) # ascending

  parent = [x for x in range(len(data))]
  sizes = [1]*len(data)
  ctr = 0

  for node in l:
    i, j = node[0].split("-")
    root_i = find(int(i), parent)
    root_j = find(int(j), parent)
    if root_i != root_j:
      parent[root_i] = root_j
      sizes[root_j] += sizes[root_i]
    ctr += 1
    if ctr == cnx:
      break
  final_clusters = sorted(
    [sizes[i] for i in range(len(data)) if parent[i] == i],
    reverse=True
  )

  return math.prod(final_clusters[:3])

Part two was to continue connecting until all nodes are connected into one circuit, then immediately stop. The stopping criterion was slightly tricky: initially it was the following condition, however this did not consider the situation where every node has appeared in some connection yet the pairs still form separate components.

visited = set()
for node in l:
  i, j = node[0].split("-")
  visited.add(i)
  visited.add(j)
  if len(visited) == total:
    break

The correct approach is to instead count the successful merges, backed by Kruskal's algorithm: to connect \(N\) separate items into one tree, \(N-1\) successful connections are needed, hence,

def process_q2(data: List[str]):
  # gemini
  l = []
  for i in range(len(data)):
    coor = data[i]
    cx, cy, cz = coor.split(",")
    for j in range(i+1, len(data)):
      ref_coor = data[j]
      rx, ry, rz = ref_coor.split(",")
      d = (int(rx)-int(cx))**2 +\
        (int(ry)-int(cy))**2 +\
        (int(rz)-int(cz))**2
      l.append([f"{i}-{j}", d])
  l = sorted(l, key=lambda x: x[1]) # ascending

  parent = [x for x in range(len(data))]
  total = len(data)
  ctr = 0

  for node in l:
    i, j = node[0].split("-")
    root_i = find(int(i), parent)
    root_j = find(int(j), parent)
    if root_i != root_j:
      parent[root_i] = root_j
      ctr += 1

      if ctr == total - 1:
        x1 = data[int(i)].split(",")[0]
        x2 = data[int(j)].split(",")[0]
        break
  return int(x1)*int(x2)

Day 9

Part one was a breeze: check each pairwise area and return the largest.

def process_q1(data: List[str]):
  coords = process_data(data)
  max_area = -1
  for i in range(len(coords)):
    for j in range(i+1, len(coords)):
      area = (abs(coords[i][0] - coords[j][0]) + 1) *\
        (abs(coords[i][1] - coords[j][1]) + 1)
      max_area = max(area, max_area)
  return max_area

But part two immediately escalates. It took many tries around the concept of taking each pair of coordinates and figuring out whether all four corners are inside or on the edges. However, despite many rounds of conversation with various GenAI, nothing came of it. Here's the version that came closest to solving it, which some day will be revisited to fully understand why it did not work.

def check_collision(data, i, j, min_x, min_y, max_x, max_y):
  for idx in range(len(data)):
    if idx == i or idx == j:
      continue
    px, py = data[idx]
    if min_x < px < max_x and min_y < py < max_y:
      return True
  return False

def on_segment(p, a, b):
  cp = (p[1] - a[1]) * (b[0] - a[0]) - (p[0] - a[0]) * (b[1] - a[1])
  if cp != 0: return False

  min_x, max_x = min(a[0], b[0]), max(a[0], b[0])
  min_y, max_y = min(a[1], b[1]), max(a[1], b[1])
  return min_x <= p[0] <= max_x and min_y <= p[1] <= max_y

def is_valid_point(data, p):
  if p in data: return True
  j = len(data) - 1
  inside = False
  for i in range(len(data)):
    if on_segment(p, data[i], data[j]):
      return True
    xi, yi = data[i]
    xj, yj = data[j]

    if ((yi > p[1]) != (yj > p[1])):
      intersect_x = (xj - xi) * (p[1] - yi) / (yj - yi) + xi
      if p[0] < intersect_x:
        inside = not inside
      j = i
  return inside

def segments_strict_intersect(a, b, c, d):
  def orientation(p, q, r):
    val = (q[1] - p[1]) * (r[0] - q[0]) - (q[0] - p[0]) * (r[1] - q[1])
    if val == 0: return 0  # Collinear
    return 1 if val > 0 else 2

  o1 = orientation(a, b, c)
  o2 = orientation(a, b, d)
  o3 = orientation(c, d, a)
  o4 = orientation(c, d, b)

  if o1 != o2 and o3 != o4:
    if o1 == 0 or o2 == 0 or o3 == 0 or o4 == 0:
      return False
    return True
  return False

def check_edge_intersections(poly, corners):
  rect_edges = [
    (corners[0], corners[1]), (corners[1], corners[2]),
    (corners[2], corners[3]), (corners[3], corners[0]),
  ]
  poly_len = len(poly)
  for i in range(poly_len):
    p_edge_start = poly[i]
    p_edge_end = poly[(i + 1) % poly_len]
    for r_start, r_end in rect_edges:
      if segments_strict_intersect(r_start, r_end, p_edge_start, p_edge_end):
        return True
  return False

def process_q2(data: List[str]):
  coords = process_data(data)
  max_area = -1
  for i in range(len(coords)):
    for j in range(i+1, len(coords)):
      p1 = coords[i]
      p2 = coords[j]
      min_x, max_x = min(p1[0], p2[0]), max(p1[0], p2[0])
      min_y, max_y = min(p1[1], p2[1]), max(p1[1], p2[1])
      corners = [[min_x, min_y], [max_x, min_y], [max_x, max_y], [min_x, max_y]]
      if check_collision(coords, i, j, min_x, min_y, max_x, max_y):
        continue
      p3, p4 = [p1[0], p2[1]], [p2[0], p1[1]]
      if not is_valid_point(coords, p3) or not is_valid_point(coords, p4):
        continue
      if check_edge_intersections(coords, corners):
        continue
      area = (abs(p1[0] - p2[0]) + 1) * (abs(p1[1] - p2[1]) + 1)
      max_area = max(area, max_area)
  return max_area

The concept sounds simple on paper; the likely reason it failed is that covering all the corner cases is complicated. Instead the author copied the solution from dllu. It was a very interesting concept where all the edges form a closed loop with concavities; selecting the rectangle with the largest area within that loop yields the solution.

Day 10

With some hints, part one is manageable. Observing the question, one can hypothesise that the maximum number of presses needed equals the total number of button options. This was not apparent to the author, as he recalled the game show The Brain, where the same button might need to be pressed multiple times to reach the target configuration. With that hypothesis,

def process_data_q2(data):
  rv = []
  for row in data:
    _, *buttons, goal = row.split(" ")
    goal = tuple(int(i) for i in goal[1:-2].split(","))
    buttons = [[int(i) for i in button[1:-1].split(",")] for button in buttons]
    buttons = tuple(
      [int(i in button_set) for i in range(len(goal))] for button_set in buttons
    )
    rv.append([goal, buttons])

  return rv

Also, one of the hard parts was actually figuring out a good way to parse the input, plus the bit shifting. Apparently this is the first time ever the author has needed to work with bit shifting, despite having come across it when studying mahjong.

def process_data_q1(data):
  rv = []
  for row in data:
    segment_1 = row.split("] (")

    target = segment_1[0] + "]"
    segment_2 = segment_1[1].split(") {")

    buttons = ("(" + segment_2[0] + ")").split(" ")
    voltages = ("{" + segment_2[1]).strip()

    desired_state = 0
    n = len(target[1: -1])
    for i, ch in enumerate(target[1:-1]):
      if ch == "#":
        desired_state |= (1 << i)
    button_masks = []
    for button in buttons:
      inds = [int(x) for x in button[1:-1].split(",")]
      button_mask = 0
      for i in inds:
        button_mask |= (1 << i)
      button_masks.append(button_mask)

    voltage_mask = 0
    # n = len(voltages[1:-1])
    for i, v in enumerate(voltages[1:-1].split(",")):
      voltage_mask <<= 1
      if int(v) % 2 == 1:
        voltage_mask |= 1

    # print(f"{n} {desired_state:b} {button_masks} {voltage_mask:b}")
    rv.append([n, desired_state, button_masks, voltages, voltage_mask])
  return rv
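Given those masks, the hypothesis from part one (pressing a toggle twice is a no-op, so each button is pressed at most once) reduces the search to finding the smallest subset of button masks whose XOR equals the desired state. A hedged sketch under that assumption, not the author's actual solver:

```python
from itertools import combinations
from typing import List, Optional

def min_presses(desired_state: int, button_masks: List[int]) -> Optional[int]:
    # try subsets in increasing size; XOR of a subset = net effect of pressing it
    for k in range(len(button_masks) + 1):
        for subset in combinations(button_masks, k):
            acc = 0
            for m in subset:
                acc ^= m
            if acc == desired_state:
                return k
    return None  # unreachable target
```

This is exponential in the number of buttons, but with the "at most one press per button" bound the subset space stays small enough for puzzle-sized inputs.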

Part two just looked impossible at first glance. MILP was something the author learned once but probably returned to the lecturer, and he never actually hand-rolled a MILP solver successfully (kudos to this implementation). Eventually the author gave in and used this smart solution.

Day 11

Part one is a classic traversal question (note that the deque with popleft below makes it a breadth-first walk). The solution bears lots of similarities to D10Q1.

def process_q1(data: Dict[str, List[str]]):
  q = deque(data["you"])
  ctr = 0
  while q:
    node = q.popleft()
    if len(data[node]) == 1 and data[node][0] == "out":
      ctr += 1
    else:
      for path in data[node]:
        q.append(path)
  return ctr

Part two is supposed to be easy, until it is not. The difficulty was that the author was looking for a solution that does not break the paths into three segments. It was a mess, required external help and felt slow. Eventually @cache was introduced and the solution that splits the path into segments was implemented.

def process_q2_slow(graph: Dict[str, List[str]]):
  # grok
  s = "svr"
  t = "out"
  k = ["fft", "dac"]

  k_map = {node: 1 << i for i, node in enumerate(k)}
  full_mask = (1 << len(k)) - 1

  dp = defaultdict(lambda: defaultdict(int))

  def dfs(node, mask):
    # print(node)
    if node == t:
      return 1 if mask == full_mask else 0
    if dp[node][mask]:  # memoized
      return dp[node][mask]
    ways = 0
    new_mask = mask
    if node in k_map:
      new_mask |= k_map[node]
    for neighbour in graph[node]:
      ways += dfs(neighbour, new_mask)
    dp[node][mask] = ways
    return ways

  return dfs(s, 0)

def process_q2(graph: Dict[str, List[str]]):
  @cache
  def traverse(s, t):
    if s == t:
      return 1
    if t != "out" and s == "out":
      return 0
    ways = 0
    for node in graph[s]:
      ways += traverse(node, t)
    return ways

  return sum([
    traverse("svr", "fft") * traverse("fft", "dac") * traverse("dac", "out"),
    traverse("svr", "dac") * traverse("dac", "fft") * traverse("fft", "out"),
  ])

Day 12

The author did not attempt D12 on the day itself, hence the trick to solving D12 was already made known to him. There is some discussion on whether it even counts as a legit solution. The author's take is somewhere in between. Does it solve the question? Sure. Does it have some quirks? Obviously. Such scenarios are not uncommon in the real world, and often, until a robust solution is available, the pragmatic answer is simple: lay out the information and let a human check. Hence the solution is built in that spirit.

def process_q1(density, data):
  # known the trick before solving
  impossible = 0
  potential = 0
  fits = 0
  for qns in data:
    shape, counts = qns
    w, h = shape.split("x")
    area = int(w) * int(h)
    full_density_area = sum(counts) * 3 * 3 # hardcoded since only 6
    actual_density_area = 0
    for idx in range(len(density)):
      actual_density_area += density[idx] * counts[idx]
    if area >= full_density_area:
      fits += 1
    elif area < actual_density_area:
      impossible += 1
    else:
      potential += 1
  return (impossible, potential, fits)
