# Nonograms

September 2008: Initial, probably buggy, load of a literate python program.
November 2010: Smarter, faster, bigger.


Nonograms are a puzzle where you are given an incomplete run-length-encoded description of a black and white image, and you must find the picture. A good overview can be found on the Wikipedia page.

Writing a basic nonogram solver is simple. It is about as hard as a sudoku solver. If you have not read it already, take a moment to see a most elegant solution:
Solving Every Sudoku Puzzle

I had written a sudoku solver before reading Norvig's writeup. Our solvers bore a superficial resemblance, largely because neither of us likes to use object-oriented features if we can help it. Like his, mine was a constraint-based solver, but I had missed one important feature: try the most likely guesses first. Norvig's solver works on the square with the fewest possibilities first. Both solvers spend their time barking up the wrong trees, but Norvig started with the smallest forest. It is an important part of the algorithm to take into consideration.

However, even Norvig's constraint-based solver suffers one serious drawback. Before solving can begin, every possibility must be created and stored. Then constraints eliminate impossible options until only one possibility remains. This has a huge initial memory requirement, but a 9x9 sudoku is small enough to fit in memory. Nonograms are not so fortunate.

It is easy to create huge nonograms. Compare the work required to make and solve a jigsaw puzzle. A small puzzle is easier to solve than a big one, but a big jigsaw is just as easy to make as a small one. Nonograms (and prime factorizations) are like this: it is very easy to make something very difficult to solve.

At first I made a constraint-based nonogram solver. Most puzzles are smaller than 20x20, and it could solve these quickly. The solver broke under a 100x100 puzzle. In a few minutes, it had used up several gigabytes of RAM. Upon dissecting the stack trace, I found that it had only finished enumerating the first two rows. There were still 98 rows and 100 columns remaining, for an estimated memory ceiling of 200 gigs. (Additionally, the constraint solver stopped after finding one solution. I would prefer it to find all the solutions.)

Still thinking constraints were viable, I applied the technique while adding rows to the incomplete puzzle. This cut the memory usage in half, which was too little to help. What I needed was a way to solve the puzzle without first creating every possibility, or without storing any state at all.

This sounds a lot like brute force. It is. Dumb algorithms have their place. Oftentimes a dumb algorithm uses less memory and is easier to parallelize. My first solver used a depth-first recursive generator to iterate through each possible puzzle. After each leaf was created, it was checked for an internal contradiction. This was pure brute force, and was useless for puzzles larger than 5x5.

The old version used a cute and simple twist on brute force. Finding multiple solutions was fairly important, and brute force always will find them. Unlike naive brute force, the next version did not check every possible leaf. Instead it checked every node as it was added to the tree. Entire branches could quickly be pruned off at the first sign of contradiction: a parent node would look at its children and prune off the contradictory children. Children could prune parents, too. If nothing is possible, then it is the parent's fault and the hopeless parent is pruned. This worked much better, but was still too slow for puzzles larger than 10x10.

This was slow but workable. It seemed good enough, until Jan Wolter entered it in his Paint By Numbers Survey. There this little script was thoroughly trounced, but it won a consolation prize for being very, very small.

So, here is the second take. This new one is smart enough to line-solve anything in Mario's Picross, but not smart enough to deduce its way through all line-solvable puzzles.

The final component was an extra communication channel, so that a child could prune a hopeless parent. Each row of the puzzle may have hundreds of possible options, at least one of which must fit. If none are possible, then it is the parent's fault and the hopeless parent is pruned.

The solver takes as input a pair of lists, one for rows and one for columns. So

``````    3 1 1 2
1   ? ? ? ?
1,1 ? ? ? ?
4   ? ? ? ?
``````

becomes

``````1
1 1
4

3
1
1
2
``````

This is saved into a file whose name is passed as a command line argument. The file is parsed into a dictionary. In order to make backtracking easier, the entire board is stored in a single dictionary. The dictionary holds several different sets of data, kept separate by their keys. Cells have a tuple key of (x,y). Rows have a negative integer for their key; columns have a positive integer. Both are 1-indexed to avoid a collision between +0 and -0.

Rows and cols are exactly the same. They contain two lists, one list of clue numbers and one list of cells. The cells are dummy objects, so changing the Nth element of a row changes the underlying object and the change is automatically reflected when examining the Nth column. Each cell has three states: unknown, filled and empty.

``````
rows = [ [1], [1,1], [4] ]
cols = [ [3], [1], [1], [2] ]
``````

Each sub-list is referred to as a `row_clue` and each integer in the `row_clue` is called a `clue`.
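The shared-cell trick can be sketched with a hypothetical `Point` class (a stand-in for the dummy objects, which are not shown in this write-up; Python 3):

```python
class Point:
    "A hypothetical dummy cell with three states: 'unknown', 'filled' or 'empty'."
    def __init__(self):
        self.state = 'unknown'

# The same Point instance is placed in both a row list and a column list,
# so mutating it through the row is instantly visible through the column.
shared = Point()
row = [shared]
col = [shared]
row[0].state = 'filled'
print(col[0].state)  # filled
```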

It is a rat's nest of cross references, but it is also the easiest way. After parsing, the above mini-puzzle looks like

``````{(1,1): Point(1,1), .... , (4,3): Point(4,3),
1: ([3], [Point(1,1), Point(1,2), Point(1,3)]),
.... ,
-2: ([1, 1], [Point(1,2), Point(2,2), Point(3,2), Point(4,2)]),
.... }
``````

Solving works by applying a series of axioms to each line (known as line-solving). These axioms are applied with the `map2d()` function, which also tracks whether any progress was made. An overview of the rules:

* `blank()` - If the clue is [0], mark every cell empty.
* `empty()` - If there are no clues, mark every unknown cell as empty.
* `clean_points()` - Empty cells on the end of a line can be removed.
* `center()` - If a chunk is long enough to overlap itself, some points must be filled.
* `edge()` - If a line starts with a filled point, then it continues for the length of the first clue.
* `edge_dot()` - A special case of `edge()`, where the line starts with "unknown, filled" and the clue is 1.
* `unique()` - Find and isolate the longest unique part of a clue.
* `cant_fit()` - Chunks of unknowns too small to fit the clue must be empty.
* `cant_reach()` - If the start of the last clue is known, points past its reach must be empty.
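To illustrate the `center()` rule for the single-clue case, here is a hypothetical stand-alone sketch (Python 3); the solver's actual implementation works on cell objects and handles multiple clues:

```python
def center(clue, length):
    """Indices that must be filled when a single clue is long enough
    to overlap itself within the line."""
    # The leftmost placement covers [0, clue); the rightmost placement
    # covers [length - clue, length). Their intersection must be filled.
    return list(range(length - clue, clue))

print(center(3, 4))  # [1, 2] -- a 3-run in 4 cells always covers the middle two
print(center(2, 4))  # []    -- a 2-run in 4 cells need not overlap itself
```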

There are only two stateful data structures holding the unsolved puzzle. The first represents the board. Like Norvig's, it is a dictionary. The keys take the form of strings such as 'R4' or 'C7' for the fourth row or seventh column, respectively. Each value in the dictionary is a string of zeros and ones. When fully solved, every location on the board will be specified twice (once in its row and once in its column) and all of these pairs will agree with each other. (This check is not actually performed.)
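That agreement condition can be illustrated with a hypothetical solved 2x2 board (again, the solver itself never runs this check):

```python
# One solution for rows [[1], [1]] and cols [[1], [1]]:
board = {
    'R0': '10', 'R1': '01',   # row strings
    'C0': '10', 'C1': '01',   # column strings
}

# Every cell is specified twice; the row copy and column copy must agree.
consistent = all(board['R%d' % r][c] == board['C%d' % c][r]
                 for r in range(2) for c in range(2))
print(consistent)  # True
```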

The second structure holds the puzzle. It is a list, and each element of the list represents another recursion deeper into the solution. The elements are tuples, each holding the name-string, clue-list and length of the row/column. A list allows the puzzle to be solved in a particular order.

So, here is the code. It is only 125 lines and the remainder of this write-up is interspersed throughout. Many functions work on both rows and columns. I apologize for calling all rows and columns within a function merely 'row' and hope this does not cause too much confusion.

Some essentials.
`itertools` is awesome. It rounds out iterators and allows Haskell-ish code. It is not entirely complete, and I usually have to roll at least one lazy iterator meta-operation. This time, the gap was filled by `interleave()`. The most similar function `itertools` has is `chain()`, which concatenates a second iterator to the first after the first runs out. If you wish to yield all the first elements, then second elements, etc. of several iterators, then use `interleave()`.

``````
from itertools import *

def interleave(*args):
    "Consumes several iterators, returns iter-like flat(zip(*))"
    flags = [True] * len(args)
    while any(flags):
        for i, it in enumerate(args):
            try:
                yield it.next()
            except StopIteration:
                flags[i] = False

def names(char, length):
    "Returns dictionary keys."
    return [char.upper() + str(i) for i in range(length)]

index = {}
def populate_index(rows, cols):
    "The index contains a dict cache of all the names to their number.  {'R1' : 1} etc"
    global index
    index = dict(zip(names('R', len(rows)), range(len(rows))) +
                 zip(names('C', len(cols)), range(len(cols))))
``````

`row_count()` and `all_rle()` function almost identically. Both work by making space for the first clue and then recursively calling themselves with the remaining clues and space. Oddly enough, `all_rle()` ran half as fast when the explicit string additions were replaced with `string.join()`.

``````
def row_count(row_clue, length):
    "For a given clue, returns the number of possibilities."
    cc = row_clue[0]  # cc is current clue
    if len(row_clue) == 1:
        if cc == 0:
            return 1
        return length - cc + 1
    tail = row_clue[1:]
    min_tail = sum(tail) + len(tail) - 1
    return sum(row_count(tail, length-(cc+i+1)) for i in xrange(length-cc-min_tail))
``````
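As a sanity check, here is the same function in Python 3 (only `xrange` becomes `range`), run on the clues of the example puzzle:

```python
def row_count(row_clue, length):
    "For a given clue, returns the number of possibilities (Python 3 port)."
    cc = row_clue[0]  # cc is current clue
    if len(row_clue) == 1:
        if cc == 0:
            return 1
        return length - cc + 1
    tail = row_clue[1:]
    min_tail = sum(tail) + len(tail) - 1
    return sum(row_count(tail, length - (cc + i + 1))
               for i in range(length - cc - min_tail))

print(row_count([1, 1], 4))  # 3 placements: 1010, 1001, 0101
print(row_count([3], 4))     # 2 placements: 1110, 0111
```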

``````
def all_rle(row_clue, length):
    "Yields all possible RLEs for a row_clue."
    cc = row_clue[0]  # cc is current clue
    if len(row_clue) == 1:
        if cc == 0:
            yield '0' * length
            raise StopIteration
        for i in xrange(length-cc+1):
            yield '0'*i + '1'*cc + '0'*(length-cc-i)
        raise StopIteration
    tail = row_clue[1:]
    min_tail = sum(tail) + len(tail) - 1
    for i in xrange(length - cc - min_tail):
        cs = '0'*i + '1'*cc + '0'  # current string
        gap = cc + i + 1
        for ts in all_rle(tail, length - gap):  # tail strings
            yield cs + ts
``````
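A Python 3 port of `all_rle()` needs two mechanical changes: `xrange` becomes `range`, and `raise StopIteration` becomes `return` (PEP 479 forbids raising `StopIteration` inside a generator). Enumerating the example's [1,1] row then shows the three placements that `row_count()` predicts:

```python
def all_rle(row_clue, length):
    "Yields all possible RLEs for a row_clue (Python 3 port)."
    cc = row_clue[0]  # cc is current clue
    if len(row_clue) == 1:
        if cc == 0:
            yield '0' * length
            return
        for i in range(length - cc + 1):
            yield '0'*i + '1'*cc + '0'*(length - cc - i)
        return
    tail = row_clue[1:]
    min_tail = sum(tail) + len(tail) - 1
    for i in range(length - cc - min_tail):
        cs = '0'*i + '1'*cc + '0'  # current string
        for ts in all_rle(tail, length - (cc + i + 1)):  # tail strings
            yield cs + ts

print(list(all_rle([1, 1], 4)))  # ['1010', '1001', '0101']
```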

As more rows are known, more columns are constrained (and vice versa). By looking at the entries perpendicular to the row/col of interest, we can tell if the locations are known to be zero, one or unknown.
`cross_row()` returns this summary. With the board key as input, the output is a list of tuples. Each tuple holds a row/col number and either a one or a zero. No tuple is generated for unknown positions.

``````
def cross_row(board, name):
    "Returns a summary of a board row/col."
    cross = {'C': 'R', 'R': 'C'}
    char = cross[name[0]]
    i = index[name]
    filtered = (k for k in board if k[0] == char)
    return [(index[k], board[k][i]) for k in filtered]
``````
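To make the summary format concrete, here is a small Python 3 demonstration with the `index` cache filled in by hand for a 2x2 board rather than by `populate_index()`:

```python
index = {'R0': 0, 'R1': 1, 'C0': 0, 'C1': 1}

def cross_row(board, name):
    "Returns a summary of a board row/col."
    cross = {'C': 'R', 'R': 'C'}
    char = cross[name[0]]
    i = index[name]
    filtered = (k for k in board if k[0] == char)
    return [(index[k], board[k][i]) for k in filtered]

# With only the first row decided on a 2x2 board...
board = {'R0': '10'}
# ...column 1 is known to be '0' where it crosses row 0, and unknown elsewhere.
print(cross_row(board, 'C1'))  # [(0, '0')]
```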

The summary made by `cross_row()` is passed on to `valid()`. When `valid()` fails, the child is pruned.

``````
def valid(summary, row):
    "Confirms if a pattern fits the given summary."
    return all(row[i] == v for i, v in summary)
``````
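A minimal illustration of `valid()` (Python 3), using a hand-written summary in place of `cross_row()` output:

```python
def valid(summary, row):
    "Confirms if a pattern fits the given summary."
    return all(row[i] == v for i, v in summary)

# Suppose cross_row() reported that position 0 must be '0':
summary = [(0, '0')]
print(valid(summary, '010'))  # True
print(valid(summary, '100'))  # False -- candidate contradicts the board
```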

The term "must" is taken quite literally. Before writing to a point, an `assert` confirms it is writable. Failures here indicate an erroneous puzzle or a bad guess in the constraint solver. Once a clue number is filled out, the clue number is popped from the clue list and the filled out (solved) portion of the line's point list is removed. The solver works by picking away at the edges of each line.

When these rules get stuck, the solver makes a guess at the edge of one of the lines. I still have not learned Norvig's lesson: it picks the first line it sees, instead of finding the most probable place to guess. An improvement to make later.

Todo: At 285 lines, it is twice as long as the old one. Too long. And it can't look for multiple solutions. And it could be faster.

`solve()` is the meat of the program. This is the part which recursively explores the tree of possibilities. It also contains the code for pruning a parent. Pruning a parent is a three-step process. First, a `ParentError` must be raised in the child below the parent, because every one of the child's candidates was invalid. When this happens, the parent finds where its ones or zeros caused the error. This position is noted, and the parent proceeds to skip all patterns matching the ban. If both one and zero are banned in the same location, then the parent raises a `ParentError` of its own. Note that when a solution is found it is not returned up the stack; rather, it is printed out. The solver then continues to explore the tree, looking for multiple solutions.

This version is still too slow to solve a 100x100 on a single computer. However, solving is not impossible. The next version of the solver will support parallelization by limiting segments of `valid_children`.

``````
class ParentError(Exception):
    pass
``````

``````
def solve(puzzle, board):
    def cleanup(name):
        "board is passed by reference, so return it untouched."
        if name in board:
            del(board[name])
    name, row_clue, length = puzzle[0]
    puzzle_tail = puzzle[1:]
    summary = cross_row(board, name)
    ban = []  # holds tuples, (position, value_banned)
    any_valid = False
    valid_children = (r for r in all_rle(row_clue, length) if valid(summary, r))
    for row in valid_children:
        any_valid = True
        if any(row[i] == v for i, v in ban):
            if len(set(zip(*ban)[0])) != len(zip(*ban)[0]):
                # 1 and 0 both banned in same position, big problem
                cleanup(name)
                raise ParentError
            continue
        board[name] = row
        if not puzzle_tail:  # out of clues, puzzle solved, print it
            print_board(board)
        else:
            try:
                solve(puzzle_tail, board)
            except ParentError:
                if name[0] == puzzle_tail[0][0][0]:
                    # parallel rows do not intersect, handle elsewhere
                    cleanup(name)
                    raise ParentError
                else:
                    err_index = index[puzzle_tail[0][0]]
                    ban.append( (err_index, row[err_index]) )
    # all clues tested, go back up
    cleanup(name)
    if not any_valid:
        # every candidate contradicted the board, so blame the parent
        raise ParentError

def print_board(board):
    rows = sorted((index[k], board[k]) for k in board if k[0] == 'R')
    for pos, data in rows:
        print ''.join(('.','#')[int(char, 10)] for char in data)
    print
``````

Using a combination of `row_count()` and `sorted()`, the puzzle data structure places the most obvious parts first. However, note the use of `interleave()`. Alternating rows and columns allows for a maximum amount of cause and effect between parent and child, which lets the solver draw conclusions about bad parents faster.

``````
def solution(rows, cols):
    "Converts a list of clues into the puzzle data structure."
    populate_index(rows, cols)
    # ( (name, row_clue, length), ... )
    rows2 = izip(names('R', len(rows)), rows, repeat(len(cols)))
    cols2 = izip(names('C', len(cols)), cols, repeat(len(rows)))
    # [ (row_count, name, row_clue, length), ... ]
    rows3 = iter(sorted((row_count(c, l), n, c, l) for n, c, l in rows2))
    cols3 = iter(sorted((row_count(c, l), n, c, l) for n, c, l in cols2))
    # [ (name, row_clue, length), ... ]
    puzzle = [datum[1:] for datum in interleave(rows3, cols3)]
    board = {}
    solve(puzzle, board)
``````
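For the 4x3 example puzzle, the ordering can be checked with a small Python 3 port (same logic, with `izip` and `xrange` replaced by their Python 3 equivalents): the easiest lines come first, and rows alternate with columns.

```python
from itertools import repeat

def row_count(row_clue, length):
    "Counts the possible placements for a clue (Python 3 port)."
    cc = row_clue[0]
    if len(row_clue) == 1:
        return 1 if cc == 0 else length - cc + 1
    tail = row_clue[1:]
    min_tail = sum(tail) + len(tail) - 1
    return sum(row_count(tail, length - (cc + i + 1))
               for i in range(length - cc - min_tail))

def interleave(*args):
    "Yields all the first elements, then all the second elements, etc."
    flags = [True] * len(args)
    while any(flags):
        for i, it in enumerate(args):
            try:
                yield next(it)
            except StopIteration:
                flags[i] = False

rows = [[1], [1, 1], [4]]
cols = [[3], [1], [1], [2]]
rows2 = zip(['R%d' % i for i in range(len(rows))], rows, repeat(len(cols)))
cols2 = zip(['C%d' % i for i in range(len(cols))], cols, repeat(len(rows)))
rows3 = iter(sorted((row_count(c, l), n, c, l) for n, c, l in rows2))
cols3 = iter(sorted((row_count(c, l), n, c, l) for n, c, l in cols2))
puzzle = [d[1:] for d in interleave(rows3, cols3)]
print([name for name, clue, length in puzzle])
# ['R2', 'C0', 'R1', 'C3', 'R0', 'C1', 'C2']
```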

``````
def test():
    rows = [[3], [2,2], [2,2], [1,1], [2,2], [2,2], [3]]
    cols = [[3], [2,2], [2,2], [1,1], [2,2], [2,2], [3]]
    solution(rows, cols)

def prof():
    import cProfile
    cProfile.run('test()', sort=1)

if __name__ == "__main__":
    test()
    #prof()
``````