Categories
Programming

Poor lemmings, or adventures into cliques

I love programming competitions and try every new site I find. Those sites are worth another post later; this one is about a programming contest on CoderCharts I took part in recently. It finished about two days ago and I didn’t do as well as I hoped, but much better than I would have a year ago. Out of the 8 puzzles I had 5 solved, 1 given up on for now, 1 not even started and 1 attempted but ran out of time. This last one is Lemmings mating, one of the “hard” puzzles, and when I say ran out of time I mean that at 4am I decided that for the 4 hours then left of the competition I’d rather sleep than try to think of another optimization. Still, it bothered me, and afterwards I did manage to write a solution (with ~70% score, not too bad). Now I want to document how I got there so I won’t forget.

* Spoiler warning. If you haven’t solved the problem and want to do it independently, do not read on. *

Path to a solution

Looking at the problem statement, there’s clearly a graph problem behind all the story of the poor creatures. When I’m not sure not just what the solution should be, but what the right description of the problem even is, I usually open The Algorithm Design Manual (ADM). It’s a great book; so far I couldn’t find a problem where it had nothing to say. It is a bit dated, however, and while it sets me on the right path, there’s usually much more reading to do afterwards. In this case it quickly revealed that the problem I face is finding the maximum clique in the graph describing the problem. It also quickly turned out that there’s a big can of worms waiting for me in there.

Maximum clique

So, to summarize: a clique is a group of vertices that are all connected to each other by edges. A clique is maximal when it is not possible to add any other vertex in a way that everyone’s still connected. Finally, the maximum clique is the largest maximal clique. The terminology is logical but still easy to mix up. It is relatively easy to find a maximal clique (start from any vertex and add more connected vertices until there’s none left that is connected to all already selected, as in the sketch below). Finding the maximum clique (or any maximum clique, since there could be more than one) is, however, a proper NP-complete problem.
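
A minimal sketch of that greedy procedure, assuming the graph is stored as a dict mapping each vertex to the set of its neighbours (the same representation the code further down uses):

def greedy_maximal_clique(graph, start):
    # Grow a maximal (not necessarily maximum!) clique from `start`
    # by adding any vertex connected to everything picked so far.
    clique = set([start])
    for v in graph:
        if v not in clique and all(v in graph[u] for u in clique):
            clique.add(v)
    return clique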

The ADM only says something along the lines of “well, then you just have to do backtracking through the whole graph”. Which is a simple idea to implement and would have given me a result, but it is terribly inefficient, because a naive implementation checks lots of cases which cannot possibly contain the result. So the first thing to do is pruning, or eliminating paths that we are sure cannot improve what we already have. Wikipedia helps in finding just such a thing, the Bron–Kerbosch algorithm. The idea is quite simple (once one wraps one’s head around recursive functions) and it can be implemented in Python following the pseudo-code version on Wikipedia. Here’s mine (not really optimized or anything):

def bronkerbosch(R, P, X, graph):
    """
    Return the largest clique containing all vertices of R,
    some from P and none from X, where the graph dict maps
    each vertex to the set of its neighbours.
    """
    if len(P) == 0 and len(X) == 0:
        # nothing left to add or exclude: R is a maximal clique
        return R
    best = set()
    for v in P.copy():
        res = bronkerbosch(R.union([v]),
                           P.intersection(graph[v]),
                           X.intersection(graph[v]),
                           graph
                           )
        if len(res) > len(best):
            best = res
        # v is fully explored: move it from the candidates to the excluded set
        P.remove(v)
        X.add(v)
    return best
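
For example, on a tiny graph given as a dict of neighbour sets (my own toy input, not from the contest), it finds the triangle:

graph = {1: set([2, 3]), 2: set([1, 3]), 3: set([1, 2, 4]), 4: set([3])}
print(bronkerbosch(set(), set(graph), set(), graph))  # prints the clique {1, 2, 3}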

This really worked: 3 out of 7 tests passed, but the rest timed out. Got to find something better. Looking around more on Wikipedia, I was tempted to go on with the version of the algorithm they call “the fastest”, by J.M. Robson. It’s written up, but it is from the pre-MathJax era, so it’s just terrible work to figure out and keep up with all the strange notation. Also, it looks like a big collection of special cases and theoretical shortcuts. I’m sure it works, and maybe I’ll come back to it later, but for now I wanted a little more insight as well, so I looked further.

Vertex colouring

Since at this point I had caught on to the fact that maximum clique finding is an important problem, I went to Google Scholar to see what academia has written. A lot, apparently. I was browsing through the papers in roughly reverse-chronological order, so I would find the “newest” algorithms first, figuring those should be the best. In the end a pattern emerged: most of the hard work can be done by employing a different technique, vertex colouring. The two are connected because a clique in a graph G is an independent set in the complement graph of G (two vertices are connected in the complement of G if and only if they are not connected in G), and vertex colouring is a good method of finding independent sets. What we want is a labelling of the vertices with the smallest number of different labels (“colours”) such that vertices with the same label are independent of each other. The number of different labels we have to use helps prune our clique finding algorithm by setting an upper bound on how big a clique can possibly be made from the vertices we have at a particular step.
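
With the same neighbour-set dict representation as above, building the complement is a quick sketch:

def complement(graph):
    # u and v are neighbours in the complement iff they were not in G
    vertices = set(graph)
    return dict((v, vertices - graph[v] - set([v])) for v in graph)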

As an example we can start out with a graph like this (taken from the original lemmings problem):

A sample graph for illustration
Defining the graph: vertices are labelled and edges show connection

When running this graph through a vertex colouring algorithm, we would get something along these lines:

Result of graph colouring
The result of the graph colouring: vertices of the same colour are independent

The blues are not connected to each other, neither are the greens, and so on. In this case we need 4 colours to separate the graph into independent groups. Of course, when programming, one would use numbers instead of colours (thus this is often called “numbering”): blue was actually “1”, red was “2”, and so on. By adjusting the order of the colours and the order of the vertices within each colour, the previous algorithm can be improved by orders of magnitude, since we get much better pruning. In a clique every vertex must have a different colour (though not all colours will necessarily be used).

Some notes on the algorithm to get started:

  1. Try to keep nodes ordered by their number of neighbours in the current sub-graph. Smaller is good, because such nodes can be quickly eliminated, reducing the test-space more.
  2. Go from colours with fewer members to those with more members, for similar reasons.
  3. Use the colour number plus the current clique size for better pruning: if my largest clique so far has Q elements, my current clique has P, and the tested vertex comes from the highest colour number N, then if N + P <= Q it is futile to go on; I cannot improve the result and it’s time to backtrack. Less efficient methods use the number of elements C in the current candidate set instead of N, and since N <= C, the pruning bound set by C + P is “looser”, letting more tests be done than necessary. (See the colouring sketch after this list.)
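
To make the colouring part concrete, here is a minimal sketch of a greedy (“sequential”) colouring, without any of the tuned orderings from the papers: take the vertices in some order and give each one the smallest colour number not used by its already-coloured neighbours.

def greedy_colour(vertices, graph):
    # Returns {vertex: colour}; vertices sharing a colour are pairwise
    # non-adjacent, so the largest colour used bounds the clique size.
    colours = {}
    for v in vertices:
        used = set(colours[u] for u in graph[v] if u in colours)
        c = 1
        while c in used:
            c += 1
        colours[v] = c
    return colours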

For example, the final solution of this given problem, the set of vertices connected by the highlighted edges, is shown on the next picture, and it is found in only 4 steps or so…

The final solution, the maximum clique
The maximum clique on this sample graph defined by the highlighted edges.

Different methods

In the papers I found a few different colouring schemes, and most of the improvements seem to be made in:

  • initial ordering, or
  • ordering within each of the colours, or
  • better upper-bound estimation for the possible clique size from the available colour information.

The hardest thing about reading these papers is that often the pseudo-code is a big mess, there are plenty of typos and the examples are quite scarce.

In the end I used algorithms mixed from two different papers:

Tomita, Akutsu and Matsunaga, Efficient Algorithms for Finding Maximum and Maximal Cliques: Effective Tools for Bioinformatics, 2011 (link). This has a few different algorithms; the most effective one looked too complicated, so I went with MCQ, one of the improved but not perfect ones. Now 6/7 tests passed; that’s got to be the right path. According to benchmarks the colouring takes up the most time.

Segundo, Rodriguez-Losada and Jimenez, An exact bit-parallel algorithm for the maximum clique problem, Computers & Operations Research, Volume 38, Issue 2, 2011, pp. 571-581 (doi:10.1016/j.cor.2010.07.019). Their algorithm is a variation on the previous one, and I’m not using it to its full potential (I wonder how fast a C implementation would be). Also, I wonder if I misunderstood something, but the original form of the algorithm failed on me. There are nodes that don’t need colour information because it does not matter at that stage of the algorithm, and their version of the colouring seems to remove those vertices from the colouring function’s output. This does not matter for their example (it comes out correct), but in my tests it gives a wrong result (the algorithm finishes too early). So I mixed in from the previous paper that I keep all nodes whether they are coloured at this point or not, because later they might be, and it works like a charm. It’s also about a factor of 2-3 faster than before, enough to pass all 7 tests on CoderCharts with a reasonable score. I think the real improvement is not really in the pruning (this version seems to check a few more nodes than the previous one), but somehow the colouring function is faster; that might tell something about my Python skills too, though.

Lessons

This could probably be solved with other algorithms as well; I would be curious to see others’ code and hear their stories. What I took home from this:

  • Know your problem and know where to look for solutions. The best way is to study and practice a lot.
  • Often pre-processing the data is a crucial step towards the solution. Since there’s usually a memory/speed tradeoff, it is worth experimenting: if an algorithm is too slow, what could be precomputed and stored to replace a calculation with a mere data lookup?
  • Different algorithms have bottlenecks in different places. E.g. in this case there is the data preparation, the number of steps to test possible solutions, and the time it takes for re-colouring. On the path to the final solution I had algorithms inefficient in each of these.
  • Test cases are important. The example in the problem setting and the examples in all the papers are too small to be helpful for optimization. The “bonus” test case on the problem site, however, is just too big: even my accepted algorithm is too slow to produce a result for it in reasonable time. So, if there’s no suitable provided test case, make your own (e.g. with a random graph generator like the sketch after this list).
  • The test cases I generated are much smaller than the 7 tests I need to pass, but they still run slower. Every paper said that maximum clique finding has very different running times on different (random) graphs, and I presume some of the test graphs are specially prepared. Also, some of the code changes that made my own test cases faster (sometimes by 20%) failed more of the tests on the site. The message is that coding competitions like this are geared towards finding the optimal solution for their own tests: whatever your winning solution might be, it is not necessarily the best overall solution, and if I ever come across a similar problem in a real-life setting I will likely have to do things differently.
  • Most of the Python optimization advice I found is outdated and sometimes outright hurts performance. Do plenty of performance testing, and start with the big slow-downs; the small shortcuts are rarely worth the time in a competition setting.
  • Language choice matters, but not that much. I’d like to think Python actually pushes me to be more efficient because many things are slower; the code, on the other hand, ends up more readable (ideally).
  • Someone has to update Wikipedia. :)
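
For reference, the kind of test-case generator I mean is nothing fancier than an Erdős–Rényi-style random graph; this is just my throwaway sketch, nothing like the specially prepared contest graphs:

import random

def random_graph(n, p):
    # n vertices; each possible edge is present with probability p
    graph = dict((v, set()) for v in range(n))
    for u in range(n):
        for v in range(u + 1, n):
            if random.random() < p:
                graph[u].add(v)
                graph[v].add(u)
    return graph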

I will share the code later, just need to wait until some time has passed after the competition.

Categories
Life Maker

Monocle and the one idea

I’m always looking for new and interesting magazines to read. I do believe there’s still a future for printed journalism, even if most of what I read now is online (Hacker News, myriads of blogs on Google Reader, links shared on Facebook and Twitter…). During my time at university, a perfect weekend program was having brunch in the Common Room and checking out the latest issue of The Economist. My interests are mostly in analysis, world affairs, and getting insights from people with much more experience; not shoved down my throat like many dailies seem to do, but with space left for me to make up my own mind.

Monocle Issue 40 cover
Cover of Monocle Issue 40

Recently I was checking out Monocle in my local Eslite Bookstore; it was wrapped up, since it’s not one of the cheapest (NT$520, almost twice the cover price elsewhere). The cover promises ABCDE: Affairs, Business, Culture, Design and Edits, which is all good and could be very interesting. I checked on the web a little what others had written about it, and it was all praise: the worldwide reach (apparently they have offices all around the globe), that they do their own photography because they want the best, all the enthusiasm of the staff… I actually felt that this might be the real-life embodiment of the Millennium magazine. That’s certainly a lot to live up to, isn’t it?

So, last week I bought it. First impression: I was pretty underwhelmed. The features are thin on content, most of the magazine feels like an IKEA catalogue, and how can I relate to something that advertises £190 polo shirts? I wasn’t that excited about it anymore, and it certainly wasn’t the Millennium.

Nevertheless, I took it out every now and again, read more of it, and things did change. I think I was wrong to hype it up for myself; I should have judged it on its own merits. And on those, the writers are certainly clever. Maybe their bread and butter is making many short observations instead of a few long ones, but the topics are actually worthy. The photography is indeed top-notch. The designs they show are really cool (and living in Taiwan, where I’m quite spoiled with good design, that is a high bar). In the end, I’d say it is a good magazine. Maybe not for me, or not every single issue, but if I found it in a library I’m sure I’d check it out. If design were my business, I’d subscribe.

The One Idea

Nevertheless, there was one idea that stuck with me (and maybe that’s one reason I’ve started to change my mind about Monocle), and it came from a rather short editorial titled “What Ireland can learn from Finland”. The writer argues that Ireland’s crisis was pretty much inevitable, since no economy can survive on the service industry alone. They should instead start to make things again, rolling up their sleeves and creating something tangible.

After reading this I felt a bit shocked. Look at the things I’m doing: reading the hymns to software developers and Web 2.0 startups every day, positioning myself to become a better programmer. And despite doing all that, I do remember now that I wanted to make things, and I haven’t. It feels like a wake-up call: there are things that I value more, but I had forgotten about them.

This got me thinking: instead of trying to be an awesome programmer (good luck with that), I really should figure out how I can leverage my maker background (every experimental physicist is a maker) and my programmer ambitions to create something new. Not giving up either (I couldn’t), but finding whatever unique combination of skills I might have. I really feel this is what would bring the much-desired sense of achievement.

It also got me thinking that I have all the issues of Make: from the last two years or so, but never actually made anything. That I planned to set up a hackerspace in Taipei, but never got beyond asking my friends who would be interested. That I admired and saved so many things on Instructables, but always let them linger on my Next-Action List.

“If not now, when? If not you, who?”

Now just stay tuned as I try to follow through. :)

Categories
Admin

There’s a war out there

Since I set up my little Virtual Private Server about two months ago, I keep reading and learning more about its administration. In particular I’m trying to make it more secure, since nobody likes losing data or having their things used behind their back. I know that the Internet is a tough place. Most computer users are nicely isolated behind their routers and internal networks; nevertheless, I once had a freshly installed WinXP get infected less than 5 minutes after connecting to the Net. (Well, since then I don’t install anything Microsoft, and the first thing I take care of is security, so things are much better.)

Thwarting brute force attacks

One of the first things is securing remote login access to the machine: disabling root login for SSH is always a good idea. But since I’m interested in cleverer methods, I wanted to do something more potent and general. I found this blog post about how to limit brute-force attacks with iptables, so I set out to implement it. The basic idea is that if another computer is trying to connect too many times in short succession, it is likely an attack. Use the firewall to count how many connections are made to the sensitive ports in a specific time interval, and if a threshold is passed, ban that host from connecting for a while. I liked it and had to implement it.
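
The gist of it can be expressed with iptables’ “recent” module; this is just a sketch (the ATTACKED chain name matches the log rule below, the list name and thresholds are my own), the linked post has the complete ruleset:

# track new SSH connections; hosts with 5+ attempts in 60 seconds
# get handed over to the ATTACKED chain
-A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH
-A INPUT -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 5 --name SSH -j ATTACKED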

The information on the linked page is quite detailed and very useful. Just save the current iptables rules, edit them, then restore them:

# iptables-save > myrules
.... edit them rules ....
# iptables-restore < myrules

For remote servers one thing to be extra careful about is not to block the SSH connections completely: keep the current connection open, try to make a new connection and if you can log in, then things should be fine.

The only thing I changed compared to the other site is the log level, so I can separate the messages better. The following line originally had --log-level 7 (debug); I’m using --log-level 4 (warning):
-A ATTACKED -m limit --limit 5/min -j LOG --log-prefix "IPTABLES (Rule ATTACKED): " --log-level 4

Then update the line in /etc/syslog.conf to:
kern.warning   /var/log/warnings

Of course this might vary somewhat from Linux distro to distro: the above is for my CentOS install with syslog.

From the logs

Well, I’m not sure whether my host counts as particularly busy. I assume it doesn’t, since I don’t rank high in Google, so fewer attackers would find my little “home”. Still, over the last month there’s a nice little collection of IP addresses which triggered that ATTACKED rule of the firewall.

Using Python, I extracted the IP addresses from the logs, ran them through the GeoIP Python API to get their locations, and fed that into the Google Static Maps API to get this picture:

Location of hosts that triggered my ATTACKED iptables rules
Location of hosts that triggered my ATTACKED iptables rules. Red: once, blue: 2-9 times, yellow: 10+ times
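
The extraction step looked roughly like this; a sketch from memory, assuming the legacy MaxMind GeoIP bindings and the log prefix configured above (the full script is linked in the update below):

import re
import GeoIP  # MaxMind's legacy Python bindings

# matches the prefix set in the iptables LOG rule above
attacked = re.compile(r'IPTABLES \(Rule ATTACKED\).*SRC=(\d+\.\d+\.\d+\.\d+)')

hits = {}
for line in open('/var/log/warnings'):
    m = attacked.search(line)
    if m:
        hits[m.group(1)] = hits.get(m.group(1), 0) + 1

gi = GeoIP.open('/usr/share/GeoIP/GeoLiteCity.dat', GeoIP.GEOIP_STANDARD)
for ip, count in sorted(hits.items(), key=lambda item: -item[1]):
    record = gi.record_by_addr(ip)
    if record:
        print('%s: %d hits, %s, %s' % (ip, count, record['city'], record['country_name']))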

Altogether, in about one month, I logged 110 ATTACKED triggers from 47 different hosts. Most of them tried only once; one tried 48 times. According to the GeoIP database, that one is from Varna, Bulgaria. Well, if one good thing came out of this, it’s that Varna actually looks quite nice and I’d be interested to visit. :) Talk about strange reactions to things…

It seems Europe and China are up to no good. Not sure whether there are fewer American baddies or they just mostly target Americans. I might investigate the regional differences some time later. This is just for curiosity and fun, though; if I were serious, I would set up a proper honeypot.

Some technical notes on making this picture:

  • The GeoIP Python API is one of the worst-documented pieces of code I’ve ever seen. I found a tutorial that helped me get the results I wanted: cities and locations, not just countries.
  • Static maps are quick, dirty and limited. I will try to figure out how to use the Google Maps API for a proper zoomable, scrollable, annotated map. I could imagine a heat-map of threats, or better colour-coding of the number of attempts from each IP/city.

Anyway, at least there’s no sign of unauthorized entry so far, since most of these attacks are not sophisticated at all. I wonder if I’d even recognize being targeted by a sophisticated attack, but that’s not something to fret over. Just keep the automated backups going and it will all be fine. :D

Update:

The Python script I used to get that map can be found over here.

Categories
Computers Life

New Laptop or You Had Me at “No OS”

I had been wanting to upgrade my laptop for quite a while. It was a good ol’ Acer Travelmate 4501wlmi from 2004. I’m not sure why I kept it for such a long time; maybe I liked torturing myself. By the end the screen was barely hanging on its hinges and the video card memory was corrupt, so the screen went all funky sometimes, but what finally did it in was the flaky, failing wireless.

Lenovo X201i
Artificially arranged desktop :)

I had checked out beforehand what the acceptable alternatives for a new laptop were. Then last weekend I went and got myself a new Lenovo X201i. When I first went to the store, I wasn’t sure whether I’d buy it, or which model to go for. I tried to get some information from the clerk about the available options, but the communication gap I usually have here in Taiwan, due to my limited Chinese, wasn’t an advantage. In the end all I did was pretty much confirm what I already knew: the Lenovo X-series is their smallest ultraportable, they can be quite powerful, and they are pretty popular. When he asked me what kind of system I wanted and I told him “none”, I got a good confirmation that I had come to the right place. At all other stores the reactions ranged from apologetic raised eyebrows to statements that “selling laptops without Windows is illegal” (true story). Here, on the other hand, he just got out his “No OS deals” sheet, and I just picked the most powerful of them: it had everything I needed and was altogether about 20% cheaper than the other model I had been considering. He said there were only 3 left, so I galloped off to the nearest ATM, and there I had it. Good times.

A few days later I went back to get a few small details sorted out: exchanged the battery for a larger one (6 to 9 cells), upgraded the memory (2 to 8 GB) and switched the keyboard cover to the right one. This time the limited Chinese worked to my advantage. I was talking to a different person, who knew even less English than the previous clerk, so whenever he contradicted the deals I had been promised, I just had to question it and they gave me the deal, instead of getting into any conversation about why I couldn’t have it. It’s all fine, I wasn’t abusing this “power”, but I wasn’t going to be taken advantage of that easily either. All in all, it was quite a good deal, even if it would have been cheaper to order directly from America over the internet.

Experience so far (~5 days):

  • This machine does not compete for any beauty prize, so never mind that the 9-cell battery does not improve on that front. It is still okay for me. The matte finish on the cover picks up every touch, so it’s going to look pretty “used” soon. The keyboard cover is a good idea, knowing myself, but does not improve things either.
  • It is not really fair to compare it to a computer 6 years its senior, but it’s such a breath of fresh air how snappy it is. Not the most powerful computer I’m using (hard to beat the office’s quad core) but certainly a small powerhouse on the go.
  • The size is just right. I had an EeePC before, and I thought I could get really used to it, but in the end the limitations were just too much. I still have to find a good, small, laptop-friendly backpack, but with its 12″ screen that shouldn’t be a big deal.
  • With the 9-cell I get about 6-7 hours of light use out of it. This is before any real power optimization; Linux has a lot of tricks, and even things like sound card power saving can go a long way. Still have to investigate.
  • Installed my usual Arch Linux, now with a fully encrypted filesystem (not that I’m planning to let it be stolen). It will take a while to get my old settings back, but at least I can organize them better.
  • The Esc key is in some weird place in the corner, so I keep pressing F1 instead. Even though this is the No OS version (they even spared it the “Windows 7” sticker), I still have the Windows button; I will try to find some appropriate role for it.
  • Haven’t had a chance to try the WiMAX or the built-in camera. The first will probably stay that way; the second I should put to use with Skype.
  • Keyboard lighting is ace for nighttime stuff, just like now.
  • The pointing stick does not really like the keyboard cover. It is no big deal; I’m more of a touchpad fan. The touchpad has 5 different buttons, but none of them emulates a mouse wheel as far as I can tell. I want to find out what does emulate it; that should be very useful. The pad itself acts up sometimes, but nothing too annoying.
  • The 320 GB hard drive is not bad at all, but I’ll look out for a good SSD; it should save power and improve speed.
  • The screen is a bit picky about the angles it wants to be looked at from. I know the tablet version (X201t) is much better; this one I’ll just have to live with.
  • Built-in fingerprint reader: I’ve got to get the drivers working, but it would be awesome to use it for the constant sudo goodness that a well-secured system requires.

Now I have no excuse not to be very productive anywhere and everywhere.

Categories
Programming

Hacker Cup Round 1 Redux

Last night’s Hacker Cup round, which was a second try at the first round after last week’s disaster, didn’t go quite as well for me as I hoped. Probably not for many others either: even though 1000 people can advance, fewer than that submitted code. Maybe due to disappointment over last week’s failure (I was hesitant to take part myself, too), but also because the problems were, I think, at least one level up. Which is not a problem, just an observation.

by jailman @ Flickr
White Keyboard and Coffee by jailman @ flickr

Got my coffee at 1am, got into the mood and started off right on time at 2am. In the next 3 hours I finished one of the problems and got halfway through another, which is below par, but what can I do. Need to learn more. One thing I noticed is that if one just searched the web with the right keywords, Problems 1 and 3 could be handed over on a silver platter, more or less… which is kind of random for a programming competition. Did they want people who can find the right solutions, or people who already know the solutions to all of these problems? The first begs the question of whether there will be Wikipedia access in the final. The second begs the question: who could such a person be? And if neither, why choose problems that cannot possibly be solved in the allotted time without prior knowledge or external input?

Well, I wonder what the right keywords would be for Problem 2, which I could not solve yet.

I’m also bothered that the input file one had to download for Problem 3 does not seem to conform to the input specs. Why? Especially since once you download the file, the system gives you 6 minutes to submit. This is rather underhanded…

I will certainly try to check out the other two sub-rounds as well, even if they are at 5am next Wednesday and next Sunday local time. I’m not doing this for the glory but for the education anyway. :)

Notes on solutions

Warning: Spoilers ahead. Don’t read if you want to solve them yourself.

Almost all my code for this can be found in this Github repo, together with other rounds’ code.

1) Wine Tasting

This one is more mathematics than anything. Let’s have G glasses of wine and require a minimum of C correct guesses. For i = C to G, calculate how many ways one can pick i correct guesses, multiplied by the number of ways in which none of the remaining guesses is right. Sum these numbers up, and that’s your result.

I was thinking for a long time about how to calculate the number of ways none of the guesses is right (i.e. the number of permutations where none of the elements is in its original place). Finally I found there’s a name for that: derangement. And it is not at all simple, so I guess I wouldn’t have figured it out myself.

In the end, what we have to return is:
$$\mathrm{result} = \sum_{i = C}^{G}{G \choose i} (G - i)! \sum_{j = 0}^{G-i} \frac{(-1)^{j}}{j!}$$

Now that it actually makes sense, it turns out this very expression has a name too: rencontres numbers.

Update: fixed my program and now should be correct.

This can be a very large number due to the factorials, so one has to use the right types. In my Python code I had to use 64-bit numbers, but it’s rather ad hoc; I just picked types so that things wouldn’t break. I do have to revisit this and fix the types properly. (And don’t forget the modulus.)

Also, I used SciPy to get the results, but I only needed the factorial. If I quickly write a fast factorial subroutine, SciPy is not needed; another thing to fix in my code.
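
In plain integer arithmetic the whole formula fits in a short sketch (Python’s arbitrary-precision integers sidestep the 64-bit worry entirely; the function names here are mine, and mod stands for whatever modulus the problem statement specifies):

from math import factorial

def derangements(n):
    # D(n) = n! * sum_{j=0}^{n} (-1)^j / j!, kept in exact integers
    return sum((-1) ** j * factorial(n) // factorial(j) for j in range(n + 1))

def choose(n, k):
    return factorial(n) // (factorial(k) * factorial(n - k))

def wine_tasting(G, C, mod):
    # for each i = C..G: (ways to pick i correct guesses) * (derange the rest)
    return sum(choose(G, i) * derangements(G - i) for i in range(C, G + 1)) % mod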

It turned out that my code failed, so basically this round was 0 out of 3. Nevertheless, I wanted to see what was wrong, so I rewrote most of the math code. After messing around for a while, it turned out that all my problems came from

from __future__ import division

which turns on the newer, py3k-style true division. Once I removed it, suddenly everything was fine. (It makes sense in hindsight: with true division the integer ratios of big factorials silently become floats and lose precision.) I really have to remember this, because I have used this import in practically every piece of math-related code I’ve written lately. Besides this change, I kept the other upgrades: fast factorial and choose. My output for the given input file has an md5sum of d17dfc9e9fef637f771da3d693d7920b.

2) Diversity Number

This one I failed on, and it seems very few people succeeded at all (fewer than 60 in the whole round). I’m starting to understand bits and pieces of the possible solution, but I have nothing complete yet. I wonder if there’s an insight into this one like the derangement was for the previous problem.

Anyway, one thing I know: if there’s a sorted list A, with length n and indexing starting at 0, then the diversity number is
$$\mathrm{diversity} = \prod_{i=0}^{n-1} \left(A_{i} - i\right)$$
or in Python

# Python 2: reduce and xrange are built in
prod = lambda array: reduce(lambda x, y: x * y, array)
diverse = lambda array: prod([array[i] - i for i in xrange(len(array))]) if len(array) > 0 else 0

Now I just have to figure out what to do so as not to enumerate all $$2^{100} > 10^{30}$$ subsequences in the worst-case scenario. :-?

(I’ll update this when I’ve found the solution).

3) Turn on the Lights

This one is pretty much straight out of the game Lights Out, with the only twist that one has to turn on all the lights instead of turning them off. I found this out only well after the round ended, and it makes the puzzle a bit of a “duh” for me; nevertheless it is worth doing, since there are some optimization choices that are not trivial.

Here is the method I used to get the solution, which follows most of the steps outlined at the above link. Let’s call the button in row i and column j [i, j].

  1. While importing the input, turn every “on” into “off” to get the right setting for this problem.
  2. Turn off every light down to the bottom row. This is done by pressing the button at [i, j] if [i-1, j] is on. Save all the buttons that are pressed.
  3. Check the state of the last row at that point and keep it as a “target”.
  4. Starting from a blank puzzle (all lights off), turn on a single light in the first row and propagate it down just like in step 2. Save the last row. Do this for every light in the first row, which creates the “base vectors” of the solutions we seek.
  5. We have to find the combination of those base vectors that equals the above “target”. This could be done (I suppose) with a Gaussian-elimination kind of method, but that’s just too much to implement for this little puzzle. According to the specs the maximum number of columns is 18, thus there are at most $$2^{18}=262144$$ different configurations of lights. That is not too bad: with a fast comparison method one can do a brute-force search, which is what I did. I also made use of a binary representation of the light settings, e.g. switching lights can be done by XOR-ing the light pattern and the switch pattern (see the sketch after this list).
  6. If a match was found in the previous step, do the switching in the first line according to that result and do the propagation again. The final result should be an all-off board. Save all the button presses again.
  7. Check the pressed buttons. Since pressing a button twice is the same as not pressing it at all, a button pressed N times is equivalent to one pressed N mod 2 times. Sum up all those presses; the sum has to be less than (rows * columns).
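
Here’s a sketch of the bitmask representation I mean, with each row stored as an integer whose bit j is the light in column j (the helper is my own illustration, not my actual contest code):

def press_row(rows, i, presses, width):
    # `presses`: bitmask of the buttons pressed in row i. A press toggles
    # the button's own cell, its left/right neighbours in the same row,
    # and the cells directly above and below. Toggling is XOR, so
    # overlapping neighbour effects correctly cancel out in pairs.
    mask = (1 << width) - 1
    rows[i] ^= (presses ^ (presses << 1) ^ (presses >> 1)) & mask
    if i > 0:
        rows[i - 1] ^= presses
    if i + 1 < len(rows):
        rows[i + 1] ^= presses

With this, the propagation of step 2 is just press_row(rows, i, rows[i - 1], width) for each row from the second one down, and comparing candidate bottom rows in step 5 is a single integer comparison.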

This is probably not the quickest method due to the brute-force search, but it should be good enough for a programming competition, where the speed of programming matters more than the speed of the program.

I do have one big complaint, which I mentioned in the intro: the test file downloaded from Facebook did not follow the input specifications. In the specs, every puzzle is a single line, with the rows of the puzzle separated by whitespace. In the downloaded file, every row of every puzzle is on its own line.

The md5sum of my solution for the downloaded input file is 1e38422048a6aa9aeb007955d8b66f46.