Blackbox: Final Analysis

Posted by Matthew Simoneau

Blackbox: Final Analysis

by Alan Chalker

Test Suite Probing
Case Probing
Final Submissions
Others’ ‘Hacking’
Partnering Offer


Unlike many of the past contests’ final analysis pages, this analysis isn’t going to focus on the details of the winning entry’s algorithm, but rather on the various entries and procedures surrounding all the ‘probing’ that went on during this contest. While probing itself is somewhat controversial amongst the competitors, it is not prohibited in the rules like many other things are. I think a detailed discussion of what transpired will be enlightening to some and help facilitate improvements to future contests to make probing less of a factor.

As pointed out on the Contest Evolution page:

“The fifth entry to take the lead, Nick Howe’s Decimator2, returned the exact answer for every problem. So well before we came out of Darkness, the challenge was to reduce the beam use.”

Time also wasn’t much of an issue, since the majority of the leading entries throughout the contest fell within the 8–12 second range. While algorithmic improvements certainly helped to reduce beam use, the ultimate way to reduce it is via hard-coded lookup tables that contain the actual answers to specific problems.

As an aside, the “Decimator2” entry got a result of 18211, while the winning entry (“End of it all”) got a result of 11653, an improvement of 6558, or ~36%. I estimate the final lookup tables accounted for about 900 points of the improvement, or ~14% of the total improvement. Improvements in the core solver algorithm clearly accounted for the majority of the gains in score and, with a different entry submission strategy, could very well have won in the end.


Test Suite Probing

Early in daylight, Hannes Naude decided to start ‘welching’ the contest and successfully used the regexp function to execute a ‘clear beam’ (“You’ve been Welched 2!”), which gave him an impossibly good result of 0. Steve Hoelzer then decided to try to hack the highest result, and submitted several entries (“worst score ever 4” ended up with a result of 88964999999999934464). It occurred to me that information about the test suite could be encoded in the results, and that with a bit of effort someone could reconstruct some of the actual cases in the test suite over many entries. These cases could then be hard-coded into a winning entry via the use of ‘lookup tables’, which is exactly what I ended up doing.

While it would have been very convenient to be able to pass back ~20 digits worth of info for each entry, the contest team shortly thereafter patched the entry parser to limit results to only 8 digits due to some ‘hacking’ of the queue (more on that further down).

My initial probing was of the specific cases within the test suite (“Probe 1” through “Probe 20”). For the majority of the probing, I first had to establish a baseline result. This was done in “Probe 1” by just zeroing out all answers:

function solution = solver(n)
solution = zeros(n);

The result for this was 126394. Note that while fractional digits were returned, I didn’t use them because they were too dependent upon the number of beams used in later probing. “Probe 2” established the total number of cases in the test suite via this code:

function solution = solver(n)
solution = zeros(n);
for x=1:10
   beam(1);   % argument illustrative; each call increments the beamcount
end

This initial probing was very crude because it utilized calls to the beam function (which increments the beamcount) in order to increase the results. It wasn’t until a bit later that I switched to a much better method of just seeding values into the solution. The reason for the loop is that the beamcount is divided by 10 when it’s added to the result, so I wanted to be able to read it off directly.

The result of “Probe 2” was 126571, which is 177 more than the baseline “Probe 1” result. This meant that my solver function was called 177 times, because there are 177 cases in the test suite. “Probe 3” through “Probe 20” were used to determine the specific number of cases for each size ‘n’. An example is shown below:

function solution = solver(n)
solution = zeros(n);
switch n
   case 25
      for x=1:10
         beam(1);   % argument illustrative; each call increments the beamcount
      end
   case 26
      for x=1:1000
         beam(1);
      end
   case 27
      for x=1:100000
         beam(1);
      end
end

Again, calls to the beam function were used to increment the results, but this time for certain cases beam was called either 1000 or 100000 times to encode values in more significant digits. The result of this particular entry (“Probe 11”) was 206596, which is 80202 more than the baseline. This means there are 8 n=27 cases, 2 n=26 cases, and 2 n=25 cases.
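The digit-group arithmetic can be sketched as follows (Python is used here only because the decoding is language-independent; the numbers are the ones reported above):

```python
# Each beam call adds 1 to the beamcount, and the beamcount is divided by 10
# when it is added to the result, so looping 10 / 1,000 / 100,000 times per
# case shifts that case's count into its own pair of decimal digits.
baseline = 126394           # "Probe 1" result (no beam calls)
result   = 206596           # "Probe 11" result

delta = result - baseline   # 80202
n27 = delta // 10000        # each n=27 case contributed 100000/10 = 10000
n26 = (delta // 100) % 100  # each n=26 case contributed   1000/10 =   100
n25 = delta % 100           # each n=25 case contributed     10/10 =     1
print(n27, n26, n25)        # 8 2 2
```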

It was during this process that I realized how inefficient calling the beam function that many times was. While the first test (“Probe 1”) took only 2.6 seconds to run, most of the probes took much longer, with the longest (“Probe 12”) taking 81 seconds. The distribution of cases in the test suite ended up being as follows:

 n  cases    n  cases    n  cases    n  cases    n  cases
 1    0     14    8     27    8     40    1     53    1
 2    0     15    3     28    3     41    0     54    0
 3    0     16    1     29    2     42    2     55    1
 4   28     17    7     30   10     43    0     56    0
 5    3     18    2     31    5     44    0     57    0
 6    3     19    0     32    3     45    0     58    0
 7    4     20    4     33    4     46    0     59    0
 8    6     21    4     34    1     47    0     60    1
 9    6     22    8     35    1     48    0     61    0
10    7     23    4     36    3     49    0     62    0
11    5     24    7     37    3     50    0
12    5     25    2     38    3     51    0
13    4     26    2     39    2     52    0

Note that there are 7 values for n where only 1 case exists: 16, 34, 35, 40, 53, 55, 60. These were the obvious cases to begin probing since they could be easily distinguished within the solver code. While there are various techniques that could be used to differentiate between the non-singleton n cases (such as using the output of a particular beam), there wasn’t time enough in the contest for me to pursue them.

In order to prioritize amongst the cases, I then decided it would be helpful to get an estimate of the number of items in each case. I initially tried to do this in small groups again by reporting back more than 8 digits’ worth of information in the results (“Summary 1” through “Summary 13”), but by then the parsing code had been fixed to limit results to 8 digits. Therefore, I had to probe on a case-by-case basis (“Hard Way 1” through “Hard Way 22”, a reference to it being more difficult). These entries relied upon the then-leading solver to find a solution first, then extracted the needed info via these lines:

items = nnz(solution);   % count the items found by the leading solver
solution = zeros(n);
switch n
   case 4
      solution(1,1) = items*10000;   % encode the count in cell (1,1)
end

Essentially, the number of nonzero elements in the solution was multiplied by 10000 and inserted into cell 1,1 of the solution. This was much faster than repeatedly calling the beam function and had the same net effect on the result, since it would be a large ‘mistake’ in the solution. This code had the effect of summing all the elements in non-singleton cases, but that wasn’t important since I was planning on focusing on the 7 standalone cases. The results of this probing (which didn’t include the entire test suite) were as follows:

 n  items    n  items    n  items    n  items    n  items
 4   140    18     6    32   231    38    98    55    10
 5    14    25    43    34     9    39   319    60     4
 6    14    26    31    35    85    40    53
15    26    28    80    36   158    42   185
16    51    29   184    37    68    53    12
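The cell-(1,1) readout described above can be sketched like this (a Python sketch, assuming, as the text states, that a wrong value in the solution shifts the result by roughly its magnitude; the probe result shown is hypothetical):

```python
baseline = 126394                      # "Probe 1" result (all-zero solutions)

def item_count(result, baseline=baseline):
    """Recover the count packed into cell (1,1) as count*10000; rounding
    absorbs the small offset contributed by the case's true item values."""
    return round((result - baseline) / 10000)

# hypothetical probe result: a case with 14 items moves the score by ~140000
print(item_count(baseline + 140000))   # 14
```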

The n=60 case was clearly the best one to start with, since it only had 4 items according to the probing. Starting with this case also had an interesting effect: it reduced the number of general beam calls by more than any other case would have, since it was the largest n (i.e., a minimum of 60 beams was needed to probe that case).

The only other general probing I did was to try to determine the largest items that existed and in which cases they existed (“sense 1” through “sense 4”). These ended up being 7 instances of the value 98 across 2 separate cases. I didn’t end up having time to determine which specific n values those cases belonged to.


Case Probing

My initial case probing focused on the n=60 case and was rather inefficient (e.g. “try 1” through “try 41”). As part of the probing I just added onto the then leading entry in order to try to ‘hide’ my probe code somewhat. The additions were as follows:

if n==2^6-4
   solution = nb(n); return; end

This code was added at the very beginning of the entry in order to ‘escape’ to the probe function for the appropriate case (n=60 is shown). The use of 2^6-4 was another weak attempt to hide some of what was going on.

function sol=nb(n)
for d=n:-1:1
   r = beam(d);   % fire a beam into row d (entry argument illustrative)
   if r~=-d       % deflected: row d holds an item
      % encode the row number into the result

The actual probe code started at the bottom row and worked its way upwards, shooting beams into each row of the board (in some cases I started at the top or went column by column). When a beam ended up not passing straight through the board, the appropriate row number was then encoded in the result as explained before. I would then examine the results and modify the code to use a high beam to destroy the item in the appropriate row, as follows:

function sol=nb(n)
for d=n:-1:1
   if r~=-d
      % variant: destroy the item in the known row with a high beam, then continue

I repeated this process many times, until I knew all the rows that had items in them. Once I knew all the rows, I could then proceed to get the column and score values using a similar technique of destroying the ones I already knew before probing an item:

function sol=nb(n)

if 1 

While this process worked, it was very time consuming and had numerous issues with it, including:

  • Confusion over converting the appropriate digits back into column, score, or row values due to having to subtract the baseline offsets
  • Weird results when either the column or row returned a negative number
  • Inability to get results from certain item geometries near the edges of the board, particularly those that cause a beam to ‘bounce back’

I gradually refined my techniques through a series of probe sets that examined all of the singleton cases (e.g. “try 1” through “try 90”, “eye1” through “eye315”, “leg1” through “leg120”, etc.). Note the names were chosen because they were short and easy to type quickly. By the time I was working on the last singleton case on Wednesday afternoon, my probe code had evolved to variations of the following:

function rots
skip = 0;           % bumped between probes to step through the items
while x~=1
   if c~=-x         % the beam was deflected by an item
      if skip==0
         % encode the item's coordinates and score into the result;
         % otherwise destroy the item, decrement skip, and continue

To probe a case, all I needed to do was increment the skip = x line to the next number and the code would handle the rest. In this example it started at the right edge of the board and shot beams into the board until it encountered an item. If it hadn’t skipped the appropriate number of items yet, it would destroy that item and continue. Otherwise it would encode the coordinates and score of the item directly into the result so that I could read it without having to subtract out baseline results or worry about beam offsets.

For example, the result of entry “eye250” was 9182776, which meant that there was an item at coordinates 9,18 with a value of 27 (this was for the n=35 case). This made the probing go a LOT faster, although I occasionally altered the encoding format so that others couldn’t pick up a consistent pattern across all my entries.
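The “eye250” encoding can be illustrated with a small sketch (Python for convenience; the two trailing digits are treated here as an opaque remainder, since per the text the exact format varied between entries):

```python
def decode_item(result):
    """Recover (row, col, score) from the leading digit pairs of a probe
    result such as 9182776, ignoring the low-order remainder."""
    result //= 100          # drop the two beam-dependent trailing digits
    score = result % 100
    result //= 100
    col = result % 100
    row = result // 100
    return row, col, score

print(decode_item(9182776))  # (9, 18, 27): item at 9,18 with value 27
```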

In the end, I was able to generate the following 7 lookup tables, containing a total of 224 items:


n=55 case (R,C S):
25,43 35 29,34 35 32,36 35 37,31 35 36,43 35
28,37 35 30,32 35 30,42 35 39,37 35 36,42 35


n=40 case:
R,C S R,C S R,C S R,C S R,C S
3,23 71 10,20 81 18,6 81 27,15 81 34,39 81
4,14 81 11,4 81 19,23 81 28,16 81 36,7 81
6,18 81 11,18 81 20,28 81 29,6 81 36,23 81
6,21 81 11,23 81 20,31 81 30,2 81 38,4 81
6,27 81 13,9 81 21,1 81 30,9 81 38,16 81
6,30 81 13,32 81 21,13 81 30,10 81 39,4 77
7,24 80 13,33 81 21,27 81 30,29 81 39,32 81
8,24 81 14,39 81 22,36 81 31,6 81
9,30 81 15,9 81 24,23 81 33,24 81
10,15 81 16,23 81 24,30 81 34,6 81
10,16 81 18,2 81 27,9 81 34,18 81


n=35 case (R,C S):
1,16 17 6,34 11 13,21 28 22,11 28 28,23 9
1,23 9 7,4 8 13,23 27 22,26 10 29,3 11
1,24 9 7,10 24 14,1 22 22,31 8 29,4 1
1,31 1 7,35 4 14,30 20 22,33 5 29,15 27
2,15 26 8,11 28 14,31 4 22,34 8 29,23 27
2,31 8 9,18 27 15,17 21 23,19 28 30,31 12
3,5 10 9,19 27 16,1 1 24,26 27 31,4 10
3,20 24 9,31 9 16,12 24 24,30 1 31,26 4
4,20 28 10,1 23 16,14 27 25,25 22 31,32 11
4,31 11 10,24 27 16,27 6 26,4 2 32,32 3
5,6 27 10,27 8 17,33 17 26,19 28 33,24 16
5,15 13 11,16 15 18,34 17 26,22 24 34,2 5
5,24 26 12,9 27 18,35 19 26,24 25 34,4 12
6,7 5 12,13 9 19,4 26 26,32 15 34,30 3
6,17 21 12,30 2 19,6 26 27,24 24 35,12 22
6,27 18 13,17 28 20,15 22 28,3 23 35,13 3
6,30 14 13,19 12 21,17 28 28,12 10


n=60 case (R,C S):
19,29 12 29,29 12 30,42 12 31,23 11


n=34 case (R,C S):
15,10 39 19,13 1 8,24 11 18,26 12 13,25 73
21,10 8 17,15 71 14,24 23 9,28 4


n=53 case (R,C S):
27,18 45 35,22 45 30,27 45 24,26 45
29,12 39 34,23 45 23,18 45 26,28 45
27,12 1 31,8 45 24,20 27 29,40 45


n=16 case (R,C S):
1,8 39 5,5 40 8,11 18 11,9 40 14,9 40
1,11 24 5,7 40 8,16 34 11,12 40 14,15 40
2,9 39 6,12 10 9,2 40 11,15 40 15,2 38
3,6 40 6,16 40 9,9 36 12,2 40 15,14 20
3,8 28 7,7 22 9,12 38 12,3 40 16,1 27
3,14 31 7,10 2 9,13 40 12,4 40 16,2 40
4,9 40 7,11 40 10,3 31 12,7 40 16,16 40
4,10 2 7,14 40 10,9 40 12,9 16
4,13 40 7,15 40 10,14 40 12,10 16
4,16 27 8,3 40 10,15 40 12,16 38
5,3 39 8,7 11 11,7 40 14,6 40



The initial probing I did on Saturday of the n=60 case was quickly noticed by others such as David Jones (e.g. “dark probe”, “black probe”), who started his own probing of the n=16 case. I assume this was based upon his analysis of the results of my test suite probing listed above. Anders Skjäl was the first to publicly discuss the concept of probing on the newsgroup thread on Sunday afternoon, which resulted in a series of discussions about the technique amongst a variety of people.

Since it was obvious that at least some people had already interpreted some of my probing results, which had required a significant amount of effort, I reluctantly decided to start trying to obfuscate my entries to ensure others wouldn’t be able to easily decipher what exactly I was probing.

My initial obfuscation was simply based upon changing the indents by a random amount on all the lines (via selecting all and using the indent hotkeys in the MATLAB editor). The principle was that the ‘diff’ functionality for comparing 2 entries would show the entire code as having changed, as seen below:

- function solution = solver(n)
- global alijoon; solution = -ones(n); alijoon=-ones(2*n+1,2*n+1,8,3); 
- djx=rand(1,13); if n==2^6-4
- solution=nb(n); return; end
- for desp = 0:2
- for rot = [1:7]
- [solution,done] = rotsolver(n,rot,solution,desp);
- if(done || all(all(solution>=0)))
- solution = max(0,solution);
- return
- end
- end
- end
+ function solution = solver(n)
+ global alijoon; solution = -ones(n); alijoon=-ones(2*n+1,2*n+1,8,3); 
+ djx=rand(1,13); if n==2^6-4
+ solution=nb(n); return; end
+ for desp = 0:2
+ for rot = [1:7]
+ [solution,done] = rotsolver(n,rot,solution,desp);
+ if(done || all(all(solution>=0)))
+ solution = max(0,solution);
+ return
+ end
+ end
+ end

I thought this worked fine for a while; however, if you knew which key lines to look at in the code, it was trivial to ignore the indentation and you didn’t need the diff views at all. After David Jones introduced the n=55 case during the 1000 character challenge, I was worried that he or others were still interpreting my results. Thus I created the following short MATLAB script:

function mixsol(filename)
fid = fopen(filename,'rt');
if fid > -1
   line = 1;
   while feof(fid)==0
      text{line} = fgets(fid);        % read the file line by line
      line = line + 1;
   end
   fclose(fid);
   linecount = 1;
   for x=1:line-1
      % append a randomly chosen line from the file as a trailing comment
      randline = ceil(rand*(line-2));
      newline = [text{x}(1:end-1),'%%',text{randline}];
      newtext{linecount} = newline;
      linecount = linecount + 1;
      if rand > .6
         % occasionally insert a comment-only line as well
         randline = ceil(rand*(line-2));
         newline = ['%%',text{randline}];
         newtext{linecount} = newline;
         linecount = linecount + 1;
      end
   end
   fid2 = fopen('output.m','wt');
   if fid2 > -1
      for x=1:length(newtext)
         fprintf(fid2,newtext{x});     % '%%' in the line prints as '%'
      end
      fclose(fid2);
   end
end

This code reads in a text file line by line, then randomly adds comments to the end of each line, using other random lines from the same file as the comment. It also randomly introduces lines filled with just comments into the mix. As an example of what this does to the readability of the file, below are the first few lines of the actual winning entry as submitted, followed by the same lines in their original form:

function solution = solver(n)%end
S=0; %V(L)=-1;
if n==10 %end
solution = zeros(n); %charge = charge+B;
%solution(nn,c) = s;
sol=ones(n).*-1; %end
rT=-1.*ones(1,n); %elseif ~c%
cT=-1.*ones(1,n); % solution=s;
%if rB(i)<0 && cB(i)==0
sT=-1.*ones(1,n); %
rL=-1.*ones(1,n); %if cR(i)<0;
cL=-1.*ones(1,n); %end
function solution = solver(n)
if n==10
solution = zeros(n);

By executing this script on every entry, they all became significantly different and very difficult to interpret. It did require a certain amount of tedious work to run it on each entry, but I personally feel that trying to automate the actual entry submission process with scripts would be ‘stepping over the line’. Of course it was still critical to keep track of what each entry was probing so that I could still interpret the results.

As an aside, if possible I recommend that the contest team make some changes to the ‘diff’ functionality to counter these techniques in the future. The best change would be for the function to first strip out all extra whitespace, line breaks, and comments before comparing the code.
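The suggested normalization can be sketched in a few lines (a Python illustration of the idea, not the contest’s actual diff code; note the naive comment-stripping would also eat ‘%’ characters inside string literals):

```python
import difflib
import re

def normalize(code):
    """Strip MATLAB-style '%' comments and collapse whitespace so cosmetic
    edits (re-indentation, appended comments) vanish before comparison."""
    lines = []
    for line in code.splitlines():
        line = re.sub(r'%.*$', '', line)   # naive: ignores '%' in strings
        line = ' '.join(line.split())
        if line:
            lines.append(line)
    return lines

def diff(a, b):
    return list(difflib.unified_diff(normalize(a), normalize(b), lineterm=''))

obfuscated = "   solution = zeros(n); %end\n%if rB(i)<0\nsol=ones(n).*-1; %x"
original   = "solution = zeros(n);\nsol=ones(n).*-1;"
print(diff(obfuscated, original))   # [] -- the two versions compare as equal
```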


Final Submissions

As always, in the final minutes of the competition there was a flurry of entries submitted to the queue. During the last contest (Blockbuster), Tim Vaughan was able to quickly submit a tweak to the winning code that Hannes and Cobus had just submitted, and thus was able to ‘steal the lead’ from them. This is a common theme due to the nature of the contest.

Since I had won the Tuesday Leap via the introduction of 4 lookup tables, and knew many people would be watching closely for my last minute entries, I decided ahead of time to submit a series of entries during the ‘end game’, with what I hoped was the best one hidden in the middle somewhere.

These entries were as follows, in chronological order:

  1. “End game?” – deliberately bad syntax, didn’t even run
  2. “Or is this the end?” – deliberately bad lookup tables
  3. “This must be the end!” – deliberately bad lookup tables
  4. “The real end!” – 2nd best entry that introduced the n=16 table
  5. “When will it end?” – 3rd best entry that introduced the n=35 table
  6. “End of it all” – winning entry with all 7 lookup tables
  7. “That wasn’t the end, this is” – deliberately bad lookup tables
  8. “The end of the ends” – deliberately bad syntax, didn’t even run
  9. “The  ultimate end” – deliberately bad syntax, didn’t even run

Entry 1 was submitted at 16:54:30, with 5 ½ minutes left until the contest ended. David Jones was the first person to resubmit it, at 16:57:25, less than 3 minutes later. Before the queue closed, variations of the first few entries were resubmitted a total of 12 times by various people. Sergey Yurgenson was able to submit a variation of entry 5 with 48 seconds to spare, which ended up in 5th place overall. Tim Vaughan (the cyclist) was able to resubmit entry 4 with only 44 seconds to spare, which ended up in 3rd place overall. Luckily, nobody saw entry 6 in time to resubmit it to the queue.


Others’ ‘Hacking’

As previously mentioned, the first ‘hacking’ started just after the contest went into daylight, when Hannes Naude was able to clear the beam count via use of a regexp. This started a whole different ‘contest’ for some of the competitors, who found some very ingenious methods for either returning information from the test suite or crashing the queue. Understanding these attempts can be very educational, and thus I’ve listed many of them below. As a reminder, the contest rules explicitly prohibit the following:

  • MEX-files
  • Java commands or object creation
  • eval, feval, inline, function handles, etc.
  • Shell escape such as !, dos, unix
  • Handle Graphics commands
  • ActiveX commands
  • File I/O commands
  • Debugging commands
  • Printing commands
  • Simulink commands
  • Benchmark commands such as tic, toc, flops, clock, pause
  • error, lasterr
  • clear, persistent, mlock

The reason many of these functions are prohibited is because they can easily be used to access items or modify the way the runcontest program executes, and thus alter the calculated score. The submission queue has a variety of procedures for checking for the presence of these functions. Thus much of the hacking centered around finding ways to sneak such functions past the filters.

Hannes’ entry “You’ve been Welched 2!” contained the following line:

regexp('i', 'i(?@clear beam;)');

In reality, the regexp wasn’t needed since the queue filters weren’t blocking use of the clear function (which is demonstrated in the “Simpler” entry), even though the rules explicitly prohibit its use. This hack just cleared the master beamcounter after each solution was found, resulting in a final beamcount of 0 and a very low score.

Hannes also attempted to directly copy the current case being evaluated via this line:

regexp('i', 'i(?@solution=evalin(''caller'',''testsuite{k}'');)');

The special dynamic regular expression operator ?@ is designed to execute the MATLAB command that immediately follows it, which in this case was an evalin call. evalin allows for the execution of a command in a specific workspace, in this case the ‘caller’ workspace, i.e. the runcontest program. Thus this command was the same as copying the contents of testsuite{k} from the runcontest workspace to the solution in the solver workspace, resulting in a perfect board match for every case.

Hannes also succeeded in crashing the queue via the use of the NaN keyword with his “Not an Entry” entry. Another attempt of his (“Research project ;-)”) at sneaking functions past the filters included:

str2num('evalc(''clear beam'')');

The reason this works is straight out of the help for str2num as follows:

“CAUTION: STR2NUM uses EVAL to convert the input argument, so side effects can occur if the string contains calls to functions. Use STR2DOUBLE to avoid such side effects or when S contains a single number.”
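The same class of loophole exists in any language whose number parser falls back to full evaluation; a Python analog of the STR2NUM/STR2DOUBLE distinction:

```python
import ast

side_effects = []
def record():
    """A function whose call is the 'side effect' a number parser shouldn't run."""
    side_effects.append('called')
    return 42

# eval-based parsing executes arbitrary code, just like STR2NUM:
eval("record()")
print(side_effects)              # ['called']

# a literal-only parser rejects it, which is why STR2DOUBLE-style parsing is safe:
try:
    ast.literal_eval("record()")
except ValueError:
    print("rejected: only literals are allowed")
```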

Marko Stefanovic and Hannes also utilized similar EVAL quirks in a variety of other built-in functions, as listed below:

  arrayviewfunc('setdatalogic', w, {1},'{@evalin}')



  inlineeval([],['error(''' str ''')'],'');

In another very clever entry (“Last Hack”), Hannes used subtraction to ‘hide’ the strings he was trying to utilize as part of executing some Java methods.
Via the use of error message construction, Marko (moggy) was able to pass a lot of information back. The first entry to utilize this was “mo2”:

w = arrayviewfunc('setdatalogic', {[]}, {1}, str1);

This entry utilized a function handle to the rethrow command in conjunction with the eval ability of arrayviewfunc to generate the message ‘hello’, which showed up in the Result field on the entry listing. Note that rethrow was already being explicitly filtered out.

In the “mo9” entry, Marko was able to encode part of a solution in the result error message via this code:

w = arrayviewfunc('setdatalogic', {[]}, {1}, str1);
format=['\n' repmat('%d ',1,size(s,1))];

The result was displayed as follows: “0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 56 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 70 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0”

Because this was very obvious to see in the chronological listing of the entries, Marko then took advantage of the fact that the entry listings generated on the chronological or complete ranking pages only show the first few characters of the results field, whereas the pages that show an entry’s code display more characters for the results. Thus he ‘hid’ the solutions at the end of the error messages, using this code (“mo10”):

w = arrayviewfunc('setdatalogic', {[]}, {1}, str1);
format=['\n' repmat('%d ',1,size(s,1)) ';'];
str=['Line: 335 Column: 4 The function "fish" was clo...'
sprintf(format,s(:)') ' n='

The result was the following: “Line: 335 Column: 4 The function “fish” was clo… 0 0 0 0 0 0 0 0 0 0 ; 0 0 0 0 0 0 0 0 0 0 ; 0 0 0 0 0 56 0 0 0 0 ; 0 0 0 0 0 0 0 0 0 0 ; 0 0 0 0 0 0 5 0 0 0 ; 0 0 0 0 0 0 0 0 0 0 ; 0 0 0 0 0 0 0 0 0 0 ; 0 0 0 0 70 0 0 0 0 0 ; 0 0 0 0 0 0 0 0 0 …”

Once the contest team blocked the use of arrayviewfunc, he switched to inlineeval (“okram6”), as shown below (and also switched usernames to okram):

function s=bb(s)
if n~=[10 17 22]
str=['n=' mat2str(n) ';' sprintf('%d,%d,',c)];
inlineeval([],['error(''' str ''')'],'');

function c = cc(s,n)
[a s v] = find(s);

This code also generated error messages with info on the board, but it stripped out all the zeros that took up too much space in the previous attempts. By adding elements to the [10 17 22] array he was able to step through the cases in the test suite in the order they appeared. The output of this code was as follows: “Error using ==> inlineeval n=30;12,31,13,31,18,31,20,7,24,27,29,28,43,12,48, …”
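Marko’s zero-stripped format is essentially a sparse (index, value) encoding of the board. A sketch (Python; the flat 0-based indexing is an assumption on my part, though it matches suite entry 10 in the results he posted):

```python
def encode_board(n, flat_board):
    """Encode only the nonzero cells of a flattened n-by-n board as
    'n=<n>;i1,v1,i2,v2,...' -- the shape of the quoted okram6 output."""
    pairs = [f"{i},{v}" for i, v in enumerate(flat_board) if v]
    return f"n={n};" + ",".join(pairs) + ","

board = [0] * 16    # a 4x4 board with two items, matching suite entry 10
board[8] = 1        # in Marko's posted results: "10: ?;4;8,1,9,3,"
board[9] = 3
print(encode_board(4, board))   # n=4;8,1,9,3,
```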

When the use of inlineeval was blocked, Marko decided to post his probing results to the newsgroup thread. Had he had more time, he clearly would have been able to quickly probe the entire test suite.

  suitenr: hash;n;index1,value1,index2,value2,...
  01: 141;10;26,56,47,5,75,70,
  02: 1462;17;5,77,6,5,8,35,9,77,10,13,13,78,44,80,64,8 ...
  03: 856;22;42,73,50,14,51,4,73,33,76,11,108,6,170,20, ...
  04: 3000;30;12,31,13,31,18,31,20,7,24,27,29,28,43,12, ...
  05: 144;14;6,8,11,14,44,13,50,6,55,1,78,12,80,12,96,1 ...
  06: 75;7;12,1,21,17,23,30,38,20,
  07: 197;33;412,38,508,38,578,38,673,12,674,38,
  08: ?;37;304,80,366,76,371,80,380,80,415,80,447,80,46 ...
  09: ?;23;33,54,53,22,59,46,68,43,72,43,86,43,93,21,10 ...
  10: ?;4;8,1,9,3,

Interestingly, nobody ended up hardcoding any of his results into a lookup table, even though he had complete tables for some of the n=10, 7, 33, and 4 cases. This is probably because part of the challenge would be determining within the solver whether you were currently on the correct board, since none of those are singleton cases (i.e. there are 7 n=10 cases in the test suite). Marko also discovered a very interesting, but probably useless, quirk in “okram3” and “okram2”: a command that enables the last error message from whichever previous entry faulted to be regenerated in the current entry.

Heinrich Acker found a simpler way to utilize error messages to pass back information (“Not again!”):

  ex_str = char(rand(1,122)*95+32);

This generates the following result: “Undefined variable “S” or class “S.6#xP<wl0MRIAd%6{CrvwbJ7mA6)`O4rlv]]aJrrc8 …” While this is a very elegant technique, it’s in violation of the rules, which prohibit the use of error messages.

Finally, rather late in the contest on Tuesday morning Magnus and Erik discovered a loophole in the beam function (“Cheat?”):

  bc = beam([]);

The code reinitialized the internal persistent board in the beam function, after solving the original case, to be a puzzle of size n=1. Its only cell was set to an item with a value equal to the negative of the current beam count. Because the beam function inherently assumes items all have positive values, it just adds an item’s value to the beamcounter when the high beam mode destroys an item. With a negative item value, this had the effect of zeroing out the beamcounter for each case.

There were several variations on this technique that were tried, including making the beamcount a very large negative number, which had the effect of making the final result also a large negative number since the grading function made the same assumptions.
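The arithmetic of the exploit is trivial once the unchecked assumption is spelled out; a sketch (Python, with hypothetical numbers):

```python
def destroy_item(beamcount, item_value):
    """High-beam accounting as described in the text: a destroyed item's
    value is added to the running beamcount with no sign check."""
    return beamcount + item_value

beamcount = 57                            # beams spent solving a case
beamcount = destroy_item(beamcount, -57)  # planted n=1 item: minus the count
print(beamcount)                          # 0 -- the case now costs nothing
```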

All in all, the ‘hackers’ were able to find a lot of loopholes in the system and kept the contest team quite busy patching those issues towards the end of the competition. Hopefully all of those changes will carry over into future contests and facilitate a return to focusing on the development of better algorithms to solve the contest problems.


Partnering Offer

On a final note, I want to mention the concept of developing teams. There is precedent for teaming in the history of the contest, such as the duo of Hannes Naudé and Cobus Potgieter. I posted a message midway through the contest in the newsgroup offering to partner with anybody who was interested in helping out with the probing effort. Two people were initially interested in the offer, David Jones and Anders Skjäl, but they ended up deciding NOT to partner with me because they were both too busy with ‘real world’ obligations.

During the discussions, there was of course an issue of whether we could ‘trust’ each other. While there isn’t really anything at stake in the competition other than pride, I can understand how this could be an issue preventing others from teaming up.

I would like to suggest that the contest team develop a mechanism by which people can ‘register’ as a team in order to eliminate the concern over trusting each other. This could be as simple as a dedicated email address which each team member sends an email to that documents their ‘enrollment’ in the team. Such a mechanism should help further the cooperative and collaborative spirit of the competition.


These postings are the author's and don't necessarily represent the opinions of MathWorks.