Have you ever seen one of Mondrian's Tableaux and thought, "mmmm, I could paint that"? Well, this tutorial explores an AI method to generate many!
Let's start by examining these handcrafted examples:
We can generate a large number of Tableaux like these ones in 4 simple steps:
There will be at least 4 cells and at most 6 cells (in the case where both rows are split into three columns). This means that we can code these tableaux with 9 variables, where 3 variables (X1, X2, X3) determine the structure of the tableau and 6 variables (X4, X5, X6, X7, X8, X9) code for the cell colors:
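As a purely hypothetical illustration (the exact meaning of each value is fixed by the generator below), a single tableau is then just a vector of nine values that splits into a structure part and a color part, for example in Matlab:

% Hypothetical painting vector: X1-X3 encode the structure, X4-X9 the cell
% colors, with 5 acting as the dummy value for cells that do not exist.
painting  = [2 2 3 1 3 2 4 1 5];   % illustrative values only
structure = painting(1:3);         % X1, X2, X3
colors    = painting(4:9);         % X4, ..., X9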
This program exhaustively generates all feasible configurations in this notation (X1...X9). From the terminal:
$ java -jar allPaintings.jar all_paintings.csv
Even with this simplification of Mondrian's tableaux, 18433 tableaux are created! Unfortunately, most of them (like the examples below) do not look great:
After examining the tableaux, we define a set of arbitrary rules that help us discard "bad-looking" tableaux (note that this step is completely subjective). We discard every tableau that satisfies at least one of the following conditions:
To discard the bad-looking paintings according to this set of rules, please download this other executable and run:
$ java -jar discardPaintings.jar all_paintings.csv good_paintings.csv bad_paintings.csv
After filtering the tableaux, we are left with 648 great tableaux! Below we show 4 nice paintings:
Let's say that we do not want to store all 648 tableaux. Instead, we want to learn the patterns that characterize these "good" tableaux, and generate new aesthetic tableaux on the fly. We can learn the structure and parameters of a Bayesian network from which we can sample as many tableaux as we want! For this part we use the Bayes Net Toolbox for Matlab by Kevin Murphy. You can download the Matlab code for the following steps from here.
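Before running the learning steps below, the selected paintings have to be loaded into the cases matrix they rely on. A minimal sketch, assuming good_paintings.csv stores one painting per row with the nine values X1,...,X9 already in the 1-based encoding expected by the toolbox:

data  = csvread('good_paintings.csv');   % 648 rows, one painting per row
cases = data';                           % BNT expects a nodes-by-cases matrix: cases(i,m) is the value of Xi in painting m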
Dependencies between variables can be represented graphically with a Directed Acyclic Graph (DAG), where each node corresponds to one of the variables and a directed edge indicates that the child variable depends on its parent.
The goal of this step is to learn a DAG that captures the conditional dependencies between the variables X1,...,X9. We help the process by manually specifying the number of discrete values (choices) each variable can take and an ordering of the variables X1,...,X9, and then run the K2 algorithm to learn the structure of the DAG from the 648 "good" paintings:
node_sizes = [2 3 3 5 5 5 5 5 5];   % number of possible values for X1,...,X9
order = [1 2 3 4 5 6 7 8 9];        % node ordering X1,...,X9 required by K2
dag_k2 = learn_struct_K2(cases, node_sizes, order, 'max_fan_in', 8);
The obtained Directed Acyclic Graph (DAG) captures well the hierarchy of our representation. In fact, X8 and X9 (nodes 8 and 9) both depend on X2 and X3 (nodes 2 and 3). This is exactly what we should expect, since X2 and X3 determine the number of cells of the painting, and if the final number of cells is 4 (or 5), then the color assigned to X8 (and X9) will be the dummy value 5.
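Since the learned structure is just a 9x9 adjacency matrix, this can be verified with plain Matlab indexing; for example, listing the parents of node 8 (X8):

% dag_k2(i,j) == 1 means the learned DAG contains an edge Xi -> Xj
find(dag_k2(:,8))'   % should return [2 3] if X8 indeed depends on X2 and X3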
We can now create a Bayesian network with the generated DAG:
bnet_k2 = mk_bnet(dag_k2, node_sizes, 'names', {'X1','X2','X3','X4','X5','X6','X7','X8','X9'}, 'discrete', 1:9);
We now compute the maximum likelihood parameters of the Bayesian network with the 648 "good" paintings:
bnet_k2 = learn_params(bnet_k2, cases);
Once the parameters of the model are learned, we can generate as many samples as we want with:
sample = sample_bnet(bnet_k2);
We sample 10000 paintings with replacement and select the 30 most frequent ones.
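A minimal sketch of this sampling step, assuming the 30 most frequent distinct samples are written to a file named generated_paintings.csv for the HTML generator used below:

nsamples = 10000;
samples  = zeros(nsamples, 9);
for s = 1:nsamples
    samples(s,:) = cell2mat(sample_bnet(bnet_k2))';   % one sampled painting per row
end
[paintings, ~, idx] = unique(samples, 'rows');         % distinct paintings
counts = accumarray(idx, 1);                           % how many times each one was sampled
[~, freq_rank] = sort(counts, 'descend');
top30 = paintings(freq_rank(1:min(30, numel(freq_rank))), :);
csvwrite('generated_paintings.csv', top30);            % input for generateHTMLs.jar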
To generate the html version of the generated paintings, download this executable and run:
$ mkdir generated_htmls
$ java -jar execs/generateHTMLs.jar generated_paintings.csv generated_htmls
We show the 30 paintings below. We did it!
This project was developed by Ignacio Arnaldo (@ignacioarnaldo)