Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Computational Thread Art, published by TheMcDouglas on August 7, 2023 on LessWrong.
This post describes the iterative process I went through while creating the thread art which you can see featured on my website. This is also crossposted to my personal blog.
Black & White Algorithm
The basic version of the algorithm is pretty straightforward. The image is rescaled so that white equals zero, and black equals one. A bunch of random lines are generated, and the line which goes through the darkest pixels on average (i.e. the highest average value per pixel) is chosen. The value of every pixel along this line is decreased slightly (i.e. the image is made lighter) and the process is repeated, with each new batch of random lines constrained to start where the previous one finishes. Because each line only changes the brightness by a small amount along a very short width, this process gradually builds up gradients, and after a few thousand lines the full image emerges.
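The loop described above can be sketched in a few lines of numpy. This is a minimal illustration, not the author's actual code: the hook layout, the sampling-based `line_pixels` helper, and the `lightening` amount are all assumptions made for the sketch.

```python
import numpy as np

def line_pixels(p0, p1, n=50):
    # Approximate the pixels a line covers by sampling n points along it
    # (assumption: the real code uses a proper line-rasterisation routine).
    rows = np.linspace(p0[0], p1[0], n).round().astype(int)
    cols = np.linspace(p0[1], p1[1], n).round().astype(int)
    return rows, cols

def choose_next_line(image, start, candidate_ends):
    # image is rescaled so white = 0 and black = 1; pick the candidate end
    # whose line goes through the darkest pixels on average.
    scores = [image[line_pixels(start, end)].mean() for end in candidate_ends]
    return candidate_ends[int(np.argmax(scores))]

def draw_line(image, p0, p1, lightening=0.15):
    # Lighten the pixels under a drawn line slightly, so that later lines
    # are steered away from areas that are already covered.
    image[line_pixels(p0, p1)] -= lightening
```

Repeating `choose_next_line` and `draw_line` a few thousand times, with each new line starting where the previous one finished, is what gradually builds up the image.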
The very first image I ever made was of my mum, for her 50th birthday, and it turned out surprisingly well given how basic the algorithm was at that point.
Computational efficiency
Initially, each piece would take about 6 hours to run, because I had no understanding of things like computational efficiency. About a month after making my first pieces, I realised that numpy's sum function was much faster than the built-in Python one, and that I could store line coordinates in a dictionary rather than recomputing them on the fly. Together, these changes reduced the time for each piece from 6 hours to just under 10 seconds (I wish I was joking).
Algorithm improvements
There were a few tweaks to this algorithm which made it work slightly better. For example:
Darkness penalty - rather than just drawing the lines which went through the darkest pixels on average, I introduced a penalty for drawing too many lines through an area.
Importance weighting - the penalty on each pixel was scaled by some value between zero and one. This allowed me to improve accuracy in some areas (e.g. facial features) at the expense of less important areas (e.g. the background). I also made this different depending on whether the value of the pixel was positive or negative - this allowed me to specify more complex behaviours like "don't draw any lines through the whites of the eyes".
When these tweaks were all combined, each line was chosen to minimize the penalty function:
where ω± are the importance weightings for the positive and negative versions of the image (i.e. for whether the pixel value is positive or negative), p are the pixel values (after previous lines have been drawn), D is the size of the darkness penalty (usually between zero and one), and N is the weighted length of the line (i.e. the sum of the importance weightings of the pixels it passes through).
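Since the formula itself doesn't render in this version of the post, here is one plausible reading of the description above as code. This is an assumption-laden sketch, not the post's exact formula: it treats negative pixel values as over-drawn areas, rewards remaining darkness, charges the darkness penalty D on over-drawing, and normalises by the weighted line length N.

```python
import numpy as np

def line_penalty(pixels, w_pos, w_neg, D):
    # pixels: current values along a candidate line (positive = darkness
    #   still to capture, negative = already over-drawn).
    # w_pos / w_neg: importance weightings (each between 0 and 1) applied
    #   when the pixel value is positive / negative respectively.
    # D: size of the darkness penalty, usually between 0 and 1.
    pos = np.clip(pixels, 0, None)        # remaining darkness
    neg = np.clip(-pixels, 0, None)       # amount of over-drawing
    N = np.where(pixels >= 0, w_pos, w_neg).sum()  # weighted line length
    if N == 0:
        return 0.0
    # Lower is better: darkness captured reduces the penalty, over-drawing
    # (scaled by D) increases it.
    return (-(w_pos * pos).sum() + D * (w_neg * neg).sum()) / N
```

Setting ω− to 1 over the whites of the eyes, say, makes any over-drawing there expensive, which is exactly the "don't draw lines through the whites of the eyes" behaviour described above.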
This allowed me to create more complex images. For instance, the following nightmare-fuel image (from The Shining) wouldn't have been possible without these tweaks, since the simpler version of the algorithm would do things like draw horizontal lines through the image, or draw over the whites of the eyes.
I also adapted this algorithm in a super hacky way to create multicoloured images. I could just use editing software to create different black and white versions of the image (one per thread colour), generate lines for each using the standard method, then overlay them. This is how I created the David Bowie image, which is my personal favourite of all the black and white ones:
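The overlay hack can be sketched like this. Note the post describes doing the separation by hand in image-editing software; splitting the RGB channels, as below, is just a stand-in assumption to show the shape of the pipeline.

```python
import numpy as np

def colour_targets(rgb_image):
    # Turn one RGB photo into several greyscale targets, one per thread
    # colour, each using the same convention as the B&W algorithm:
    # 0 = no thread needed, 1 = maximum coverage. (Assumption: channel
    # splitting; the post used hand-made versions in editing software.)
    return {
        name: rgb_image[..., idx].astype(float) / 255.0
        for idx, name in enumerate(["red", "green", "blue"])
    }
```

Each target is then fed through the standard black-and-white line algorithm independently, and the resulting sets of lines are overlaid in a fixed colour order.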
This was all well and good, but what I really wanted was to create "full colour images", i.e. ones which blended several colours together & actually looked like the photo they were based on, rather than just using colours in a narrow way.
Full colour images
The algorithm I used for monochrome images basically just worked straight away (although I continued to improve it as I made m...