Various formats, from:
http://caca.zoy.org/browser/libpipi/trunk/examples/img2twit.cpp
http://caca.zoy.org/export/4891/libpipi/trunk/examples/img2rubik.c

Regarding img2twit:

My full solution can be found at http://caca.zoy.org/wiki/img2twit. It has the following features:

- Reasonable compression time (around 1 minute for high quality)
- Fast decompression (a fraction of a second)
- Keeps the original image size (not just the aspect ratio)
- Decent reconstruction quality (IMHO)
- Message length and character set (ASCII, CJK, Symbols) can be chosen at runtime
- Message length and character set are autodetected at decompression time
- Very efficient information packing

Here is a rough overview of the encoding process:

- The number of available bits is computed from the desired message length and the usable charset
- The source image is segmented into as many square cells as the available bits permit
- A fixed number of points (currently 2) is assigned to each cell, with initial coordinates and colour values
- The following is repeated until a quality condition is met (a sketch of this loop is given below):
  - A point is chosen at random
  - An operation is performed at random on this point (moving it inside its cell, changing its colour)
  - If the resulting image (see the decoding process below) is closer to the source image, the operation is kept
- The image size and the list of points are encoded in UTF-8

And this is the decoding process (a simplified reconstruction sketch also follows below):

- The image size and the points are read from the UTF-8 stream
- For each pixel in the destination image:
  - The list of natural neighbours is computed
  - The pixel's final colour is set as a weighted average of its natural neighbours' colours

What I believe is the most original part of the program is the bitstream. Instead of packing bit-aligned values (stream <<= shift; stream |= value), I pack arbitrary values that are not in power-of-two ranges (stream *= range; stream += value). This requires bignum computations and is of course a lot slower, but it gives me 2009.18 bits instead of 1960 when using the 20902 main CJK characters: 140 characters × log2(20902) ≈ 2009.18, whereas bit-aligned packing wastes the fractional bit of each character and only reaches 140 × 14 = 1960. That's three more points I can put in the data. And when using ASCII, it gives me 917.64 bits instead of 840.

I decided against a method for the initial image computation that would have required heavy weaponry (corner detection, feature extraction, colour quantisation...) because I wasn't sure at first it would really help. Now I realise convergence is slow (1 minute is acceptable, but it's slow nonetheless) and I may try to improve on that.

The main fitting loop is loosely inspired by the Direct Binary Search dithering algorithm (where pixels are randomly swapped or flipped until a better halftone is obtained). The energy computation is a simple root-mean-square distance, but I perform a 5x5 median filter on the original image first. A Gaussian blur would probably represent the behaviour of the human eye better, but I didn't want to lose sharp edges. I also decided against simulated annealing and other difficult-to-tune methods because I don't have months to calibrate the process. Thus the "quality" flag just represents the number of iterations that are performed on each point before the encoder ends.

Even though not all images compress well, I'm surprised by the results and I really wonder what other methods exist that can compress an image to 250 bytes.
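The fitting loop mentioned above is plain random hill climbing: perturb one point, re-decode, and keep the change only if the result gets closer to the source. The following is a minimal, self-contained sketch of that structure, not the actual img2twit code: the "image" is a toy 16x16 greyscale gradient and the "decoder" is a crude nearest-point rendering, just enough to show the perturb / evaluate / keep-or-revert cycle.

```cpp
// Toy sketch of the random accept/reject fitting loop (hill climbing).
// Everything here (gradient source, nearest-point rendering) is a stand-in
// chosen only to keep the example short and runnable.
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <vector>

struct Point { double x, y, value; };            // position + grey level

static const int W = 16, H = 16;

// Toy decoder: each pixel takes the value of the closest point.
std::vector<double> render(const std::vector<Point> &pts)
{
    std::vector<double> img(W * H);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            double best = 1e30, v = 0.0;
            for (const Point &p : pts) {
                double d = (p.x - x) * (p.x - x) + (p.y - y) * (p.y - y);
                if (d < best) { best = d; v = p.value; }
            }
            img[y * W + x] = v;
        }
    return img;
}

// Root-mean-square distance between two images, as in the answer's energy term.
double rms(const std::vector<double> &a, const std::vector<double> &b)
{
    double e = 0.0;
    for (size_t i = 0; i < a.size(); ++i)
        e += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(e / a.size());
}

int main()
{
    // Source image: a simple horizontal gradient, standing in for a real picture.
    std::vector<double> source(W * H);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            source[y * W + x] = x / double(W - 1);

    // A handful of points with random initial coordinates and values.
    std::vector<Point> pts(8);
    for (Point &p : pts) {
        p.x = std::rand() % W;
        p.y = std::rand() % H;
        p.value = std::rand() / double(RAND_MAX);
    }

    double best = rms(render(pts), source);
    for (int iter = 0; iter < 20000; ++iter) {
        Point &p = pts[std::rand() % pts.size()];
        Point saved = p;

        // Random operation: move the point a little, or change its value a little.
        if (std::rand() % 2) { p.x += std::rand() % 3 - 1; p.y += std::rand() % 3 - 1; }
        else                 { p.value += (std::rand() % 21 - 10) / 100.0; }

        double e = rms(render(pts), source);
        if (e < best) best = e;      // improvement: keep the change
        else          p = saved;     // otherwise revert it
    }
    std::printf("final RMS error: %f\n", best);
    return 0;
}
```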
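For the decoding step, here is a sketch of the "weighted average of neighbouring point colours" idea. The real program uses natural neighbour (Voronoi-based) weights; the sketch below substitutes much simpler inverse-distance weights, so it only illustrates the general shape of the reconstruction, not the actual interpolation.

```cpp
// Simplified reconstruction sketch: each output pixel is a weighted average of
// the point colours, with closer points weighing more.  Inverse-distance
// weighting is used here as a stand-in for natural neighbour interpolation.
#include <cstdio>
#include <vector>

struct Point { double x, y; double r, g, b; };
struct Pixel { double r, g, b; };

std::vector<Pixel> decode(const std::vector<Point> &pts, int width, int height)
{
    std::vector<Pixel> img(width * height);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            double wsum = 0.0, r = 0.0, g = 0.0, b = 0.0;
            for (const Point &p : pts) {
                double d2 = (p.x - x) * (p.x - x) + (p.y - y) * (p.y - y);
                double w = 1.0 / (d2 + 1e-6);     // closer points weigh more
                wsum += w; r += w * p.r; g += w * p.g; b += w * p.b;
            }
            img[y * width + x] = { r / wsum, g / wsum, b / wsum };
        }
    return img;
}

int main()
{
    // Two hypothetical points: a red-ish one on the left, a blue-ish one on the right.
    std::vector<Point> pts = { {2, 4, 1.0, 0.2, 0.2}, {13, 4, 0.2, 0.2, 1.0} };
    std::vector<Pixel> img = decode(pts, 16, 8);
    const Pixel &c = img[4 * 16 + 8];             // a pixel roughly halfway between them
    std::printf("centre pixel: %.2f %.2f %.2f\n", c.r, c.g, c.b);
    return 0;
}
```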
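To make the bitstream trick concrete, here is a minimal sketch (again, not the actual img2twit bignum code) of packing values whose ranges are not powers of two. The stream is a big integer stored as base-2^32 limbs: pushing a value multiplies the stream by the value's range and adds the value, and the decoder pops values back out with repeated division, in reverse order.

```cpp
// Sketch of non-power-of-two packing: instead of "stream <<= shift; stream |= value",
// the stream is a big integer updated with "stream = stream * range + value",
// so no fraction of a bit is wasted when a range is not a power of two.
#include <cstdint>
#include <cstdio>
#include <vector>

struct BigStream {
    std::vector<uint32_t> limbs{0};              // little-endian base-2^32 digits

    // Encoder side: stream = stream * range + value, with 0 <= value < range.
    void push(uint32_t value, uint32_t range) {
        uint64_t carry = value;
        for (uint32_t &limb : limbs) {
            uint64_t t = (uint64_t)limb * range + carry;
            limb  = (uint32_t)t;
            carry = t >> 32;
        }
        if (carry)
            limbs.push_back((uint32_t)carry);
    }

    // Decoder side: value = stream % range; stream /= range.
    uint32_t pop(uint32_t range) {
        uint64_t rem = 0;
        for (size_t i = limbs.size(); i-- > 0; ) {
            uint64_t t = (rem << 32) | limbs[i];
            limbs[i] = (uint32_t)(t / range);
            rem      = t % range;
        }
        while (limbs.size() > 1 && limbs.back() == 0)
            limbs.pop_back();                    // drop leading zero limbs
        return (uint32_t)rem;
    }
};

int main()
{
    // Hypothetical values: a coordinate in [0,13), a colour channel in [0,256)
    // and a flag in [0,3) packed without rounding any range up to a power of two.
    BigStream s;
    s.push(7, 13);
    s.push(200, 256);
    s.push(2, 3);

    // Values come back out in reverse order of packing.
    uint32_t flag   = s.pop(3);                  // 2
    uint32_t colour = s.pop(256);                // 200
    uint32_t coord  = s.pop(13);                 // 7
    std::printf("%u %u %u\n", coord, colour, flag);
    return 0;
}
```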
Edit: here is how the compression method compares with JPEG. On the left, jamoes's 536-byte picture from above. On the right, the Mona Lisa compressed down to 534 bytes using the method described here (the bytes mentioned here refer to data bytes, thus ignoring the bits wasted by using Unicode characters):