It’s been a while since I’ve noodled around with the ‘physical’ side of computers. Mostly I work on the web, but sometimes it’s not the right tool. For my day at The Small Museum I wanted to use one of the Raspberry Pis I had lying around, to do something fun.
I came in last Thursday, when the object of focus was the Colossal foot. It made me think of stepping on things… I thought I'd play with a light sensor to see what could happen as you stepped on it. Various hitches (slow connections, fiddly micro SD cards, Linux going awry) meant frustratingly little happened on Thursday.
Today I wanted to get something working quickly, especially as we had some young visitors coming in who weren’t going to be interested in code, just fun things that work. I hit Google in search of other people’s code that I could glue together into a Frankenstein’s monster.
This was the first thing I found: https://learn.adafruit.com/basic-resistor-sensor-reading-on-raspberry-pi/basic-photocell-reading. This let me listen to a light sensor with the Pi.
The Pi is a ‘real’ computer, significantly more powerful than the Arduinos I’ve built with before. How on earth could I plug in components and read them? Turns out it’s two components and a dozen lines of code. Amazing.
As usual the open source community had done the hard bit (of plugging together the scripting language and the hardware), so quickly we had a streaming list of numbers on my laptop showing the amount of light hitting the photocell.
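(In case you're curious, the trick from that tutorial looks roughly like this — a sketch rather than the exact code, and the GPIO pin number is just an example. The Pi has no analogue input, so you time how long a capacitor takes to charge up through the photocell: less light means a bigger count.)

```python
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)

def rc_time(pin=18):  # pin 18 is just an example
    count = 0
    # Drain the capacitor by pulling the pin low for a moment
    GPIO.setup(pin, GPIO.OUT)
    GPIO.output(pin, GPIO.LOW)
    time.sleep(0.1)
    # Then count how long it takes to charge back up through the photocell
    # and read as HIGH; darker = higher resistance = bigger count
    GPIO.setup(pin, GPIO.IN)
    while GPIO.input(pin) == GPIO.LOW:
        count += 1
    return count

while True:
    print(rc_time())
```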
At this point old habits kicked in, and I had that code pushing the numbers to a tiny Sinatra web server running on my laptop, with some jQuery in the browser consuming the data and scaling an image of a foot in a browser window. It was quick and dirty and it worked. Or at least it did for a few minutes, before the browser got confused/ran out of memory/the updates and browser drifted out of sync.
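(For the record, the Pi-side half of that throwaway version was roughly this shape — a sketch, not the real code; the laptop address, port and field name are made up, and it assumes the rc_time function from the snippet above.)

```python
import time
import requests  # pip install requests

# Made-up address for the little web server on the laptop
ENDPOINT = "http://192.168.0.10:4567/reading"

while True:
    level = rc_time()  # light reading, from the sketch above
    requests.post(ENDPOINT, data={"level": level})
    time.sleep(0.5)
```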
So I threw that out and went looking for some image manipulation code. I don’t have a ‘graphical’ environment running on the Pi, so I decided to stay in 80s hacker land – green text on a black background. Which meant our foot would have to be implemented in ASCII art. 80s indeed.
I came across this post: https://www.hackerearth.com/notes/beautiful-python-a-simple-ascii-art-generator-from-images/
After installing a couple of Python libraries I was away; I could copy a JPEG of a foot silhouette over to the Pi and spit out cool ASCII art feet in the terminal window. But I needed to hook it up to the code that was listening to the light sensor. I fleshed out the code snippet and tweaked the ASCII characters used for the different shades of grey in the image. Now we had different-sized ASCII feet on the command line. Almost there!
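(The gist of that generator, as I adapted it, goes something like this — a sketch rather than the exact code; the character ramp and the foot.jpg filename are just examples.)

```python
from PIL import Image  # pip install Pillow

# Darkest to lightest; tweak these to taste, as I did for the foot
ASCII_CHARS = ['#', '@', '%', '+', '=', ':', '-', '.', ' ']

def image_to_ascii(path, width=60):
    img = Image.open(path)
    # Terminal characters are taller than they are wide, so squash the height
    height = int(width * img.size[1] / img.size[0] * 0.5)
    img = img.resize((width, height)).convert('L')  # 'L' = greyscale
    pixels = list(img.getdata())
    # Map each grey value (0-255) to one of the characters above
    chars = [ASCII_CHARS[p * len(ASCII_CHARS) // 256] for p in pixels]
    rows = [''.join(chars[i:i + width]) for i in range(0, len(chars), width)]
    return '\n'.join(rows)

print(image_to_ascii('foot.jpg', width=40))
```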
Then it was just a matter of gluing the two bits of code together, the unholy result of which you can see over on GitHub: https://github.com/goodformandspectacle/light-sensor-pi/blob/master/foot_light.py
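(The glue is basically a loop: read the sensor, turn the reading into a width, wipe the terminal and redraw the foot. Something like this, assuming the rc_time and image_to_ascii sketches above — the scaling numbers here are a guess; the real ones are in foot_light.py.)

```python
import os
import time

while True:
    level = rc_time()
    # Less light on the sensor (a foot hovering over it) means a bigger count,
    # so map bigger counts to a wider foot; clamp to something terminal-sized
    width = max(10, min(80, level // 50))
    os.system('clear')  # wipe the screen before redrawing
    print(image_to_ascii('foot.jpg', width=width))
    time.sleep(0.2)
```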
And here’s a video of the final result…
So what’s the lesson?
Prototyping is so often a matter of gluing things together. Tom Armitage has written and talked about this much better than I can. It is fascinating to dip my toe back in and see how far things have moved in a tiny amount of time.
Roll on more experiments with small computers in The Small Museum.