
The Scrappiest Little Prototype

It’s been a while since I’ve noodled around with the ‘physical’ side of computers. Mostly I work on the web, but sometimes the web isn’t the right tool. For my day at The Small Museum I wanted to use one of the Raspberry Pis I had lying around to do something fun.

I came in last Thursday, when the object of focus was the Colossal Foot. It made me think of stepping on things, so I thought I’d play with a light sensor to see what could happen as you stepped on it. Various hitches (slow connections, fiddly micro SD cards, Linux going awry) meant frustratingly little happened on Thursday.

Today I wanted to get something working quickly, especially as we had some young visitors coming in who weren’t going to be interested in code, just fun things that work. I hit Google in search of other people’s code that I could glue together into a Frankenstein’s monster.

The first thing I found was this Adafruit tutorial: https://learn.adafruit.com/basic-resistor-sensor-reading-on-raspberry-pi/basic-photocell-reading. It let me read a light sensor from the Pi.
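The tutorial’s trick is RC timing: no analogue-to-digital converter needed, just a capacitor in series with the photocell, and you count how long the capacitor takes to charge. Here’s a minimal sketch of the idea (the pin number and timings are illustrative, not the tutorial’s exact code):

```python
import time
import RPi.GPIO as GPIO

PIN = 18  # GPIO pin at the photocell/capacitor junction (assumed wiring)

def rc_time(pin):
    """Count how long the capacitor takes to charge through the photocell."""
    # Drain the capacitor by pulling the pin low
    GPIO.setup(pin, GPIO.OUT)
    GPIO.output(pin, GPIO.LOW)
    time.sleep(0.1)
    # Switch to input and count loop iterations until the pin reads high;
    # more light = less resistance = faster charge = smaller count
    GPIO.setup(pin, GPIO.IN)
    count = 0
    while GPIO.input(pin) == GPIO.LOW:
        count += 1
    return count

GPIO.setmode(GPIO.BCM)
try:
    while True:
        print(rc_time(PIN))
except KeyboardInterrupt:
    GPIO.cleanup()
```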

The Pi is a ‘real’ computer, significantly more powerful than the Arduinos I’ve built with before. How on earth could I plug in components and read them? Turns out it’s two components and a dozen lines of code. Amazing.

As usual the open source community had done the hard bit (plugging together the scripting language and the hardware), so we quickly had a streaming list of numbers on my laptop showing the amount of light hitting the photocell.

At this point old habits kicked in, and I had that code pushing the numbers to a tiny Sinatra web server running on my laptop, with some jQuery in the browser consuming the numbers and scaling an image of a foot in a browser window. It was quick and dirty and it worked. At least for a few minutes, before the browser got confused, ran out of memory, or drifted out of sync with the updates.
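The Pi side of that was only a few lines; something like this sketch, where the laptop’s address and the route are invented for illustration (port 4567 is just Sinatra’s default):

```python
import time
import requests

# Hypothetical endpoint on the laptop's Sinatra server; the real
# address and route will differ.
SERVER = 'http://192.168.0.10:4567/reading'

def stream_readings():
    # rc_time() and PIN are from the photocell sketch above; the jQuery
    # in the browser polled the same Sinatra app for the latest number.
    while True:
        requests.post(SERVER, data={'value': rc_time(PIN)})
        time.sleep(0.1)
```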

So I threw that out and went looking for some image manipulation code. I don’t have a ‘graphical’ environment running on the Pi, so I decided to stick in 80s hacker land – green text on a black background. Which meant our foot would have to be implemented in ASCII art. 80s indeed.

I came across this post: https://www.hackerearth.com/notes/beautiful-python-a-simple-ascii-art-generator-from-images/

After installing a couple of Python libraries I was away; I could copy a JPEG of a foot silhouette over to the Pi and spit out cool ASCII art feet in the terminal window. But I needed to hook it up to the code which was listening to the light sensor. I fleshed out the code snippet and tweaked the ASCII characters used for the different shades of grey in the image. Now we had different sized ASCII feet on the command line. Almost there!
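The generator itself is a neat little trick: shrink the image, convert it to greyscale, and map each pixel’s brightness onto a ramp of characters. Roughly like this, with a made-up character ramp and filename standing in for the tweaked real ones:

```python
from PIL import Image

# A dark-to-light character ramp; the real one got fiddled with by hand.
CHARS = ['#', 'X', '%', '*', '+', '=', '-', ':', '.', ' ']

def image_to_ascii(path, width=60):
    img = Image.open(path).convert('L')  # greyscale
    # Halve the height to compensate for terminal characters being tall
    height = max(1, int(width * img.size[1] / img.size[0] / 2))
    img = img.resize((width, height))
    pixels = list(img.getdata())
    rows = []
    for y in range(height):
        row = pixels[y * width:(y + 1) * width]
        # Map each 0-255 brightness onto one of the ramp characters
        rows.append(''.join(CHARS[p * len(CHARS) // 256] for p in row))
    return '\n'.join(rows)

print(image_to_ascii('foot.jpg', width=40))
```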

Then it was just a matter of gluing the two bits of code together, the unholy result of which you can see over on GitHub: https://github.com/goodformandspectacle/light-sensor-pi/blob/master/foot_light.py
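The glue boils down to a loop that turns each light reading into a rendering width. The scaling constant below is a made-up stand-in for the calibration in the real script:

```python
import time

# Reuses rc_time(), PIN and image_to_ascii() from the sketches above.
while True:
    reading = rc_time(PIN)
    # Brighter light -> faster charge -> smaller count -> bigger foot.
    width = max(10, min(78, 4000 // max(reading, 1)))
    print('\033[2J')  # crude terminal clear between frames
    print(image_to_ascii('foot.jpg', width=width))
    time.sleep(0.2)
```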

And here’s a video of the final result…

So what’s the lesson?

Prototyping is so often a matter of gluing things together. Tom Armitage has written and talked about this much better than I can. It is fascinating to dip my toe back in and see how far things have moved in a tiny amount of time.

Roll on more experiments with small computers in The Small Museum.


Small Museum for Smalls

I came to The Small Museum as a visitor today, along with a couple of little visitors.

Henry checked out the not-so-colossal foot. We tried to squeeze it into some size 5s, but sadly it didn’t fit!

Arthur put the objects in height order… he remembered what George had said about the Nandi Bull being bigger than a tree, so guessed it was the biggest.


Felix had them enthralled with his sensor, which made a picture of the foot grow and shrink on the screen…


And, like everyone else, they signed the visitors book.


Thanks for making us so welcome!


Day 7: Playing with Scale

It’s a bit odd having a set of 3D prints of objects whose printed sizes don’t scale with reality. I mean, the biggest prints aren’t of the biggest actual objects. In fact, our House post is the tallest thing in reality, but it’s almost the smallest print. I’d also thought from the beginning that it would be funny to print the Colossal Foot at double the size of everything else, just because it was called colossal. Turns out it’s the shortest thing in reality.

That led us to thinking about scale today, as our object of focus was the foot.

Tom found a fantastic augmented reality app called Augment, which I loaded onto my iPad. Online, you configure a “tracker” image that your iPad will recognise easily and connect it with a 3D model. You can also specify the 3D model’s actual dimensions, so that’s what we did with the Colossal Foot, and now, at the eleventh hour, we’re popping in the House post, which is 2.5 metres tall.

Here’s the (garish) tracker image being found…


And some screenshots of what turns up…


And a funny video of Tom and me getting stuff running for the first time. What a pair of giggling dummies!

In the end, it was quite a simple and visceral experience, a really nice way to get a feel for the size of the thing if you weren’t able to visit the museum.


Day 7: and so it begins

I must admit to a slight fatigue at this point. But along with that comes a real and new enjoyment of the challenge of finding and illustrating a new story or thing each day.

Yesterday, as Harriet and I were chatting to a couple of visitors all the way from Suffolk, it dawned on me just how much our actual output has been governed by both the semi-random set of objects we selected to print and the shape of the room and tables. The whole short practice has sprung from those two things. It’s interesting, too, to note that the explorations themselves have gravitated towards the history of the British Museum’s collection itself, and not especially the features of the objects. That’s a growing area of personal research interest on my part: how big museums have come to be, and the characters who formed them. Henry Salt, from Day 6, exemplifies the kinds of slightly shadowy figures behind these incredible collections.

Bye bye, Henry Salt.

Hello, Colossal Foot.


Day 5: Video documentation

Amidst the wires and brains, we ended up making two main things yesterday. First, a way for the Museum in a Box to recognise the objects in it, in a very simple form. We stuck RFID stickers on each object, attached a .WAV file to each tag, and then wrote a little magic dust to play the right .WAV for each object. (You can hear the dulcet tones of volunteer helper and archivist to the stars, Geoff Browell, describing Hathor and the Colossal Foot.) You can see what it was like here:
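The magic dust amounts to a lookup from tag ID to audio file. A sketch of the shape of it, assuming a serial RFID reader on /dev/ttyUSB0 (the tag IDs, filenames, reader and port are all invented for illustration):

```python
import subprocess
import serial  # pyserial

# Made-up tag IDs and filenames; the real IDs come from the stickers.
TRACKS = {
    '0400112233': 'hathor.wav',
    '0400445566': 'colossal_foot.wav',
}

reader = serial.Serial('/dev/ttyUSB0', 9600)  # assumed reader and port

while True:
    tag_id = reader.readline().strip().decode()
    wav = TRACKS.get(tag_id)
    if wav:
        # aplay ships with the Pi's ALSA utilities
        subprocess.call(['aplay', wav])
```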

Secondly, we took the Rosetta Stone as our object of focus, and worked on making it a physical trigger to hear the text on the actual stone in three slightly more modern languages: English, Greek and Arabic. Voila:
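One simple way to wire that up (a guess at the mechanics, not necessarily how our trigger actually worked, and with invented filenames) is to cycle through the three recordings each time the stone’s tag is read:

```python
import itertools
import subprocess

# Hypothetical: each read of the Rosetta Stone's tag plays the next
# language in turn.
ROSETTA = itertools.cycle(
    ['rosetta_english.wav', 'rosetta_greek.wav', 'rosetta_arabic.wav'])

def play_rosetta():
    subprocess.call(['aplay', next(ROSETTA)])
```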
