somersethouse, v1

The Scrappiest Little Prototype

It’s been a while since I’ve noodled around with the ‘physical’ side of computers. Mostly I work on the web, but sometimes it’s not the right tool. For my day at The Small Museum I wanted to use one of the Raspberry Pis I had lying around, to do something fun.

I came in last Thursday, when the object of focus was the Colossal Foot. It made me think of stepping on things…so I thought I’d play with a light sensor to see what could happen as you stepped on it. Various hitches (slow connections, fiddly micro SD cards, Linux going awry) meant frustratingly little happened on Thursday.

Today I wanted to get something working quickly, especially as we had some young visitors coming in who weren’t going to be interested in code, just fun things that work. I hit Google in search of other people’s code that I could glue together into a Frankenstein’s monster.

The first thing I found let me listen to a light sensor with the Pi.

The Pi is a ‘real’ computer, significantly more powerful than the Arduinos I’ve built with before. How on earth could I plug in components and read them? Turns out it’s two components and a dozen lines of code. Amazing.

As usual the open source community had done the hard bit (of plugging together the scripting language and the hardware), so quickly we had a streaming list of numbers on my laptop showing the amount of light hitting the photocell.
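For the record, the trick is a little RC timing circuit: drain a capacitor, then count how long the photocell takes to charge it past the GPIO pin’s ‘high’ threshold. This is a sketch from memory rather than the exact code I found, and the pin number and scaling constants are assumptions:

```python
import time

def scale_reading(count, dark=3000, bright=50):
    """Map a raw charge-time count (big = dark) to 0.0 (dark) .. 1.0 (bright)."""
    count = min(max(count, bright), dark)
    return (dark - count) / (dark - bright)

def read_photocell(pin=18):
    """Count loop iterations until the capacitor charges past the GPIO threshold."""
    import RPi.GPIO as GPIO  # only available on the Pi itself
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(pin, GPIO.OUT)   # drive the pin low to drain the capacitor...
    GPIO.output(pin, GPIO.LOW)
    time.sleep(0.1)
    GPIO.setup(pin, GPIO.IN)    # ...then switch to input and count until it reads high
    count = 0
    while GPIO.input(pin) == GPIO.LOW:
        count += 1
    return count
```

Bigger counts mean less light, so the scaling flips it round into something more intuitive.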

At this point old habits kicked in, and I had that code pushing the numbers to a tiny Sinatra web server running on my laptop, and then some jQuery in the browser consuming the numbers and scaling an image of a foot in a browser window. It was quick and dirty and it worked. Or at least it did for a few minutes, before the browser got confused/ran out of memory/the updates and browser drifted out of sync.
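The Pi side of that quick-and-dirty pipeline looked roughly like this: POST each reading at the Sinatra server on the laptop and let the browser poll and scale the image. The URL and the payload shape here are assumptions, not the real endpoints:

```python
import json
from urllib.request import Request, urlopen

def make_payload(level):
    """One light reading as a JSON body."""
    return json.dumps({"light": level})

def push_reading(level, url="http://192.168.0.10:4567/reading"):
    """POST a reading to the little Sinatra server on the laptop."""
    req = Request(url, data=make_payload(level).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return resp.status
```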

So I threw that out and went looking for some image manipulation code. I don’t have a ‘graphical’ environment running on the Pi, so I decided to stay in 80s hacker land – green text on a black background. Which meant our foot would have to be implemented in ASCII art. 80s indeed.

I came across this post:

After installing a couple of Python libraries I was away; I could copy a JPEG of a foot silhouette over to the Pi, and spit out cool ASCII art feet in the terminal window. But I needed to hook it up to the code which was listening to the light sensor. I fleshed out the code snippet and tweaked the ASCII characters used for the different shades of grey in the image. Now we had different sized ASCII feet on the command line. Almost there!
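The core of that step, roughly: greyscale the image with Pillow, shrink it, then map each pixel’s brightness to a character. The character ramp below is the sort of thing I was tweaking – this particular one is illustrative, not the ramp we ended up with:

```python
RAMP = "@#%xo;:,. "  # dark .. light

def pixel_to_char(value, ramp=RAMP):
    """Map a 0-255 grey value to an ASCII character."""
    return ramp[min(value * len(ramp) // 256, len(ramp) - 1)]

def image_to_ascii(path, width=60):
    """Render an image file as a block of ASCII art."""
    from PIL import Image  # Pillow, installed on the Pi
    img = Image.open(path).convert("L")
    # Terminal characters are roughly twice as tall as wide, so halve the height.
    height = max(1, img.size[1] * width // (img.size[0] * 2))
    img = img.resize((width, height))
    pixels = list(img.getdata())
    rows = (pixels[i:i + width] for i in range(0, len(pixels), width))
    return "\n".join("".join(pixel_to_char(p) for p in row) for row in rows)
```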

Then it was just a matter of gluing the two bits of code together, the unholy result of which you can see over on GitHub:
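In spirit the glue is just a loop: read the light level, turn it into a foot size, redraw. The real thing is on GitHub; the thresholds and names here are mine:

```python
def pick_size(level, sizes=("small", "medium", "large")):
    """Map a 0.0-1.0 light level to one of a few pre-rendered foot sizes."""
    index = min(int(level * len(sizes)), len(sizes) - 1)
    return sizes[index]

# The main loop on the Pi looks something like (hardware bits omitted):
#
#   while True:
#       level = read_light()                        # 0.0-1.0 from the photocell
#       print("\033[2J" + feet[pick_size(level)])   # clear the screen, redraw the foot
```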

And here’s a video of the final result…

So what’s the lesson?

Prototyping is so often a matter of gluing things together. Tom Armitage has written and talked about this much better than I can. It is fascinating to dip my toe back in and see how far things have moved in a tiny amount of time.

Roll on more experiments with small computers in The Small Museum.

somersethouse, v1

Small Museum for Smalls

I came to The Small Museum as a visitor today, along with a couple of little visitors.

Henry checked out the not so colossal foot. We tried to fit it in some size 5s but sadly it didn’t fit!

Arthur put the objects in height order…he remembered what George had said about Nandi Bull being bigger than a tree, so guessed he was the biggest.


Felix had them enthralled with his sensor which made a picture of the foot get bigger and smaller on the screen…


And, like everyone else, they signed the visitors book.


Thanks for making us so welcome!

somersethouse, v1

Day 6: Henry Salt

Today I found out that a whopping 1,659 objects in the British Museum collection were bought from Henry Salt. A good few more were ‘donated by’, ‘from’ and ‘purchased through’ him, so the true figure is probably over 1,700.

Today we were investigating the Goddess Hathor. She originally sat as part of the Temple of Amenhotep III but when that was ruined in an earthquake she moved to the Temple of Merenptah (a mere 8 minutes’ walk away, according to Google Maps!)

She was excavated (probably between 1824 and 1827) by Giovanni Battista Belzoni, who was working for Salt. And she was auctioned at Sotheby’s and bought by the British Museum in 1835.

Henry Salt (1780 – 1827) seems to have been a key figure for the British Museum’s Egyptian collection.

He became British Consul-General for Egypt in 1815. He sponsored excavations, carried out his own excavations and wrote on deciphering hieroglyphs.

Through his two agents (Belzoni and D’Athanasi) he built up his ‘First Collection’ within two years of arriving in Egypt. It was offered to the British Museum in 1818; it looks like the terms (£2,000) were finally agreed in 1821, or even 1823, as those dates crop up a lot.

His ‘Second Collection’ of over four thousand objects (collected 1819–1824) was sold to Charles X of France for £10,000.

His ‘Third Collection’ was auctioned off at Sotheby’s in 1835 (after his death). There were 1,083 objects on offer and the British Museum bought many of these. Hathor was one of them.

The Museum’s Egyptian galleries would look wholly different without the objects bought from Salt.

He was responsible for the paintings from the Tomb of Nebamun (around 1350 BC)…

…some of the massive Egyptian sculptures that dominate Gallery 4…

…and some of the most popular mummies, including three of the animals.

I didn’t go for a pun in the title, but can’t resist…there’s no denying, he was a real Salt Seller.

somersethouse, v1

Day 5: Video documentation

Amidst the wires and brains and things, we ended up making two main things yesterday. First, a way for the Museum in a Box to recognise the objects in it, in a very simple form. We stuck RFID stickers on each object, and attached a .WAV file to each tag, and then wrote a little magic dust to play the .WAV for each object. (You can hear the dulcet tones of volunteer helper and archivist to the stars, Geoff Browell, describing Hathor and the Colossal Foot.) You can see what it was like here:
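The ‘magic dust’ is essentially a lookup table from tag IDs to sound files. The IDs and filenames below are made up, but the shape is this:

```python
import subprocess

# Hypothetical tag IDs -> recordings (the real IDs are whatever the stickers report)
OBJECT_SOUNDS = {
    "04a1b2c3": "hathor.wav",
    "04d4e5f6": "colossal_foot.wav",
}

def sound_for_tag(tag_id):
    """Which .WAV belongs to this tag, if any?"""
    return OBJECT_SOUNDS.get(tag_id)

def play_for_tag(tag_id):
    """Play the matching recording through the Pi's command-line player."""
    wav = sound_for_tag(tag_id)
    if wav:
        subprocess.call(["aplay", wav])
```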

Secondly, we took the Rosetta Stone as our object of focus, and worked on making it a physical trigger to hear the text on the actual stone in three slightly more modern languages: English, Greek and Arabic. Voila:

somersethouse, v1

Day 5: Panorama

Yesterday we had lots of helpful visitors, which was lovely. Adrian McEwen worked on the Museum in a Box, Geoff Browell and Bridget McKenzie recorded voiceovers for some of our items, Frankie Roberto also recorded a voiceover and worked on our translation display for the Rosetta Stone (which emerged as Day 5’s object of focus), and Tom Stuart stopped by to take superb photographs like this and work on some code mugging for another project we’ll be working on soon. Thanks for this super pic, Tom!

The Small Museum panorama

museuminabox, somersethouse, v1

Day 5: Box with a brain

Today we’re giving the box a brain.

Can the box know what’s in it? Can it know when you pick something up? Can it tell you what it is?

Adrian has brought his magic box of tricks, and his own (amazing) brain.


Adrian’s Arduino kit

We are using RFID (Radio-frequency identification) tags to identify the different objects.

The RFID stickers were a bit big for most of the objects, so we mounted the objects on plinths and attached the tags to those.


Then Adrian did some magic…the RFID reader senses the tag, the Arduino reads the tag and sends it to the Raspberry Pi (some readers can speak directly to Pis, but we didn’t have one like that).
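On the Pi side that plumbing can be as simple as reading lines from the Arduino’s serial port (one tag ID per line). A sketch, assuming pyserial, with the port name and baud rate as guesses:

```python
def parse_tag_line(line):
    """Strip a line of serial output down to a tag ID; None if the line was blank."""
    tag = line.strip()
    return tag or None

def listen_for_tags(port="/dev/ttyACM0", baud=9600):
    """Yield tag IDs as the Arduino prints them over serial."""
    import serial  # pyserial, only needed on the Pi
    with serial.Serial(port, baud) as conn:
        while True:
            tag = parse_tag_line(conn.readline().decode("ascii", "ignore"))
            if tag:
                yield tag
```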


And now when you put the Rosetta Stone on the reader you can hear what it is and a translation of the text.


Now we’re going to record the names and label text for all the objects.

And we’re (well, Adrian is) going to set up an infra-red distance sensor to allow us to play different translations of the Rosetta Stone (a different language plays depending on the distance).
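The distance-to-language idea is just banding the sensor reading into ranges, one language per band. These thresholds (in centimetres) are placeholders for whatever Adrian settles on:

```python
# Hypothetical bands: nearest plays English, further Greek, further still Arabic
LANGUAGE_BANDS = [(20, "english.wav"), (40, "greek.wav"), (60, "arabic.wav")]

def language_for_distance(cm):
    """Pick a recording for a distance reading; None if nothing is near enough."""
    for limit, wav in LANGUAGE_BANDS:
        if cm <= limit:
            return wav
    return None
```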

The options are endless…

Could people add something to the object and send it on to someone else?

Can we put different boxes in close proximity and they talk to each other?

Can the box collect stories? or responses to stories? or answer questions?

museuminabox, somersethouse

The Museum Vs. Reality


Our fourth day publicly prototyping Museum in a Box was centred around the idea that what you see in an object – be it a miniature 3D print or the original object in a museum – rarely tells you the whole story.

The former gives you an idea of the shape of an artifact, the latter adds scale, detail and information on colour and material. You might even be lucky enough to have one of those museum labels nearby to give you even more data:


But this doesn’t always give you a great sense of what these objects meant or mean to the humans that made or used the object. In our research on our 4th object in focus, the Figure of Nandi, we learned that these iconic statues have been part of Hindu religious ceremonies for thousands of years – and are still celebrated today.

As I lean forward to softly hum my wishes in His ears, I feel myself detaching from the chaos of the world outside. It is like stepping into a quiet room – filled with peace, pin drop silence. […] It’s in those silent moments, I feel His power… and a connection is established – me with the divine, me with myself…
And, me with the Nandi!


The significance of Nandi bull in religion is huge. Nandi bull is the animal that is often associated with the Lord Shiva. Nandi Bull was a great devotee of the lord and would always be seen with him.


… and how much of this spiritualism and life is presented to us at the Museum?


So that was the simple idea underpinning today’s exhibition – the two sides of an object’s life: the one you are presented with in a Museum and the one that exists in real life:

Final thought: it is really fun to be thinking and making non-digital displays of these objects! I highly recommend a hands-on craft prototype day to anyone!


somersethouse, v1

Day 4: Flower power

We’ve found some amazing photos of Nandi Bull in situ.

Often he is adorned with stunning flowers in vibrant oranges and yellows.


Nandi Chamundi Mysore, By Sanjay Acharya (Own work) [CC BY-SA 3.0 or GFDL], via Wikimedia Commons

Today I had much fun embracing my inner-crafter, creating some flowers for our display.

Creating flowers for Nandi Bull

Here is the first batch.

Tissue paper flowers for Nandi

Here’s the full display.


And Tom S and George with the reveal.


Remember to whisper your wishes to Nandi, he has the ear of Lord Shiva.

somersethouse, v1

Day 4: Nandi Bull

We started this morning with what has now become our cleansing ritual, where we remove the previous day’s display, stick that on the wall, and then prepare our new day’s actual tabula rasa.


Our basic idea is to prepare a new display each day based on one or more of the objects. Today is our Nandi Bull.


Here’s his label:

Figure of Nandi
India, Deccan, 1500s
Carved Granite

The humped bull Nandi (which means ‘rejoicing’) appears at the entrance of every temple dedicated to the Hindu god Shiva, facing the god with a constant serene gaze. Symbolising strength, virility and fertility, as well as religious and moral duties, Nandi is widely recognised both as Shiva’s gatekeeper and as the animal on which he rides. Seated with his legs tucked underneath his body, this figure portrays a representation of Nandi from the southern Indian tradition.
Asia 1923.0306.1

And here’s what he looks like in the museum:


We were struck that the austere granite figure of the Nandi Bull in situ was so inert and static compared to the energy and colour and life that surrounds the bulls installed at shrines to Shiva, in real life. They’re celebrated, covered in garlands, whispered to, and surrounded by people, fire and music. The museum experience shows us nothing of that. It didn’t take us long to pick an idea where the display transforms from something bland into something with energy, colour and movement.

Here’s the work in progress. We’ll post the finished thing when it’s, well, finished.
