somersethouse, v1

The Scrappiest Little Prototype

It’s been a while since I’ve noodled around with the ‘physical’ side of computers. Mostly I work on the web, but sometimes it’s not the right tool. For my day at The Small Museum I wanted to use one of the Raspberry Pis I had lying around, to do something fun.

I came in last Thursday, when the object of focus was the Colossal Foot. It made me think of stepping on things…so I thought I’d play with a light sensor to see what could happen as you stepped on it. Various hitches (slow connections, fiddly micro SD cards, Linux going awry) meant frustratingly little happened on Thursday.

Today I wanted to get something working quickly, especially as we had some young visitors coming in who weren’t going to be interested in code, just fun things that work. I hit Google in search of other people’s code that I could glue together into a Frankenstein’s monster.

This was the first thing I found: https://learn.adafruit.com/basic-resistor-sensor-reading-on-raspberry-pi/basic-photocell-reading. This let me listen to a light sensor with the Pi.

The Pi is a ‘real’ computer, significantly more powerful than the Arduinos I’ve built with before. How on earth could I plug in components and read them? Turns out it’s two components and a dozen lines of code. Amazing.

As usual the open source community had done the hard bit (of plugging together the scripting language and the hardware), so quickly we had a streaming list of numbers on my laptop showing the amount of light hitting the photocell.
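The trick in that Adafruit guide is to time how long a capacitor takes to charge through the photocell: you poll the pin in a tight loop, and the loop count stands in for the light level (more light, lower resistance, faster charge, smaller count). A minimal sketch of just that loop, with the GPIO read abstracted as a callable so it runs anywhere — the simulated pin below is my own illustration, not Adafruit’s code:

```python
def time_rc_charge(read_pin, max_count=100_000):
    """Count polls until the pin reads high.
    Less light -> higher photocell resistance -> slower charge -> bigger count."""
    count = 0
    while not read_pin() and count < max_count:
        count += 1
    return count

# Simulated pin that reads high on the sixth poll, as a bright cell might:
polls = iter([False] * 5 + [True])
bright_count = time_rc_charge(lambda: next(polls))
print(bright_count)  # 5
```

On a real Pi the callable would be something like `lambda: GPIO.input(pin)` after discharging the capacitor, exactly as in the tutorial.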

At this point old habits kicked in, and I had that code pushing the numbers to a tiny Sinatra web server running on my laptop, with some jQuery in the browser consuming them and scaling an image of a foot in a browser window. It was quick and dirty and it worked. Or at least for a few minutes, before the browser got confused/ran out of memory/the updates and browser drifted out of sync.

So I threw that out and went looking for some image manipulation code. I don’t have a ‘graphical’ environment running on the Pi, so I decided to stay in 80s hacker land – green text on a black background. Which meant our foot would have to be implemented in ASCII art. 80s indeed.

I came across this post: https://www.hackerearth.com/notes/beautiful-python-a-simple-ascii-art-generator-from-images/

After installing a couple of Python libraries I was away; I could copy a JPEG of a foot silhouette over to the Pi, and spit out cool ASCII art feet in the terminal window. But I needed to hook it up to the code which was listening to the light sensor. I fleshed out the code snippet and tweaked the ASCII characters used for the different shades of grey in the image. Now we had different sized ASCII feet on the command line. Almost there!
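The heart of that generator is mapping each pixel’s grey level onto a dark-to-light ramp of characters – tweaking the ramp is what changed the look of the feet. Here’s a toy version of the idea, using a 2D list of grey values in place of a real JPEG (the ramp characters are my own pick, not the post’s):

```python
RAMP = "@#+-. "  # dark to light; tweak to change the feel of the art

def to_ascii(grey_rows, ramp=RAMP):
    """Map each 0-255 grey value to a character from the ramp."""
    n = len(ramp)
    return "\n".join(
        "".join(ramp[min(g * n // 256, n - 1)] for g in row)
        for row in grey_rows
    )

# A tiny 3x2 'image': black, mid-grey, white on the first row, reversed below.
art = to_ascii([[0, 128, 255], [255, 128, 0]])
print(art)
```

With a real image you’d first convert it to greyscale and resize it to terminal dimensions (the linked post uses PIL for that), then feed the pixel rows through the same mapping.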

Then it was just a matter of gluing the two bits of code together, the unholy result of which you can see over on GitHub: https://github.com/goodformandspectacle/light-sensor-pi/blob/master/foot_light.py

And here’s a video of the final result…

So what’s the lesson?

Prototyping is so often a matter of gluing things together. Tom Armitage has written and talked about this much better than I can. It is fascinating to dip my toe back in and see how far things have moved in a tiny amount of time.

Roll on more experiments with small computers in The Small Museum.

Standard

Small Museum for Smalls

I came to The Small Museum as a visitor today, along with a couple of little visitors.

Henry checked out the not so colossal foot. We tried to fit it in some size 5s but sadly it didn’t fit!

Arthur put the objects in height order…he remembered what George had said about Nandi Bull being bigger than a tree, so guessed he was the biggest.

IMAG5580

Felix had them enthralled with his sensor which made a picture of the foot get bigger and smaller on the screen…

IMAG5571

And, like everyone else, they signed the visitors book.

IMAG5584

Thanks for making us so welcome!


Day 6: Henry Salt

Today I found out that a whopping 1,659 objects in the British Museum collection were bought from Henry Salt. A good few more were ‘donated by’, ‘from’ and ‘purchased through’ him, so the true figure is probably over 1,700.

Today we were investigating the Goddess Hathor. She originally sat as part of the Temple of Amenhotep III, but when that was ruined in an earthquake she moved to the Temple of Merenptah (a mere 8 minutes’ walk away, according to Google Maps!)

She was excavated (probably between 1824 and 1827) by Giovanni Battista Belzoni, who was working for Salt. And she was auctioned at Sotheby’s and bought by the British Museum in 1835.

Henry Salt (1780 – 1827) seems to have been a key figure for the British Museum’s Egyptian collection.

He became British Consul-General for Egypt in 1815. He sponsored excavations, carried out his own excavations and wrote on deciphering hieroglyphs.

Through his two agents (Belzoni and D’Athanasi) he built up his ‘First Collection’ within two years of arriving in Egypt. It was offered to the British Museum in 1818; it looks like the terms (£2,000) were finally agreed in 1821 or even 1823, as those dates crop up a lot.

His ‘Second Collection’ of over four thousand objects (collected 1819-1824) was sold to Charles X of France for £10,000.

His ‘Third Collection’ was auctioned off at Sotheby’s in 1835 (after his death). There were 1,083 objects on offer and the British Museum bought many of these. Hathor was one of them.

The Museum’s Egyptian galleries would look wholly different without the objects bought from Salt.

He was responsible for the paintings from the Tomb of Nebamun (around 1350 BC), some of the massive Egyptian sculptures that dominate Gallery 4, and some of the most popular mummies, including three of the animals.

I didn’t go for a pun in the title, but can’t resist…there’s no denying, he was a real Salt Seller.


Day 5: Video documentation

Amidst the wires and brains and things, we ended up making two main things yesterday. First, a way for the Museum in a Box to recognise the objects in it, in a very simple form. We stuck RFID stickers on each object, and attached a .WAV file to each tag, and then wrote a little magic dust to play the .WAV for each object. (You can hear the dulcet tones of volunteer helper and archivist to the stars, Geoff Browell, describing Hathor and the Colossal Foot.) You can see what it was like here:

Secondly, we took the Rosetta Stone as our object of focus, and worked on making it a physical trigger to hear the text on the actual stone in three slightly more modern languages: English, Greek and Arabic. Voila:


Day 5: Panorama

Yesterday we had lots of helpful visitors, which was lovely. Adrian McEwen worked on the Museum in a Box, Geoff Browell and Bridget McKenzie recorded voiceovers for some of our items, Frankie Roberto also recorded a voiceover and worked on our translation display for the Rosetta Stone (which emerged as Day 5’s object of focus), and Tom Stuart stopped by to take superb photographs like this and work on some code mugging for another project we’ll be working on soon. Thanks for this super pic, Tom!

The Small Museum panorama

museuminabox, somersethouse, v1

Day 5: Box with a brain

Today we’re giving the box a brain.

Can the box know what’s in it? Can it know when you pick something up? Can it tell you what it is?

Adrian has brought his magic box of tricks, and his own (amazing) brain.

Arduino kit

Adrian’s Arduino kit

We are using RFID (Radio-frequency identification) tags to identify the different objects.

The RFID stickers were a bit big for most of the objects, so we mounted the objects on small plinths and attached the tags to those instead.

IMAG5509

Then Adrian did some magic…the RFID reader senses the tag, and the Arduino reads the tag’s ID and sends it to the Raspberry Pi (some readers can speak directly to Pis, but we didn’t have one like that).
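In that chain the Pi’s job boils down to: read a tag ID off the serial line, look it up in a catalogue, and play the matching audio file. A sketch of that lookup, with serial input mocked as plain strings – the tag IDs and filenames here are made up for illustration, not the real ones from the box:

```python
# Hypothetical tag-to-audio catalogue; real IDs come off the RFID stickers.
CATALOGUE = {
    "04A1B2": "rosetta_stone.wav",
    "04C3D4": "colossal_foot.wav",
}

def wav_for_line(serial_line, catalogue=CATALOGUE):
    """Pull a tag ID out of a raw serial line and return its WAV, if known."""
    tag = serial_line.strip().upper()
    return catalogue.get(tag)

chosen = wav_for_line("04a1b2\n")  # a line as the Arduino might send it
print(chosen)  # rosetta_stone.wav
# On the Pi, you'd then hand `chosen` to an audio player such as aplay.
```

On the real box the lines would come from something like pySerial reading the Arduino’s USB port, but the mapping logic is the same.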

IMAG5510

And now when you put the Rosetta Stone on the reader you can hear what it is and a translation of the text.

IMAG5518

Now we’re going to record the names and label text for all the objects.

And we’re (well, Adrian is) going to set up an infra-red distance sensor to allow us to play different translations of the Rosetta Stone (a different language plays depending on the distance).
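Choosing a language from the distance reading is just a matter of banding the sensor value. A guess at how that might look – the band thresholds (in centimetres) and filenames are invented for the sketch, not taken from Adrian’s setup:

```python
# Hypothetical bands: hand close = English, middle = Greek, far = Arabic.
BANDS = [(20, "english.wav"), (50, "greek.wav"), (100, "arabic.wav")]

def translation_for(distance_cm, bands=BANDS):
    """Return the WAV for the first band the distance falls inside."""
    for limit, wav in bands:
        if distance_cm <= limit:
            return wav
    return None  # nothing in range: stay silent

print(translation_for(12))  # english.wav
print(translation_for(35))  # greek.wav
```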

The options are endless…

Could people add something to the object and send it on to someone else?

Could different boxes in close proximity talk to each other?

Can the box collect stories? Or responses to stories? Or answer questions?
