Using an Iterative Design Process to Re-vision Cultural Halls Through Digital Media

Last year we had an idea — one of those late-night-shouldn’t-you-be-sleeping kind of crazy ideas — for helping visitors form deeper connections with our Hall of Northwest Coast Indians. To my delight it was included within a slate of other prototypes for the space. For the past year and a half, a great team across the Museum has figured out how to make it work while we’ve iterated the idea with our partners in Canadian First Nations. Last summer we took it up a notch, bringing in telepresence robots from both Double Robotics and Suitable Technologies.

Unlike with other digital learning projects in development here at the Museum, I’ve tended not to write about this one. But last month the Wall Street Journal covered it in this article, entitled “Robo Tour Guides Are Ready to Roll at Museums.” I thought now might be a good time to explore how we used an iterative design process, with frequent public user-testing, to support an idea as it matured into a robust public offering.

BACKGROUND
Below is our first attempt to put this concept into words, as imagined sign copy:

Anthropologists who traveled from this Museum to the communities of the Northwest often carried with them pictures of the Museum and the people of New York. To explain their mission, they gave these pictures to informants in order to persuade them that objects and images they contributed to the collection would reside in a regal house in a great city.  

The process of representing others actually began, then, with natives visually imagining this very Museum… and you.

This Video Pole, and its sibling located within [community TBD], is a 21st Century version of this original exchange as we initiate new and contemporary ways to connect Museum visitors with this Hall and the cultures it represents.  

I squirm a bit re-reading this, 15 months later, but it still captures the initial idea. Nothing too sophisticated was envisioned. All we would need were two fixed cameras – one in our Hall and one in a partnering community. And nothing too obtrusive; it should be something a visitor could stumble upon. I imagined a visitor in our Halls, looking at a Haida puffin mask or ceremonial spoon, and then turning to see two video screens – one with the visitor on it and another showing an unknown space. Reading the sign, the visitor would realize the live feed was coming from the same community that produced the mask and spoon. In fact, someone from THAT community might even be there, or come by at any moment, and look at the visitor looking at cultural treasures from their ancestors.

At its core, the idea was to create a personal moment of self-discovery, inviting visitors to reflect on what it means in the 21st century to encounter 19th-century cultural treasures from indigenous communities.

And then, working with others (both our partners at the Haida Gwaii Museum and Museum staff), we turned it into something else – something it needed to be from the beginning, but something we couldn’t have discovered without months of prototyping.

ITERATIONS

Over a period of eight months, we ran six live prototypes with the public. After each we evaluated and refined the concept. Before each test we established clear questions we aimed to explore, and we discovered others in the process. We had to be open to whatever each prototype taught us, yet not read too much into each one’s limitations. We also had to be ready to pivot the project in whatever direction was required to achieve our original educational goals (to connect our visitors with the Hall’s collections and increase their knowledge of the collections’ contemporary communities of origin).

Below are a few of the key pivots we made along the way – just a handful of the many iterative cycles we had to loop through to get to tomorrow’s big step.

  1. THE GUIDING METAPHOR

The original name, “video pole,” was soon replaced by “Observation Station”. The rhyme felt nice and it telegraphed the core idea of “observation” – to see and be seen. However, the first conversation we had with our partner, the Haida Gwaii Museum, nixed that right away. The idea of being “observed” had too many echoes of past mistreatment, while “stations” had unintended military connotations. After a few more conversations we realized we needed a new metaphor. We landed on the idea of a “bridge,” something we’d invite our visitors to cross, connecting our two communities, and we’d create it through video. Thus, “video bridge.” And the name stuck.

  2. SIGNAGE

No one comes to our halls expecting to step into a live video exchange. So what sort of signs would we need to prepare them? And would the signs also need to answer or reinforce some of the key learning we wanted visitors to take away? The first sign had good information on it, but we quickly saw it wasn’t appealing to the families who made up the bulk of the participants. The second sign added more playful colors and call-outs, which seemed to have the right tone, but visitor interviews revealed they still had no idea where the video came from. Even adding a map didn’t help. But now some people were reporting the video was from “Hawaii,” which left us befuddled until we realized that, to these visitors, the location name – “Haida Gwaii” – did indeed scan like “Ha…Waii” if read quickly. We then decided that the video bridge itself had to provide all the required context – not a sign outside the experience which a visitor might or might not notice. In fact, it felt odd spending time trying to get people to read a sign when what we really wanted was for them to engage with the video. This led to experiments with putting different types of copy ON the video screen itself, running in the lower third. That produced an improvement – people now reported the location was “Hawaii, Canada.” Still wrong, and they sensed it this time, but it was progress. Then we supplemented the lower-third text with an introductory video on an associated iPad, filmed on Haida Gwaii, introducing visitors to the experience and transitioning them from their “regular” museum experience to a live feed with the community. Now everyone understood where the community lived. We turned the lower-third text off and they still understood. It took many iterations, but we finally learned that the most effective way to prepare visitors for the experience was not a sign but a video directly engaging them.

  3. VISIBILITY

The original idea was for visitors to stumble upon the Video Bridge unexpectedly. However, our set-up could hardly have been more obtrusive. Our first prototype offered two large video screens, one atop the other, next to a large tripod-mounted camera, all in front of a table supporting the audio and video equipment run by a technician. Rather than offer a discreet, personal moment to discover, visitors encountered a film shoot to be avoided. Over time we moved the table and support staff out of sight, turned the large camera into something small that could be attached to the screen, and so forth. But we could never really get it all small enough to be unobtrusive. So we pivoted in the other direction – rather than face the Video Bridge into the alcove (for those within to find), we pointed it outside (to attract those outside to come in). Groups would come over together to interact with it, which generated attention and attracted others as well. We never decided it COULDN’T work as a private experience, but once we realized it couldn’t be effectively prototyped, we shifted focus, exploring what could support a visible group experience. We added elements that encouraged people to make big movements – such as waving to the camera – and focused on making it feel inviting (rather than like a film shoot to be avoided).

  4. AUDIO

A debate from the beginning was whether or not there should be sound. The initial idea was no sound. Instead visitors were encouraged to use pen and paper to write notes and share them through the video. This didn’t work at first, as the fidelity of pen on paper didn’t transfer via Skype video. But we eventually landed on nice thick markers, and visitors on both sides spent a good amount of time sharing their countries of origin and sending questions back and forth. At other times we offered sound – sometimes both sides could hear each other over speakers, while at other times only the person with a headset could communicate – and those sessions were just as engaging. Whether or not audio was available, visitors recognized the direct video connection as something they were used to at home – via Skype or FaceTime – and were interested in exploring how they could connect with those on the other side. Sometimes it was through talking or making signs, but other times it was through waving, or dancing, or acrobatics. The constraint of no sound forced people to be intentional, slow down, and get creative; the presence of sound was what people expected, and it facilitated quicker and, more importantly, more substantive communication.

  5. CONNECTIONS

Is the video bridge designed to connect a visitor with another PLACE or another PERSON? This is also something we explored throughout. Was it only meaningful if someone was on the other side? If so, did we need pre-arranged facilitators or guides, as fillers, or should we schedule the Video Bridge to occur during public events, to fill the room? Or maybe simply the opportunity to look into another community, as through a window, was meaningful enough, and we could let whatever might happen just play out? It became clear over time that nothing was more engaging than a face on a screen. But we still aren’t sure whether it was essential, leaving this too as an open question worth exploring.

ROAMING TELEPRESENCE ROBOT

After eight months of prototyping (and thanks to the generosity and patience of the Haida Gwaii Museum), we had learned enough to decide whether it was worth further exploration. We had data both from our own internal evaluations and from an outside firm. And during that decision process I encountered my first roaming telepresence robot. These are robots controlled over the Internet, designed for basic movement – rolling forward and back, turning right and left – with a video screen on top so the controller can speak with people around them. In other words, these machines, which have been around for about two years (and were recently featured in this episode of Modern Family), are designed to create a sense of presence for someone who is actually somewhere else.
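
For those wondering what “controlled over the Internet” actually amounts to, the driving model is remarkably simple. Here is a hypothetical sketch in Python (emphatically not the real Double Robotics or Beam API; the class, command names, and JSON format are all made up for illustration) of the kind of command loop involved: a few movement verbs sent over whatever network connection the video call already provides, while everything hard – motors, balance, the video itself – stays on the robot and in the video service.

```python
# A hypothetical sketch (not the actual Double Robotics or Beam API) of the
# control model behind a roaming telepresence robot: a handful of movement
# commands sent over the Internet, alongside an ordinary two-way video call.
import json
import time


class TelepresenceDriver:
    """Sends basic drive commands (forward, back, left, right) to a robot."""

    def __init__(self, send):
        # `send` is any callable that delivers a JSON message to the robot,
        # e.g. over a connection the video-call session already keeps open.
        self.send = send

    def drive(self, direction, seconds=1.0):
        if direction not in ("forward", "back", "left", "right"):
            raise ValueError(f"unknown direction: {direction}")
        self.send(json.dumps({"cmd": "drive", "dir": direction}))
        time.sleep(seconds)                      # keep moving for a moment...
        self.send(json.dumps({"cmd": "stop"}))   # ...then stop


if __name__ == "__main__":
    # Stand-in transport: print the messages instead of sending them anywhere.
    driver = TelepresenceDriver(send=print)
    driver.drive("forward", seconds=2.0)  # roll up to a visitor
    driver.drive("left", seconds=0.5)     # turn toward the next alcove
```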

And if you had described this to me in advance, I wouldn’t have believed it could work. But it did. At a conference in April, during the cocktail party, I felt just as connected to the couple driving around the room as I did to the others I met that night. So when we learned the Video Bridge was approved for continued experimentation (hurrah!), it seemed like roaming telepresence robots might be exactly what we were looking for.

From the beginning of the project we had been trying to bend technology to our methods, methods which were continually shifting with each iteration. We still had many questions left and directions we might explore. But the idea of a roaming telepresence robot changed all that. The technology comes with its own constraints, and we can now start exploring how the affordances that remain can help us achieve our educational goals.

With a robot in each community, it still makes sense to call it a video bridge. Someone from Haida Gwaii, for example, can roll up to visitors in our Hall and invite them to take a tour of our Haida alcove. Other visitors can take a seat and drive around the Haida Gwaii Museum, led by a tour guide in the Northwest. But now we won’t need a pre-recorded video to provide context – the person projected on the robot will do that. And nothing is more visible than a robot rolling through the Hall – it’s a magnet that invites people to come check it out. This also solves the audio question – the robot makes no sense WITHOUT audio, and the idea of a robot roaming around WITHOUT a person is just creepy (like a headless ghost).

So looking to a roaming telepresence robot as a solution for the Video Bridge has helped us make some decisions about what sort of experience we want to offer the public. At the same time, it took months of iterations to identify our needs so we could recognize how the robots might provide a good solution and, if they work, what we might lose as a result.

FIRST TEST

So last July we did a simple test. First I drove our loaned machine, from Double Robotics, around my office.

[Photo: Me in my office through the robot.]

Then I drove the robot on Haida Gwaii around their museum, guided by their director. I had virtually visited their museum many times through the Video Bridge pilots. But that day was the first time I felt like I was there, that I moved through the space, that I understood how it all connected. I looked at the totem poles I had seen so many times through the prototypes, but it felt like I was seeing them for the first time. I even rolled up to the window, looked out, and watched people swimming in the Pacific Ocean. But I didn’t just feel the way you might now, looking at the photo below of what I saw, because I didn’t just look at a photo. I had an embodied experience, projecting myself through an avatar that I controlled. And that day I felt like I saw the Pacific Ocean, in person, while in actuality I was mere miles from the Atlantic. After years exploring the learning potential of embodied virtual worlds like Second Life and Minecraft, I can understand why.

[Photo: Touring the totem poles.]

[Photo: The driving interface.]

[Photo: The Pacific Ocean. Click to see people swimming.]

[Photo: Scott, my trusty tour guide.]

Over the rest of the summer, we explored how the Video Bridge functioned in the form of roaming robots. We ran a number of public tests, using not only the Double but also Suitable Technologies’ BeamPro, and working not only with our partners on Haida Gwaii but also with the Ahtsik Native Art Gallery on Vancouver Island.

And what we learned, as documented in the recent Wall Street Journal article, blew us away. The learning objectives we had worked so hard to achieve seemed to be met within the first few moments of the visitor’s experience with the virtual guide. And rather than our needing to strategize how to get visitors in front of the video bridge screen, the screen could now bring itself to the visitors. And whether young or old, no matter where they came from, all types of visitors found ways to connect with the indigenous guides taking them through our Hall, helping them develop a “need to know” about the objects behind the glass and their communities of origin.

So with that, we pass from our more experimental, rapid-prototyping, high-risk testing of the video bridge into an ongoing, public prototyping phase. At this point, we have settled on the technology – the BeamPro – and the presentation format – AMNH visitors will encounter a virtual guide on a telepresence robot and be offered the opportunity to drive a robot of their own located within a host community – but the best way to integrate it into the program flow of the Hall and the Museum, as well as that of our First Nations partners, has yet to be worked out.

Watch this space as we learn more.
