April Update: A Virtual Shark You Can Hold in Your Hand

This post is part of an ongoing monthly series focusing on the Museum’s Science Bulletins team and our current efforts to create and test prototypes of Hall-based AR and VR interactions built from our scientists’ digital science data, and to share some of the lessons we learn along the way.

This past year we’ve been exploring how our eyes, and sometimes our ears, can be invited to play Let’s Pretend: imagine you are seeing a CT scan of a shark in front of you, or imagine you are hearing the HVAC in the Big Bone Room. Our hands, so far, have been left out of this virtual party: imagine you can touch the shark, or pick up that dinosaur fossil. We’ve used HTC Vive’s controllers to manipulate things – imagine you can click here to make the weevil turn transparent – but we haven’t had a tool that invites visitors to (pretend to) touch augmented objects with their own hands.

That is, until now.

Family Game Night, Public Programs, 3/31/2017

One of the products coming out of this year’s Consumer Electronics Show was the Holocube, from Merge:

We knew its developers (a chunk of the team that worked with us on such augmented reality experiences as MicroRangers now works at Merge). When we received our developer version, and access to the SDK, we were excited to learn what we might find if we brought the same digital specimens we’ve been porting to platforms such as Hololens, Tango, and Vive into something designed for a visitor to hold in her hand.

The cube is designed to work with their Merge VR viewer – a snazzy, museum-friendly Google Cardboard-style device – but it can also work with an unadorned mobile device. With a nice weight in your hand, and slightly squishy like the free swag you often find at a conference, the cube offers a visual cue to the mobile device’s camera. The app decides what to do with what it sees. In other words, the Holocube is dumb – just a collection of prettified QR codes – with all the intelligence residing in the app. And that’s what makes it so compelling – the technology is invisible to the user (like the paper cards in Disney World’s Sorcerers of the Magic Kingdom). And the code can always be updated, with the possibilities limited only by our imagination (and resources).
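For readers curious about the plumbing, here is a minimal sketch of the general marker-tracking approach, written against OpenCV’s ArUco module rather than Merge’s actual SDK (the marker dictionary, marker size, and camera calibration values below are placeholder assumptions): the cube contributes nothing but printed, trackable patterns, and the app decides what to anchor to the detected pose.

```python
# Minimal sketch of marker-driven "holdable" AR, assuming OpenCV's ArUco module
# (opencv-contrib-python). This is NOT Merge's SDK -- just the general idea that
# the cube is only a set of fiducial markers and all intelligence lives in the app.
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

# Placeholder camera intrinsics; a real app would use calibrated values.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
    if ids is not None:
        # Assume 6 cm markers. rvecs/tvecs give each cube face's pose relative to
        # the camera -- the anchor where the app would render the shark, the foram,
        # or the bat skull, and which a simple code update could swap out.
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, 0.06, camera_matrix, dist_coeffs)
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("holdable AR (sketch)", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```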

We started with our mako shark and created two different experiences. The first surrounds the cube with a rock column, which the user can rotate or turn over as the shark ominously circles round. In this example, the user is not holding the shark but rather using the Holocube as a device to control its movement. In the second experience, the cube IS the shark – and invites the visitor to play with it, as one would with a wooden block. You can turn the shark upside down, move it in the air like a toy to eat your friend, or move it through the camera to reveal the layers within the CT scan. We also added some touch features – tap the screen (tapping the cube itself would probably be better) to watch the jaws open and close.

The third experience comes from our recent youth course on microfossils. The youth went into our collections and researched some previously unstudied forams (each the size of a grain of sand). One of their CT scans was turned into a digital specimen you can hold and, with a click, look inside as it separates into two halves. And the final experience is a bat skull, which, like the previous two digital specimens, you can observe and interact with through physical manipulation.

Below is a short video of all four, on my desk:

It took just a few days to code the app but, once it was up and running, we took it out to the Hall of Biodiversity, where we just happen to have the mako shark overhead and a bat on the wall. We set up an iPad on a stand between the two and invited passers-by to “hold a shark in their hand.”

After months of anxiously handing children devices that cost hundreds, if not thousands, of dollars, it was quite a relief to watch them fight over a block of foam. And yes, people loved it. When you work with a new piece of technology, you need to spend time and energy learning how to operate it. But everyone knows how to “operate” a block. Its design is an invitation to play, and that’s what people did. They picked up the Holocube and marveled at the digital specimens in their hand. And while they played with the specimen or its animations, we offered facilitation that connected the toy in their hand back to the scientist who produced it, the tools they used to create it, and the research questions they used it to explore.

We tried other directions as well – accessing the front camera, rather than the back, so you could see yourself with the object, and making a smaller one-cubic-inch cube (thank you, 3D printer!) to see if children preferred the shorter distance. But after observing and interviewing around 150 people, here are some of the key lessons we took away from this round of prototyping:

  • HANDABLES ARE COMPELLING: Okay, it’s not a real word (at least not yet) but visitors LOVE “handables” – augmented objects you can hold in your hand.
  • HANDABLES ARE INTUITIVE: It was very satisfying to offer visitors an experience with a high level of innovation but a low learning curve, as its interaction design intuitively builds on visitors’ prior knowledge of working with blocks.
  • PLAY IS ENGAGING: As much as visitors enjoyed the moment of designed discovery – the shark swimming around your hand, the microfossil that opens – they were equally engaged, if not more so, with their ability to simply explore the specimen through non-directed play.
  • UNADORNED ASSETS ARE EDUCATIONAL: Offering the object on its own, with context provided by live facilitation, provided visitors with a direct line to achieving our intended learning objectives.

Family Game Night, Public Programs, 3/31/2017



Using Mobile VR to Convey WONDER: An Interview with Sara Snyder, the Chief of the Media and Technology Office at the Smithsonian American Art Museum

Below is my most recent post on DMLcentral. You can read it here or just continue below:

Last year I was gob-smacked on a trip to D.C. by the temporary WONDER exhibit at the Renwick Gallery (and wrote about it here). Last fall I was excited to see the Gallery release a mobile VR version of the now-closed exhibit. I reached out to Sara Snyder, the Chief of the Media and Technology Office at the Smithsonian American Art Museum, to learn how and why it was developed.

Sara, thank you for joining us today. Why don’t we start by introducing your museum (the Smithsonian American Art Museum) and your department (the Media and Technology Office)?

When people think of the Smithsonian, they often think of the big museums on the mall, but the Smithsonian American Art Museum (SAAM) and its branch museum, the Renwick Gallery, belong to the “off-mall” contingent of Smithsonian destinations. SAAM shares a grand, historic building—the old Patent Office—with the National Portrait Gallery, in the Penn Quarter neighborhood.  The Renwick Gallery, just under a mile away, is a fabulous little jewel of a building hidden on the stretch of Pennsylvania Avenue better known for another tourist destination, the White House.

In the Media and Technology Office (MTO) we manage SAAM and the Renwick’s websites, blog, and social media accounts, and we lead emerging media projects, such as our current experiments in VR.  We produce all of the in-house video and live streams for the SAAM YouTube channel, and also provide day-to-day IT support for the museum’s staff.  In addition, we oversee the Luce Foundation Center, an innovative visible storage space within SAAM.  For a fairly small department, we Media and Technology staff wear a lot of hats!

For sure! To be frank, I’ve spent my life visiting museums in D.C. but had never heard of the Renwick. Then EVERYONE I knew told me your WONDER was the D.C. exhibit not to be missed. In fact, when I saw it last May, I visited it twice – once on my own, when I was in town for a conference, and then again that same week, once my family had joined me. I did NOT want them to miss it. For those who couldn’t make it, how do you even begin to describe what they missed?

Ha, you are not alone!  For many years, the Renwick was something of a hidden gem, a place known primarily to D.C. locals, or devotees of craft, but not, perhaps, on the top of a tourist’s “must-see” list.  Then, in 2013, the museum building closed for a two-year renovation.  While it was closed, then-curator Nicholas Bell conceived of the idea to invite contemporary American artists to completely take over the nine galleries in the building, an unprecedented opportunity for the Renwick to reinvent itself as a 21st-century destination for art lovers.

The result was the WONDER exhibition, a magical, immersive experience unlike anything people had ever seen.  As Nicholas said in the introductory video, the artists took everyday objects that you wouldn’t necessarily expect to see in an art museum—tires, index cards, sticks, string—but “pulled them together in such a way as to completely amaze you.”

It’s true. I was amazed.

As you experienced, the show had incredible word of mouth and social media exposure, which led to huge attendance figures.  Visitors of every generation truly were overcome by a sense of wonder, and people came back (as you did), again and again.

So let’s shift over to the virtual reality app, Renwick Gallery WONDER 360. Did you know from the beginning you’d be creating this app? How’d it come about?

We had no idea we’d end up creating the app!  Our energy back in 2015 was focused on producing video content, launching a refreshed Renwick website, and on re-orienting our social media strategy towards Instagram.  However, it was fortuitous that in 2015, VR hit the mainstream, and hardware and software for producing and publishing VR experiences became much more accessible and affordable than it had ever been before, putting it within reach for even a non-profit art museum.  We knew that WONDER was special, and we longed for a way to preserve the experience for posterity.  That same year, MTO staffer Carlos Parada made some contacts with an innovative startup called InstaVR at the SXSW interactive conference, and with their help, we realized that we would be able to shoot, create, and publish Renwick Gallery WONDER 360 using our own equipment and staff, and without the huge budget that an outside contractor would have required.

Was the decision to make the images 360 photos versus a 360 film of the exhibit motivated more by aesthetics or technical constraints?

It was definitely because of technical, practical, and budgetary constraints.  We would have loved to have done video capture…or even better, full 3D scanning and photogrammetry. But that just wasn’t possible, given our resources and incredibly compressed timeframe.  The full show was only up for six months, and the galleries were almost never empty, so we were limited to shooting before opening hours.  I’m actually still amazed that we pulled it all off!

What have you learned from WONDER 360, both through producing it and seeing how visitors are using it, that will inform your future uses of the medium?

My takeaway from producing the app is really the same as my takeaway from seeing the success of the WONDER exhibition: content is everything.  The app has such good reviews because the artworks represented within it are beautiful and astounding.  I don’t want us to employ a new technology—now or in the future—just for technology’s sake.  I want us to employ VR in the future because it is the right tool for the job, and because it enables our visitors to more fully enjoy and appreciate American art.  This is something I want to hold onto as we enter our next phase with more robust, gaming-quality VR.

If you knew then what you know now, and if the decision to make the app had been part of the initial design of WONDER, how might the VR experience have been designed differently? And how might it have been integrated into the experience of the exhibit itself (not just offered as a digital postcard, a virtual memento), in the way visitors’ photography was also incorporated?

Looking back, perhaps the VR app could have had more features, or contained more variety of photographic angles.  And if it had been available earlier, we certainly could have promoted it during the exhibition or in the galleries—something we did not have the time or budget to do.  But the app wasn’t intended to be, and never could have been, a substitute for the real life exhibition.  The whole point of the show was to be present, and to have the emotional experience of being dwarfed by the scale and juxtaposition of materials in the physical installations.

The truth is, I actually sort of like the fact that there wasn’t obvious technology incorporated into most of the galleries (save what visitors carried in their own pockets) because it meant that the focus was on the experience of being present in a room with an amazing artwork and a bunch of strangers.  Why look away from that gorgeous rainbow to tap on some kiosk or stare at a screen?  We did incorporate social media into one screen in a central space, but I think that feature was only interesting because it was so organic and unfiltered, coming from the minds of other visitors.

WONDER was a show that people loved to experience together.  VR isn’t social yet, so that specific technology just couldn’t deliver on the power of sharing the same way that Instagram could.  Instagram let people show their friends what they were seeing, and it looked amazing, which is why it, not VR, was ultimately the defining technology for the WONDER show.


Lessons Learned in the Iterative Design Process with AR Constellation

Next week at the annual American Alliance of Museums conference in St. Louis, I’ll be presenting with John Durrant, Marco Castro and moderator Lizzy Moriarty to “demystify VR content development and offer attendees the chance to get their hands on some VR tech.” In advance of the session (on Monday, May 8, at 8:45 am) I was asked by the Center for the Future of Museums to share a preview of some of the session, which you can read here or below.

Last fall we launched a new initiative at the American Museum of Natural History in New York City: develop recommendations for engaging visitors with modern science practices by adding digital layers to permanent halls. What this looks like on the ground is working with one of the Museum’s scientists (we have over 200) and then turning their digital specimens (CT scans, genomic data, astronomical observations) into a digital asset we can port into a variety of digital tools to be tested with the public.

For example, we tried various ways of using digital astronomical data to explain the three dimensional nature of constellations. When people look up at the night sky, all the stars seem to lie in a single plane, all the same distance from Earth. In fact, stars occupy a vast three dimensional space—each a different distance from our planet. If you could change your perspective by flying off Earth to somewhere else in space, changing the distance and angle between yourself and each of the stars, you would see Orion “distort”— in other words, the 2D picture we create by drawing imaginary lines from star to star would change shape.
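A few lines of code make the same point. The sketch below uses made-up, roughly Orion-like star positions (not our actual Digital Universe data) and a simple pinhole projection to show how the 2D pattern changes when the observer moves:

```python
# Minimal sketch: a constellation's 2D shape depends on where you stand.
# Star positions are illustrative placeholders, not real catalog values.
import numpy as np

# Hypothetical stars: similar directions from Earth, very different distances (light-years).
stars = np.array([
    [100.0,  20.0,  640.0],
    [120.0, -15.0, 1340.0],
    [ 90.0,   5.0,  860.0],
    [110.0,  -5.0,  540.0],
])

def apparent_shape(observer, stars):
    """Project stars onto the observer's sky plane (simple pinhole projection along z)."""
    rel = stars - observer            # vectors from observer to each star
    return rel[:, :2] / rel[:, 2:3]   # nearer stars shift more when the observer moves

earth = np.zeros(3)
elsewhere = np.array([0.0, 0.0, 400.0])  # "fly" 400 light-years toward the stars

print(apparent_shape(earth, stars))      # the familiar pattern seen from Earth
print(apparent_shape(elsewhere, stars))  # the same stars, a visibly different shape
```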

See how long it took me to explain that? We wanted to learn if we could use augmented reality (AR) or virtual reality (VR) to get visitors there faster. Working with a slice of our Digital Universe database, we created a digital asset that simulates a number of constellations, like Orion. Then we tested a variety of ways for people to interact with this digital simulation of space:

1: THE TANGO EXPERIENCE: In our Hall of the Universe (HoU), visitors viewed a virtual Orion constellation on a Tango handheld device, which they could move forward and backward to see the constellation’s shape/lines change. Tango is like an iPad with one key difference: it knows where it sits in the space around it. This means, for AR, you can place augmentations in space and then use your Tango to walk around or (in the case of stars) among them.

RESULT: Failure. Visitors did not leave having learned that stars sit in a 3D space. We concluded that was in part because constellations are too abstract (the points in a constellation represent real stars but the lines between are just pretend). But what if we made the experience less abstract, something you’d notice was different if its shape changed, like your face?

2: YOUR FACE IN SPACE: It’ll take too long to explain here, but humor me and presume there’s a good reason why we have a computer app that lets you turn your face into a constellation. We took the app into the HoU and invited visitors to map points around a live image of their face, switch the star names on and off, and then rotate their perspective around the new constellation.

RESULT: Not there yet. On one hand, it seemed to work to ask visitors to use their own face as a metaphor for stars in relationship within a constellation; lowering the level of abstraction was effective. However, many visitors experienced the rotation of the constellation image as the constellation itself rotating (which is incorrect), not as the visitor’s perspective shifting through space. What if visitors’ misunderstandings about stars in 3D space were just being reinforced when shown through a 2D medium? And what if, instead, we offered the experience through a 3D medium?

3: ENTER HOLOLENS: Visitors now viewed a virtual Orion constellation (as well as three smaller constellations) through a Hololens device. (Hololens is an augmented reality headset that enables the wearer to see, and navigate, computer-generated images or landscapes.) Walking back and forth, and around, visitors saw the constellations as existing in a 3D space, with a backdrop of real stars.

RESULT: It worked! While the first iteration failed to communicate the core idea, and the second iteration was successful half the time, the Hololens version worked EVERY time. As the visitor walked around or through the constellation, the stars “moved” at different speeds, depending on their distance from the observer. But could we raise the bar, designing the experience to require a visitor wearing Hololens to interact with other visitors, to make it a social experience?

4: ESCAPE THE PLANET: Over a four-day design sprint, co-developed with Museum youth learners, we created a prototype of an escape room with an astro theme: Escape the Planet. (Escape rooms are physical adventure games that require players to solve a series of puzzles.) One of the puzzles required a group of players to use a UV flashlight to find clues in posters that identified one particular constellation. A different player, wearing the Hololens loaded with a new version of the AR Constellation experience, had to look up the name of the closest star to Earth within that constellation (also known as its catalog number) so another player could record those digits and use them to open a padlocked case.

RESULT: Hololens users playing Escape the Planet maintained social contact with the rest of their group, and appeared to have done so more often and with more intensity than during the first three iterations. But was this due to features of the new version of AR Constellation, or due to placing it within a game?

5: STANDALONE AR: The week after testing Escape the Planet, we took this latest version of the AR Constellation in Hololens back out into the Hall, specifically to watch how users interacted (or not) with the others in their party.

RESULT: Most visitors using the standalone AR said that wearing the Hololens did not affect the way they related with the people around them (in other words, they ignored them and focused on the AR Constellation experience). This is in stark contrast with the Escape the Planet players who not only reported a “heightened desire to cooperate” but expressed a need to share.

And so it goes. Now, a few months later, we are porting a number of our digital specimens into a holdable AR device called a Holocube. Do you think visitors would like to hold a constellation in their hand? It might be time for a new iteration…

 

 


Does Digital Media Have A Place in a Hands-On Science Learning Space: An Interview with Rebecca Bray on the National Museum of Natural History’s Q?rius

 

Below is a re-blog of my most recent post on DMLcentral.

Rebecca Bray is the Chief of Experience Development at the Smithsonian’s National Museum of Natural History in Washington, D.C. I reached out to her to learn about how the Museum developed and now runs its innovative Q?rius (pronounced “curious”) space, opened in 2013 as an interactive and educational lab with microscopes, touch screens, interactive activities and a “collection zone” housing over 6000 different specimens and artifacts visitors can handle.

In our conversation below we explored their design process, the role of youth learners, the pros and cons of integrating digital media into a hands-on learning space, and more.

Rebecca, welcome to Mooshme. So how do you describe Q?rius?

Q?rius is a space, an interactive space, in the museum. We always said that it’s not an exhibit, right?  It’s really an interactive learning space, designed mainly for 10 to 18-year-olds and their loved ones.

The space itself is really very flexible. Everything there is on wheels, except for a large collection space, and even in there everything is very modular and flexible; but it’s really meant to be a space for visitors to do hands-on interactive work around the specific kinds of natural history science that our researchers do. And it’s also a space for the education team to experiment with new ways of interacting with the public; we think of it as our learning lab as well – we do a lot of experimenting and testing of new ideas in there.

image credit: James Di Loreto/Smithsonian

When you think about it overall, what would you identify as some of the key innovations you took on?

So many things! When we were designing the space we had a lot of conversations about this target audience of 10 to 18-year-olds. The outside exhibit design company that we were working with was saying at first, “Oh, you should have a lot of technology, because teens love technology.”  But after we did some front-end studies we saw how much people really value their encounters with the authentic objects of the museum. So we said, “Let’s actually de-emphasize the screens in the space and have the focus be on the objects and doing things with the objects.”

And so, we did that.

But we also at the same time were trying to do a bunch of stuff with screens. We wanted them to see a video of the scientists in the field and we thought the screens could really lead people through the activity. So, you would have a touch screen and then you could kind of click through and it would give you instructions about how to interact with the objects in the room.

After making that, and putting it out there, and having it in the space, we pretty quickly realized that it wasn’t working. People couldn’t do both – they didn’t want to both interact with the screen and with the objects. It was just too much.

So, we actually stripped away even more of the screens from the space. We made the activities more about the objects themselves, with very simple paper instructions, and then kept the screens for very particular purposes, which was really to access more information about the objects themselves, separate from the activities. So, that was an important learning. But it’s still an ongoing question about this balance between screens and non-screen experiences.

What else do you need to consider when thinking about integrating screens?

Making sure that we’re designing for social experiences between groups. Physically designing the space, so that people can fit around things, in the right way. Making sure that they’re big enough for people.

I think at some point in the design process we thought about having everybody carry around an iPad that would be like their personal digital Field Book as they go around the space, collecting objects. But, again, we found that they weren’t social enough and we also had this challenge of object versus screen.

Yet you found another way to do the Field Book, which my daughter enjoyed when we visited.

Yeah, so if I had a million dollars we would redesign the Field Books. And we actually knew that even going in. We knew we didn’t have enough money to do it perfectly, but we decided we had enough to pilot something that would still be an enjoyable experience. We have lots of visitors who really like it, and they collect their digital collections into Field Books and look at them at home; but yeah, I mean, I think with software you need to have enough money to continuously upgrade it as you learn more.


What role do youth play in supporting the space?

We’ve had them continuously involved in giving feedback. We have over a hundred teen volunteers, and some of those have been leveled up to be captains. They help us develop activities and programs and give us feedback on a lot of stuff that happens in the space.

How do you design new activities for the space?

Since we use an iterative design process for the activities that we build, we’ll work with a scientist and our design team of educators to develop some very rapid prototypes. And then we’ll go out and do testing and observations. We have developed some assessment instruments that we use to test things and to see, really to understand, how visitors are interacting with it and how to move along a spectrum of understanding. We’ll test things at least 10 times and collect a lot of data about how people are interacting with it and then we’ll use that to refine something as we go along.

A big part of this has been creating a culture of rapid prototyping and testing within our department and helping to spread that to other departments, to test everything that we do in a pretty deep way, beyond just going to visitors and asking, “Do you like this title for this activity?” It’s a difficult thing. It takes a lot of time and you really need to train your staff to know how to do it.

In fact, when we were in the conceptualization stage, we were able to go into the museum and do a bunch of testing of the kinds of activities that we knew we wanted to do. And it was so useful. I wish that we had actually been able to do more of that, to really spend some time actually making the stuff that we thought was going to be in the space and getting it in front of visitors and being really reflective and really thoughtful about how they were responding to it.

 


Creating Context to See the Unseen: An interview with Jasper Buikx of Amsterdam’s microbial museum

Jasper Buikx, microbiologist at the Micropia Museum, in Amsterdam.

How do you design a museum around microbes, a subject that is all around and ON us, yet still remains out of sight? How do you design a space to enable visitors to see the unseen? To find out I spoke with Jasper Buikx, microbiologist at the Micropia Museum, in Amsterdam.

Jasper, welcome to Mooshme. What is Micropia, and what is your role there?

Micropia is officially the first and only microbe museum in the world. I, as a microbiologist, am responsible for the content that we produce and that we show our visitors and schools. I make sure that everything we say is correct, which is quite important, of course, for a scientific museum.

So, yeah – it’s a really cool job.

Is Micropia more like a natural history museum with specimens on display that represent the natural world or more like a zoo with live animals?

That’s what we have been wondering ourselves. A traditional museum often holds what we call a dead collection; its objects are deceased organisms that are on display. Micropia is in that sense special, because we have over 300 different species of microbes there, alive, that people can actually see with the naked eye. So in that sense it’s kind of a new type of museum, a combination of micro-zoo and museum. It’s something new.

The wall with Petri dishes containing different micro-organisms. Photo Micropia, Maarten van der Wal


How did Micropia come about and why do you think Amsterdam is the city that is hosting the first microbial museum?

If you look at the history of microbiology, the Netherlands has always played a big part. Antonie van Leeuwenhoek was a cotton trader from Holland, and he was the first to actually discover microbes and make them visible. So in that sense, the Netherlands discovered the microbial world.

We have been designing Micropia for about 12 to 13 years, which is a lot for a museum that’s not that big. It was mainly trying to find out how we can make these invisible worlds visible. It kind of sounds simple – researchers have been doing this for centuries with simple microscopes – but that is not simple for the general public. So we had to find a way to get them to use such a complex instrument in a way that feels natural to them. So most of our design time, if you can call it that, has gone into making an exhibit that makes it visible in a smart way.

Of course we still use the microscope, but such a microscope is about 50,000 euros and has to be handled with care by the lab techs and professors, not by the general public. So we’ve designed a complete exhibit around a 3D viewer with which you still have the same 3D view that you would normally have through the microscope, but now with a simple joystick. You can operate the entire machine with a simple joystick as opposed to the 20 to 30 different knobs and buttons that you normally have on a microscope. Plus we’ve added a nice interactive screen next to each microscope so that not only you but your entire family, your colleagues, or whomever can also join in to see what’s under the microscope.

I am hearing that one of your approaches is to take what are typical tools of science and adapt them so they can be used by visitors in the museum.

Yeah, that’s true.

Are there other tools of science that you’ve also made user-friendly to help the visitors see the invisible world around them?

We understand that if you have 40 microscopes in a row, of course it’s going to be boring by microscope number five. We get that. So we show the microbial world in different ways. We have a lab – all those different kinds of microbes have to be cultivated (I mean, they don’t live as long as one of our elephants, so we have to cultivate them). And a lab, to a lot of people, is something they only know from CSI or a Hollywood movie; they rarely see an actual lab themselves. So the lab in our case is also part of the experience. It has a lab assistant telling stories, explaining what they do in the lab.

Discover your own microbes with the body scan. Photo Micropia, Maarten van der Wal


We also have a lot of interactive exhibits where, for instance, we have a body scan: you stand in front of a big screen, it scans your body, and then you can kind of travel through your own body and find out that it’s completely filled with microbes. Then you can actually get to know yourself in a new way.

I presume this is a simulated scan. It’s not actually looking at the real microbes in your body, right?

We get that question a lot. It’s simulated. We find people expect us to give them a small sheet of paper at the end of their visit saying, “You should visit a physician because you have this and this disease.” No, it’s a simulated scan of course, but based on actual scientific facts. It gives a general story about what kinds of microbes live on and in you, based on your size, your width, and your age.

So some of the things that people experience are built from actual, real data using digital tools of science, while others are simulations to help the visitor understand things that they otherwise wouldn’t.

Yeah, exactly.

Is there anything the Museum does to make science data visible to the public?

One of the more interactive parts of Micropia is that you can collect microbes, kind of like you collect microbes on a day-to-day basis. I mean, if you touch something or you kiss or you eat or whatever, you collect microbes, and you can also do that in Micropia. You collect them on a stamp card, which sounds very simple, but the idea is that you put your stamp card with your microbe collection on a special table with a scanner, and it scans your collection and enlarges it on the elevator wall. We have a big elevator and we have about 12 or 14 meters of elevator wall where your microbe collection is then shown. When we tell people about microbes we say they are microorganisms that are too small to be seen by the naked eye, but there is such a huge variation in size between microbes. We needed a special exhibit to show people how microbes differ in size, so they can see that a fungus is 10,000 times the size of a bacterium and the bacterium is a thousand times the size of a virus. So in that sense we try to make data about size visible in a more understandable way.

Can you give me an example of an unexpected challenge you have had with Micropia since launch and how you responded to it?

If people see an elephant, they know, “Oh, this is an elephant.” But if we show them a bacterium or a fungus, you have to tell a lot. Because it’s invisible, it doesn’t have context. It’s way too abstract for many people. So you have to first bridge the gap between what people know and what they see. You have to continuously create context, which in some cases is very difficult, because for some people the context is completely different than it is for others. I mean, if you talk to students who are six or seven years old, you use completely different language than you use with adults. So finding the right context for the right audience has been a big challenge. Look around – there are many examples of microbes in your day-to-day life, so you have a lot of possible contexts to choose from. But making the translation to your specific audience is quite difficult. I mean, nobody knows microbes, at least out on the streets. So that’s been difficult.

What are you hoping the takeaway will be from visitors who come to the museum?

I hope that people kind of get the understanding that there is more than they can see, and that it’s not always negative. If you ask a hundred people out on the streets, “What do you think of a bacterium?”, 99 people will give you a very dirty look and will try to walk away, because they think bacteria are dangerous and disgusting and they want to get rid of them. But they rarely know that if you look at your own body, you have 10 times more microbes than human cells.

Your body is more microbe than it is man.

 

 


March Update: Paleo Behind-the-Scenes 360 Videos

This post is part of an ongoing monthly series focusing on the Museum’s Science Bulletins team and our current efforts to create and test prototypes of Hall-based AR and VR interactions built from our scientists’ digital science data, and to share some of the lessons we learn along the way.


Last fall, the 2016 Margaret Mead Film Festival Virtual Reality Showcase invited both general Museum visitors and Mead participants to explore new cultural perspectives through cutting-edge technology. The showcase, called the VR lounge, was held within an alcove of our Hall of Northwest Coast Indians.

Participants were invited to put on one of the 20 VR devices (Samsung Gear VR), sit on a stool, and watch a non-interactive 360-degree VR film. “Take cutting-edge virtual reality for a test drive and see how this new technology is transforming filmmaking,” we promised. “Experience the lives of nomadic cultures around the world and dive into the history of Cuban dance in this casual drop-in environment.”

Over the course of the weekend, over 1,000 people watched at least one video. An evaluation we ran on a small subset of that group found that we could offer a VR experience at scale, supported by volunteers, and that it engaged visitors in a fresh, new way with contemporary cultural content. While technical challenges were persistent, in most cases they were overcome and were secondary to the strength of the VR experience itself. Visitors and Museum volunteers alike left the experience wanting more 360 VR content about – and in more places within – the Museum.


Among the most frequently recommended locations were our dino halls. So this March we began prototyping 360 videos, to learn if 360 video could meet visitors’ interest in getting a peek behind the scenes at how the science performed in labs and offices around the campus intersects with their experiences within the permanent halls.

First, let’s talk about terms. At the Mead Festival, we used the term virtual reality, as that’s the current term of art. If you want to watch 360 videos from the New York Times, for example, you download their app called “NYT VR”. So that’s how we started, asking visitors if they wanted to try out a 360 VR behind-the-scenes experience.

We used six GoPros on a stick to film four scenes.

Scene 1: You’re starting in front of our T.rex, in our Hall of Saurischian dinosaurs, being led on a paleontology tour by a Museum guide as visitors around you ask questions.

Scene 2: You’re in the Museum’s Big Bone Room, as Danny Barta, a Ph.D. Candidate in the Museum’s Richard Gilder Graduate School, explains the dinosaur fossils that surround us.

Scene 3: You’re in a circular room, the office of Mark Norell, Division Chair and Macaulay Curator, Division of Paleontology, who tours us around his work space.

Scene 4: You’re back at the T.rex, with our Museum guide, who concludes the tour.

We wanted to learn how long visitors might want to be immersed in the world of a 360 video. We also wanted to learn if it made a difference if there was a meaningful relationship between the subject of the video and the location in the Hall where they watched it; a related but different question was whether or not the Hall could provide context to the video. Finally, did visitors prefer to have an immersive experience with wearable headgear (we used the Merge VR Goggles) or a social experience with a Samsung phone they could hold up and move around to explore the scene?

To address these questions we broke the larger narrative into three units:

Video 1: The Full Video – all four scenes, at 7 minutes long, to be offered by the T.rex, just a few feet away from where the camera shot the first scene.

Video 2: Big Bone Room – just Scene 2, at 3 minutes, offered by a new permanent exhibit about the Big Bone Room in the 4th Floor Orientation Center (the bone featured in the video is now featured in the exhibit itself).

Video 3: Mark Norell’s Office – just Scene 3, at 1.5 minutes, offered in the 4th Floor Astor Turret, a few floors beneath the office itself (which means the windows around you offer the same view as the windows in the 360 video).


After observing and interviewing 150 people, we learned some things.

Some (and far from all) of the key lessons we took away from this round of prototyping:

  • BEHIND-THE-SCENES 360 VIDEO MEETS A NEED: Visitors of all ages and backgrounds find the experience compelling. They are excited to peek behind the scenes, and 360 video is an effective way to offer a “you are there” experience.
  • VIDEO LENGTH WORKS: Visitors were comfortable sitting and watching a 360 video of seven minutes or less. That is an unusually long time compared with visitors’ average time spent watching hall-based videos. Many, in fact, wished it had gone even longer.
  • INTERACTION DESIRED: Most visitors want to be able to interact with the video in some capacity, either by zooming in on a particular bone or by getting more information about it. It didn’t matter if they were in the immersive or the social experience – getting to see it in 360 video suggested to them that it might be possible to interact with content in the scene.
  • FACILITATION REQUIRED: Using the VR headset with visitors still requires some facilitation, as most visitors are inexperienced with how VR headsets work. Setup, audio, and getting the video to play are areas of high facilitation need. While some Halls were quiet enough to let the Samsung’s speakers carry the audio, other times visitors required, or just preferred, the offered headset.
  • 360 VR IS A CONFUSING TERM: If we understand virtual reality as an experience that replaces what we see and hear, then VR, in this context, is not the media but the console. We offered the same videos in both the VR headgear and the Samsung – but only one could honestly be described as immersive. So rather than ask visitors “Would you prefer the VR experience, or not?” we only felt we began to get unbiased answers when we asked “Would you prefer immersive 360 video or social 360 video?” Each term speaks to the strength of the experience.

 


Microbiomes, Museums, and Minecraft

I am a Minecraft Mentor. Part of my duties is to write about our experiences on the Minecraft Education Edition Blog, the official Minecraft blog for educators. Based on reports by Hannah Jaris, AMNH Senior Coordinator in Youth Initiatives, and a program evaluation by Chris Vicari, I prepared the following post, highlighting what we learned by collaborating with high school students to create a map and associated YouTube videos within a course on microbiology. You can also check out the new page we created on our web site, within the OLogy section, which gives you the opportunity to download and play the map yourself, featuring not only the youth’s videos but also one of Museum staff playing the game with their children.

Microbiomes, Museums, and Minecraft

“Now, everyone walk into the head and then teleport to the stomach…” and so began the second and final summer week of the Minecraft & Human Microbiome program here at the American Museum of Natural History. For the previous six days, 14 students had been using Minecraft to take a deep dive into several of the themes and concepts presented in our temporary exhibit, the Secret World Inside You.

How?

In a Minecraft map custom-made to resemble a human body, of course! Once in this map youth in the program explored the diversity of microbes across the human body, role-played as white blood cells and antibiotics to protect the body from foreign invaders, and more.

During the second week of the program the youth spent the majority of their time designing and building activities that focused on a microbial topic of their choosing. Trips to the exhibit, visits to Microbiology Labs, and feedback from scientists and their peers were all incorporated into their designs before the summer portion of the program came to an end.

Last fall they returned to develop YouTube-bound Let’s Play videos and other educational resources to accompany this microbial Minecraft world. These videos, along with new ones made by Museum scientists, are now available at the Museum’s web site, OLogy. Along with the Let’s Play videos, the web page offers the entire Minecraft map as well, so anyone can play Minecraft, learn about the human microbiome, and post their own Let’s Play videos.

As educators, we were excited to have the opportunity to understand the impact Minecraft could have on informal science learning. We entered with a number of questions:

Could Minecraft be used to consolidate science content into digestible activities to leverage student understanding of class topics in an approachable manner?

In what ways can educators utilize the game’s building mechanics to develop educational activities and embed instructional content that pair with in-class learning goals to support student learning?

When introducing microbiology topics (e.g., microbe scale, microbe diversity and environmental influence, etc.) to a class, in what ways could designing simulated in-game activities that connect to science topics strengthen students’ understanding of the course content?

How would pairing Minecraft with in-class microbial simulations and a design-based final project allow students to direct their own learning and utilize the game’s features to support knowledge construction?

Working with Chris Vicari, an outside evaluator who observed and interviewed the students, and analyzed both their Minecraft activities and their Let’s Play commentaries, we learned that:

Minecraft Let’s Play videos served as evidence of youth’s deep understanding of science content

Video recordings of the students playing their own activities as they narrated the experience (e.g., discussed goals, purpose of the activity, connection to science, etc.) provided rich descriptions of the microbiology material represented in their activities. Students also emphasized a clear understanding of its connection to real-world science.

Minecraft is a flexible tool that can support learning

Minecraft’s creative building mechanics enabled teachers to design a custom Minecraft world resembling the human body with five separate microbiology activities that paired with in-class learning goals to support student content knowledge construction.

Minecraft helped students connect to microbiology

Pairing the Minecraft activities with direct instruction and reflection exercises helped teachers maintain focus on the course material and learning objectives. As a result, students clearly articulated their microbiology knowledge as they described how their in-game experiences fit within a broader microbiology context.

Minecraft helped students articulate microbiology knowledge

By probing student content knowledge and requiring them to illustrate connections between the course material and the Minecraft activities, students articulated their microbiology knowledge via rich descriptions of their experiences and used microbiology terms to describe their in-game actions.

Minecraft helped students understand the iterative design process

Playtests and teacher feedback strengthened student understanding of the design process and how ideas can change over time. This was exemplified by their engagement and experience using Minecraft as they willingly continued improving, testing, and iterating on their designs throughout the course.

Students found Minecraft to be fun and engaging

Despite the structured class environment, students often asked if more time could be dedicated to playing the Minecraft activities or if they could be more challenging. They also spent hours designing their own activities without requiring breaks.

Of course, there were still many challenges when using Minecraft within the course. Designing activities for instruction can be incredibly challenging and often requires a teacher with a lot of experience or an outside consultant. Students also require guidance to make meaningful connections between their experiences in Minecraft and the in-class learning content and activities.

While extensive preparation, setup, and knowledge are needed to effectively leverage Minecraft for education, such effort can be worth pursuing to provide enriching science learning opportunities to our students.

 
