Hi. I am Barry Joseph. I am the Associate Director for Digital Learning at the American Museum of Natural History. This is where I talk about my adventures @AMNH and explore issues related to digital media and museum-based learning. I feature original interviews, thought pieces, and highlights from my work and that of my colleagues at the AMNH. Find me on Twitter (@MMMooshme).
This Spring, the American Museum of Natural History and the NYU Game Center partnered to create a classroom-in-residence at the Museum for a course entitled “Designing for Museums.”
Students partnered with different departments at the museum to create prototypes, both digital and physical, of new playful experiences and games that educate as well as entertain. With the museum acting as a client providing feedback and guidance, these students created a wide range of prototypes designed to further the museum’s learning goals.
They developed six prototypes. I invited the students to describe their project and share some visual assets.
AstronoME: This card game helps players understand the different techniques that astronomers actually use to learn more about objects in our universe!
DinosAR: Help a modern day bird learn about his ancestors in this AR scavenger hunt!
Food or Foe: By allowing you to see the world through the eyes of several different sea creatures, Food or Foe helps players understand why it’s so easy for animals to confuse food, such as jellyfish, with trash that is harmful for them to eat, like plastic bags.
Snacky: Insert yourself into the exhibits and make your friends jealous with this AR selfie app!
Skeleton Closet: Skeleton Closet is an interactive exhibit in augmented reality using the Google Tango. In the exhibit, users learn about the skeleton of a whale by piecing one together. Users can manipulate and interact with digital objects that appear to exist in the physical world in this educational and engaging experience.
A new piece came out a few days ago in The Verge, “20,000-year-old artifacts, 21st century technology: Museums are turning to virtual reality, apps, and interactive experiences to keep tech-savvy visitors engaged.” It’s a lovely overview of how a number of NYC-based museums are taking on this topic.
The work of my museum shows up in a number of places. Below I’ll highlight the work from my area, Science Bulletins:
In nearly two decades working at the American Museum of Natural History, Vivian Trakinski, director of the museum’s Science Bulletins, has witnessed the evolution of visitor experiences firsthand. Originally hired to produce short science documentaries, Trakinski now spends most of her time working on data visualizations in a variety of digital formats.
“When I came here [in 1999], we were focused on video,” she says. She still produces videos, but says that “now, we are focusing on more immersive and interactive platforms […] People want to be able to curate their own content. People want to be engaged in the creation of it.”
Trakinski’s team is currently working on a number of augmented reality prototypes that will allow visitors to more actively engage with the museum’s specimens and datasets, including an immersive AR experience of what it would be like to play golf on Mars, using data from the Mars Reconnaissance Orbiter’s Context Camera. Her team also took a CT scan of a Mako shark and created an AR experience in which visitors can look through a Google Tango tablet or a stereoscopic AR headset, see the scanned skeleton overlaid on top of the museum’s actual shark model, and make the shark swim or bite.
“It’s not a passive experience where we’re telling you something,” says Trakinski. “[Visitors] are actually creating the learning through the interaction with this real artifact of science.”
And then later on:
For Trakinski and her work on data visualization, the future revolves around “communal creativity,” like open-source projects that elicit involvement from partner institutions and outside developers. She cites the Museum of Natural History’s current involvement in the NASA-funded project OpenSpace — an open-source data visualization software to communicate space exploration to the general public — as an example of a growing movement.
“I think sharing resources, sharing knowledge, open-source software development, customization, [and] using common tools is something of a trend that I would see driving all of our work forward in a communal context,” she says.
I recommend reading the entire piece here.
This post is part of an ongoing monthly series focused on our current efforts in the Museum’s Science Bulletins team to create and test prototypes of Hall-based digital interactions, using AR and VR built on our scientists’ digital science data, and to share some of the lessons we learn along the way.
This past year we’ve been exploring how our eyes, and sometimes our ears, can be invited to play Let’s Pretend: imagine you are seeing a CT scan of a shark in front of you, or imagine you are hearing the HVAC in the Big Bone Room. Our hands, so far, have been left out of this virtual party: imagine you can touch the shark, or pick up that dinosaur fossil. We’ve used HTC Vive’s controllers to manipulate things – imagine you can click here to make the weevil turn transparent – but we haven’t had a tool that invites visitors to (pretend to) touch augmented objects with their own hands.
That is, until now.
One of the products coming out of this year’s Consumer Electronics Show was the Holocube, from Merge:
We knew its developers (a chunk of the team that worked with us on such augmented reality experiences as MicroRangers now works at Merge). When we received our developer version, and access to the SDK, we were excited to learn what we might find if we took the same digital specimens we’ve been porting into such platforms as Hololens, Tango, and Vive into something designed for a visitor to hold in her hand.
The cube is designed to work with their Merge VR viewer – a snazzy, museum-friendly Google Cardboard-style device – but can also work with an unadorned mobile device. With a nice weight in your hand, and slightly squishy like the free swag you often find at a conference, the cube offers a visual cue to the mobile device’s camera. The app decides what to do with what it sees. In other words, the Holocube is dumb – just a collection of prettified QR codes – with all the intelligence residing in the app. And that’s what makes it so compelling – the technology is invisible to the user (like the paper cards in Disney World’s Sorcerers of the Magic Kingdom). And the code can always be updated, with the possibilities limited only by our imagination (and resources).
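To make the design point concrete – all the intelligence in the app, none in the cube – here is a minimal, hypothetical sketch (the function names and specimen identifiers are illustrative, not from the actual Merge SDK): the app simply maps each detected marker face to a digital specimen, so updating the table updates the experience without ever touching the physical cube.

```python
# A hypothetical sketch of the "dumb cube, smart app" architecture.
# Each cube face is just a fiducial marker with an ID; the app decides
# what that ID means. Swap the table, and the same foam cube becomes
# a different experience.
MARKER_TO_SPECIMEN = {
    0: "mako_shark_ct_scan",
    1: "foram_ct_scan",
    2: "bat_skull",
}

def render_for_marker(marker_id):
    """Look up which digital specimen to draw over a detected marker."""
    specimen = MARKER_TO_SPECIMEN.get(marker_id)
    if specimen is None:
        return "no_overlay"  # unknown marker: show nothing
    return specimen

print(render_for_marker(0))  # mako_shark_ct_scan
```

In a real app the camera-tracking layer (the part that finds the marker and estimates its pose) would feed marker IDs into a lookup like this; the point is that the physical object never has to change.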
We started with our mako shark and created two different experiences. The first surrounds the cube with a rock column, which the user can rotate or turn over, as the shark ominously circles round. In this example, the user is not holding the shark but rather using the Holocube as a device to control its movement. In the second experience, the cube IS the shark – and invites the visitor to play with it, as one would with a wooden block. You can turn the shark upside down, move it in the air like a toy to eat your friend, or move it through the camera to reveal the layers within the CT scan. We also added some touch features – tap the screen (though tapping the cube itself would probably be better) to watch the jaws open and close.
The third experience comes from our recent youth course on microfossils. The youth went into our collections and researched some previously-unstudied forams (the size of a grain of sand). One of their CT scans was turned into a digital specimen you can hold and, with a click, look inside as it separates into two halves. And the final experience is a bat skull, which like the previous two digital specimens you can observe and interact with through physical manipulation.
Below is a short video of all four, on my desk:
It took just a few days to code the app but, once it was up and running, we took it out to the Hall of Biodiversity, where we just happen to have the mako shark overhead and a bat on the wall. Located between the two, we set up an iPad on a stand and invited passers-by to “hold a shark in their hand.”
After months of anxiously handing children devices that cost hundreds, if not thousands, of dollars, it was quite a relief to watch them fight over a block of foam. And yes, people loved it. When you work with a new piece of technology, you need to spend time and energy learning how to operate it. But everyone knows how to “operate” a block. Its design is an invitation to play, and that’s what people did. They picked up the Holocube and marvelled at the digital specimens in their hand. And while they played with the specimen or its animations, we offered facilitation that connected the toy in their hand back to the scientist who produced it, the tools they used to create it, and the research questions they used it to explore.
We tried other directions as well – like accessing the front camera, rather than the back, so you could see yourself with the object, as well as making a smaller one-cubic-inch cube (thank you 3D printer!) to see if children preferred the shorter distance. But after observing and interviewing around 150 people, here are some of the key lessons we took away from this round of prototyping:
- HANDABLES ARE COMPELLING: Okay, it’s not a real word (at least not yet) but visitors LOVE “handables” – augmented objects you can hold in your hand.
- HANDABLES ARE INTUITIVE: It was very satisfying to offer visitors an experience with a high level of innovation but a low learning curve, as its interaction design intuitively builds on visitors’ prior knowledge of working with blocks.
- PLAY IS ENGAGING: As much as visitors enjoyed the moment of designed discovery – the shark swimming around your hand, the microfossil that opens – they were equally engaged, if not more so, with their ability to simply explore the specimen through non-directed play.
- UNADORNED ASSETS ARE EDUCATIONAL: Offering the object on its own, with context provided by live facilitation, provided visitors with a direct line to achieving our intended learning objectives.
Using Mobile VR to Convey WONDER: An Interview with Sara Snyder, the Chief of the Media and Technology Office at the Smithsonian American Art Museum
Below is my most recent post on DMLcentral. You can read it here or just continue below:
Last year I was gob-smacked on a trip to D.C. by the temporary WONDER exhibit at the Renwick Gallery (and wrote about it here). Last fall I was excited to see the Gallery release a mobile VR version of the now-closed exhibit. I reached out to Sara Snyder, the Chief of the Media and Technology Office at the Smithsonian American Art Museum, to learn how and why it was developed.
Sara, thank you for joining us today! Why don’t we start by introducing your museum (the Smithsonian American Art Museum) and your department (the Media and Technology Office).
When people think of the Smithsonian, they often think of the big museums on the mall, but the Smithsonian American Art Museum (SAAM) and its branch museum, the Renwick Gallery, belong to the “off-mall” contingent of Smithsonian destinations. SAAM shares a grand, historic building—the old Patent Office—with the National Portrait Gallery, in the Penn Quarter neighborhood. The Renwick Gallery, just under a mile away, is a fabulous little jewel of a building hidden on the stretch of Pennsylvania Avenue better known for another tourist destination, the White House.
In the Media and Technology Office (MTO) we manage SAAM and the Renwick’s websites, blog, and social media accounts, and we lead emerging media projects, such as our current experiments in VR. We produce all of the in-house video and live streams for the SAAM YouTube channel, and also provide day-to-day IT support for the museum’s staff. In addition, we oversee the Luce Foundation Center, an innovative visible storage space within SAAM. For a fairly small department, we Media and Technology staff wear a lot of hats!
For sure! To be frank, I’ve spent my life visiting museums in D.C. but had never heard of the Renwick. Then EVERYONE I knew told me your WONDER was the D.C. exhibit not to be missed. In fact, when I saw it last May, I visited it twice – once on my own, when I was in town for a conference, and then again that same week, once my family had joined me. I did NOT want them to miss it. For those who couldn’t make it, how do you even begin to describe what they missed?
Ha, you are not alone! For many years, the Renwick was something of a hidden gem, a place known primarily to D.C. locals, or devotees of craft, but not, perhaps, on the top of a tourist’s “must-see” list. Then, in 2013, the museum building closed for a two-year renovation. While it was closed, then-curator Nicholas Bell conceived of the idea to invite contemporary American artists to completely take over the nine galleries in the building, an unprecedented opportunity for the Renwick to reinvent itself as a 21st-century destination for art lovers.
The result was the WONDER exhibition, a magical, immersive experience unlike anything people had ever seen. As Nicholas said in the introductory video, the artists took everyday objects that you wouldn’t necessarily expect to see in an art museum—tires, index cards, sticks, string—but “pulled them together in such a way as to completely amaze you.”
It’s true. I was amazed.
As you experienced, the show had incredible word of mouth and social media exposure, which led to huge attendance figures. Visitors of every generation truly were overcome by a sense of wonder, and people came back (as you did), again and again.
So let’s shift over to the virtual reality app, Renwick Gallery WONDER 360. Did you know from the beginning you’d be creating this app? How’d it come about?
We had no idea we’d end up creating the app! Our energy back in 2015 was focused on producing video content, launching a refreshed Renwick website, and on re-orienting our social media strategy towards Instagram. However, it was fortuitous that in 2015, VR hit the mainstream, and hardware and software for producing and publishing VR experiences became much more accessible and affordable than it had ever been before, putting it within reach for even a non-profit art museum. We knew that WONDER was special, and we longed for a way to preserve the experience for posterity. That same year, MTO staffer Carlos Parada made some contacts with an innovative startup called InstaVR at the SXSW interactive conference, and with their help, we realized that we would be able to shoot, create, and publish Renwick Gallery WONDER 360 using our own equipment and staff, and without the huge budget that an outside contractor would have required.
Was the decision to make the images 360 photos versus a 360 film of the exhibit motivated more by aesthetics or technical constraints?
It was definitely because of technical, practical, and budgetary constraints. We would have loved to have done video capture…or even better, full 3D scanning and photogrammetry. But that just wasn’t possible, given our resources and incredibly compressed timeframe. The full show was only up for six months, and the galleries were almost never empty, so we were limited to shooting before opening hours. I’m actually still amazed that we pulled it all off!
What have you learned from WONDER 360, both through producing it and seeing how visitors are using it, that will inform your future uses of the medium?
My takeaway from producing the app is really the same as my takeaway from seeing the success of the WONDER exhibition: content is everything. The app has such good reviews because the artworks represented within it are beautiful and astounding. I don’t want us to employ a new technology—now or in the future—just for technology’s sake. I want us to employ VR in the future because it is the right tool for the job, and because it enables our visitors to more fully enjoy and appreciate American art. This is something I want to hold onto as we enter our next phase with more robust, gaming-quality VR.
If you knew then what you know now, and if the decision to make the app had been part of the initial design of WONDER, how might the VR experience have been designed differently? And how might it have been integrated into the experience of the exhibit itself (not just offered as a digital postcard, a virtual memento), the way visitors’ photography was also incorporated?
Looking back, perhaps the VR app could have had more features, or contained more variety of photographic angles. And if it had been available earlier, we certainly could have promoted it during the exhibition or in the galleries—something we did not have the time or budget to do. But the app wasn’t intended to be, and never could have been, a substitute for the real life exhibition. The whole point of the show was to be present, and to have the emotional experience of being dwarfed by the scale and juxtaposition of materials in the physical installations.
The truth is, I actually sort of like the fact that there wasn’t obvious technology incorporated into most of the galleries (save what visitors carried in their own pockets) because it meant that the focus was on the experience of being present in a room with an amazing artwork and a bunch of strangers. Why look away from that gorgeous rainbow to tap on some kiosk or stare at a screen? We did incorporate social media into one screen in a central space, but I think that feature was only interesting because it was so organic and unfiltered, coming from the minds of other visitors.
WONDER was a show that people loved to experience together. VR isn’t social yet, so that specific technology just couldn’t deliver on the power of sharing the same way that Instagram could. Instagram let people show their friends what they were seeing, and it looked amazing, which is why it, not VR, was ultimately the defining technology for the WONDER show.
Next week at the annual American Alliance of Museums conference in St. Louis, I’ll be presenting with John Durrant, Marco Castro and moderator Lizzy Moriarty to “demystify VR content development and offer attendees the chance to get their hands on some VR tech.” In advance of the session (on Monday, May 8, at 8:45 am) I was asked by the Center for the Future of Museums to share a preview of some of the session, which you can read here or below.
Last fall we launched a new initiative at the American Museum of Natural History in New York City: develop recommendations for engaging visitors with modern science practices by adding digital layers to permanent halls. What this looks like on the ground is working with one of the Museum’s scientists (we have over 200) and then turning their digital specimens (CT scans, genomic data, astronomical observations) into a digital asset we can port into a variety of digital tools to be tested with the public.
For example, we tried various ways of using digital astronomical data to explain the three dimensional nature of constellations. When people look up at the night sky, all the stars seem to lie in a single plane, all the same distance from Earth. In fact, stars occupy a vast three dimensional space—each a different distance from our planet. If you could change your perspective by flying off Earth to somewhere else in space, changing the distance and angle between yourself and each of the stars, you would see Orion “distort”— in other words, the 2D picture we create by drawing imaginary lines from star to star would change shape.
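The geometry behind this can be sketched in a few lines of code. This is a minimal illustration with made-up star positions, not real astronomical data: a simple pinhole projection shows how the same three stars trace out different 2D shapes from different viewpoints, and why nearby stars appear to shift more than distant ones.

```python
# A toy pinhole projection, with hypothetical star positions (x, y, z).
# These numbers are illustrative, not real Orion data. The observer
# starts at the origin; z is the distance along the line of sight.
stars = [(1.0, 2.0, 10.0), (-1.0, 1.5, 40.0), (0.5, -1.0, 25.0)]

def project(star, observer):
    """Project a 3D star onto the observer's 2D image plane."""
    x, y, z = (s - o for s, o in zip(star, observer))
    return (x / z, y / z)  # perspective divide: distant stars shift less

# The same stars, seen from Earth vs. from a viewpoint 5 units to the side:
from_earth = [project(s, (0.0, 0.0, 0.0)) for s in stars]
from_space = [project(s, (5.0, 0.0, 0.0)) for s in stars]

# The 2D "constellation" changes shape, because each star's apparent
# shift depends on its distance: the nearest star moves the most.
print(from_earth)
print(from_space)
```

Flying off Earth in this sketch is just changing the `observer` coordinates; the star at distance 10 shifts far more on the image plane than the one at distance 40, which is exactly why the familiar 2D shape of a constellation falls apart.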
See how long it took me to explain that? We wanted to learn whether we could use augmented reality (AR) or virtual reality (VR) to get visitors there faster. Working with a slice of our Digital Universe database, we created a digital asset that simulates a number of constellations, like Orion. Then we tested a variety of ways for people to interact with this digital simulation of space:
1: THE TANGO EXPERIENCE: In our Hall of the Universe (HoU), visitors viewed a virtual Orion constellation on a Tango handheld device, which they could move forward and backward to see the constellation’s shape and lines change. Tango is like an iPad with one key difference: it knows where it sits in the space around it. This means, for AR, you can place augmentations in space and then use your Tango to walk around or (in the case of stars) among them.
RESULT: Failure. Visitors did not leave having learned that stars sit in a 3D space. We concluded that was in part because constellations are too abstract (the points in a constellation represent real stars but the lines between are just pretend). But what if we made the experience less abstract, something you’d notice was different if its shape changed, like your face?
2: YOUR FACE IN SPACE: It’ll take too long to explain here, but humor me and presume there’s a good reason why we have a computer app that lets you turn your face into a constellation. We took the app into the HoU and invited visitors to map points around a live image of their face, switch the star names on and off, and then rotate their perspective around the new constellation.
RESULT: Not there yet. On one hand, it seemed to work to ask visitors to use their own face as a metaphor for stars in relationship within a constellation; lowering the level of abstraction was effective. However, many visitors experienced the rotation of the constellation image as the constellation itself rotating (which is incorrect), not as their own perspective shifting through space. What if visitors’ misunderstandings about stars in 3D space are just being reinforced when shown through a 2D medium? And what if, instead, we offered them a 3D medium?
3: ENTER HOLOLENS: Visitors now viewed a virtual Orion constellation (as well as three smaller constellations) through a Hololens device. (Hololens is an augmented reality headset that enables the wearer to see, and navigate, computer generated images or landscapes.) Walking back and forth, and around, visitors viewed the constellations as existing in a 3D space, with a backdrop of real stars.
RESULT: It worked! While the first iteration failed to communicate the core idea, and the second iteration was successful half the time, the Hololens version worked EVERY time. As the visitor walked around or through the constellation, the stars “moved” at different speeds, depending on their distance from the observer. But could we raise the bar, designing the experience to require a visitor wearing Hololens to interact with other visitors, making it a social experience?
4: ESCAPE THE PLANET: Over a four-day design sprint, co-developed with Museum youth learners, we created a prototype of an escape room with an astro-theme: Escape the Planet. (Escape rooms are physical adventure games that require players to solve a series of puzzles.) One of the puzzles required a group of players to use a UV flashlight to find clues in posters that identified one particular constellation. A different player, wearing the Hololens loaded with a new version of the AR Constellation experience, had to look up the name of the closest star to Earth within that constellation (a name that is also its catalog number) so another player could record those digits and use them to open a padlocked case.
RESULT: Hololens users playing Escape the Planet maintained social contact with the rest of their group, and appeared to have done so more often and with more intensity than during the first three iterations. But was this due to features of the new version of AR Constellation, or due to placing it within a game?
5: STAND ALONE AR: The week after testing Escape the Planet, we took this latest version of the AR Constellation in Hololens back out into the Hall, specifically to watch how users interacted (or not) with the others within their party.
RESULT: Most visitors using the standalone AR said that wearing the Hololens did not affect the way they related with the people around them (in other words, they ignored them and focused on the AR Constellation experience). This is in stark contrast with the Escape the Planet players who not only reported a “heightened desire to cooperate” but expressed a need to share.
And so it goes. Now, a few months later, we are porting a number of our digital specimens into a holdable AR device called a Holocube. Do you think visitors would like to hold a constellation in their hand? It might be time for a new iteration…
Does Digital Media Have a Place in a Hands-On Science Learning Space? An Interview with Rebecca Bray on the National Museum of Natural History’s Q?rius
Below is a re-blog of my most recent post on DMLcentral.
Rebecca Bray is the Chief of Experience Development at the Smithsonian’s National Museum of Natural History in Washington, D.C. I reached out to her to learn about how the Museum developed and now runs its innovative Q?rius (pronounced “curious”) space, opened in 2013 as an interactive and educational lab with microscopes, touch screens, interactive activities and a “collection zone” housing over 6000 different specimens and artifacts visitors can handle.
In our conversation below we explored their design process, the role of youth learners, the pros and cons of integrating digital media into a hands-on learning space, and more.
Rebecca, welcome to Mooshme. So how do you describe Q?rius?
Q?rius is a space, an interactive space, in the museum. We always said that it’s not an exhibit, right? It’s really an interactive learning space, designed mainly for 10 to 18-year-olds and their loved ones.
The space itself is really very flexible. Everything there is on wheels, except for a large collection space, and even in there everything is very modular and flexible; but it’s really meant to be a space for visitors to do hands-on interactive work around the specific kinds of natural history science that our researchers do. And it’s also a space for the education team to experiment with new ways of interacting with the public; we think of it as our learning lab as well – we do a lot of experimenting and testing of new ideas in there.
When you think about it overall, what would you identify as some of the key innovations you took on?
So many things! When we were designing the space we had a lot of conversations about this target audience of 10 to 18-year-olds. The outside exhibit design company that we were working with was saying at first, “Oh, you should have a lot of technology, because teens love technology.” But after we did some front-end studies we saw how much people really value their encounters with the authentic objects of the museum. So we said, “Let’s actually de-emphasize the screens in the space and have the focus be on the objects and doing things with the objects.”
And so, we did that.
But we also at the same time were trying to do a bunch of stuff with screens. We wanted them to see a video of the scientists in the field and we thought the screens could really lead people through the activity. So, you would have a touch screen and then you could kind of click through and it would give you instructions about how to interact with the objects in the room.
After making that, and putting it out there, and having it in the space, we pretty quickly realized that it wasn’t working. People couldn’t do both – they didn’t want to both interact with the screen and with the objects. It was just too much.
So, we actually stripped away even more of the screens from the space. We made the activities more about the objects themselves, with very simple paper instructions, and then kept the screens for very particular purposes, which was really to access more information about the objects themselves, separate from the activities. So, that was an important learning. But it’s still an ongoing question about this balance between screens and non-screen experiences.
What else do you need to consider when thinking about integrating screens?
Making sure that we’re designing for social experiences between groups. Physically designing the space, so that people can fit around things, in the right way. Making sure that they’re big enough for people.
I think at some point in the design process we thought about having everybody carry around an iPad that would be like their personal digital Field Book as they go around the space, collecting objects. But again, we found that they weren’t social enough, and we also had this challenge of object versus screen.
Yet you found another way to do the Field Book, which my daughter enjoyed when we visited.
Yeah, so if I had a million dollars we would redesign the Field Books. And we actually knew that even going in. We knew we didn’t have enough money to do it perfectly, but we decided we had enough to pilot something that would still be an enjoyable experience. We have lots of visitors who really like it, and they collect their digital collections into Field Books and look at them at home; but yeah, I mean, I think with software you need to have enough money to continuously upgrade it as you learn more.
What role do youth play in supporting the space?
We’ve had them continuously involved in giving feedback. We have over a hundred teen volunteers, and some of those have been leveled up to be captains. They help us develop activities and programs and give us feedback on a lot of stuff that happens in the space.
How do you design new activities for the space?
Since we use an iterative design process for the activities that we build, we’ll work with a scientist and our design team of educators to develop some very rapid prototypes. And then we’ll go out and do testing and observations. We have developed some assessment instruments that we use to test things and to see, really to understand, how visitors are interacting with it and how to move along a spectrum of understanding. We’ll test things at least 10 times and collect a lot of data about how people are interacting with it and then we’ll use that to refine something as we go along.
A big part of this has been creating a culture of rapid prototyping and testing within our department and helping to spread that to other departments, to test everything that we do in a pretty deep way, beyond just going to visitors and asking, “Do you like this title for this activity?” It’s a difficult thing. It takes a lot of time and you really need to train your staff to know how to do it.
In fact, when we were in the conceptualization stage, we were able to go into the museum and do a bunch of testing of the kinds of activities that we knew we wanted to do. And it was so useful. I wish that we had actually been able to do more of that, to really spend some time actually making the stuff that we thought was going to be in the space and getting it in front of visitors and being really reflective and really thoughtful about how they were responding to it.