June Update: Miniature Golf, Martian-style (or, adventures in mixed reality)

This post is part of an ongoing monthly series that focuses on our current efforts in the Museum’s Science Bulletins team to create and test prototypes of Hall-based digital interactions using AR and VR with our scientists’ digital data, and to share some of the lessons we learn along the way.

If you’re like me, you know the Curiosity rover is on Mars, sending back some amazing photos. But I had no idea that many of those photos were being used to create topographical maps at resolutions previously available only in a scientist’s dreams. In June we ported that data into a Microsoft Hololens to explore how our visitors might feel walking across the red planet… while playing golf.

a Curiosity selfie, on Mars

Mars, however, was not where this project began. Originally, we were focused on the moon. We have a small section in our Hall of the Universe dedicated to our little Luna, with signage, a metal globe, and a floor scale that displays your weight on the moon.

Moon exhibit

That scale became our inspiration. Think about it a moment – the design of the experience is so elegant. You stand on it, because you already know what to do with a scale, and all it says is “Your weight on the moon” while a red digital number appears on the display. And that’s it. It’s like an intellectual pile of Lego blocks that challenges you to put it all together to make sense of your experience, which most people do standing up straight, head down, reading the display. Hmm? The number I see on the display is NOT my weight. What’s going on? Comparing the two – what I know with what I see – I realize I weigh less on the moon. That might make me wonder why. Maybe I know that gravity is less on the moon, and maybe I realize that is because the moon has less mass than the earth. Or maybe I don’t. But in an instant the scale tossed me a challenge, and I can then choose whether or not to accept it.
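For the technically inclined, the arithmetic behind that red number is a one-liner. A minimal sketch in Python, assuming the scale simply multiplies your weight by the ratio of lunar to terrestrial surface gravity (the standard values; I have no idea how the actual scale is implemented):

    EARTH_G = 9.81  # Earth's surface gravity, m/s^2
    MOON_G = 1.62   # the moon's, about 1/6 of Earth's

    def weight_on_moon(earth_weight):
        # Weight scales linearly with surface gravity.
        return earth_weight * MOON_G / EARTH_G

    print(round(weight_on_moon(150), 1))  # 150 lbs on Earth -> 24.8 lbs on the moon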

It can be quite wonderful watching visitors go through all of the above, again and again, in just a few seconds, and noting how they often bring the people around them into the experience.

Your weight on the moon

So that’s where we started: What if you took the elegance of the scale and expanded it through digital tools, allowing the visitor to see the moon all around them and then explore how gravity differs there (perhaps by tossing a ball)? Working with Nathan Leigh, a Postdoctoral Fellow in the Museum’s Department of Astrophysics, we explored a number of game options. A tossed ball? Basketball? Throwing a ball into a bucket? It turned out a nice arc to the ball helps to illustrate some of the gravitational differences, and that led us to golf… Mooniature Golf!

We had presumed we had access to good lunar surface data – and we do – but positioning at the human scale required a level of resolution we just couldn’t acquire. Luckily, the lunar scale is only one of a number of scales sending our visitors around the universe. And that’s when we learned about our interplanetary photographer, Curiosity, and considered our Mars scale. That’s how Mooniature Golf turned into Martian Golf.

We wanted to hew close to the existing Hall experience – you look at the scale and you’re invited to make a comparison between what you already know and what it tells you. After a number of over-elaborate concepts, we came up with the idea for a single golf hole, on the Martian surface, next to Curiosity itself.

For the user interface, we went with the Hololens clicker. We wanted something physical that could be swung like a golf club, with a simple user interface. So: click and hold to start the swing, then let go to end the swing, sending the ball flying. To bring home the educational objective, the ball now flew in two arcs.
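For the curious, the physics of that comparison is easy to sketch. Here I’m assuming the two arcs contrast Earth’s gravity with Mars’s (my reading of the design), and the swing speed is invented for illustration:

    import math

    EARTH_G = 9.81  # m/s^2
    MARS_G = 3.71   # m/s^2, about 38% of Earth's

    def drive_distance(speed, angle_deg, g):
        # Ideal projectile range on level ground, ignoring drag
        # (a fair simplification in Mars's thin atmosphere).
        theta = math.radians(angle_deg)
        return speed ** 2 * math.sin(2 * theta) / g

    swing = 20.0  # m/s -- a hypothetical speed read off the clicker gesture
    print(f"Earth: {drive_distance(swing, 45, EARTH_G):.0f} m")  # ~41 m
    print(f"Mars:  {drive_distance(swing, 45, MARS_G):.0f} m")   # ~108 m

The same swing carries the ball roughly two and a half times as far on Mars, the same compare-what-you-know-with-what-you-see move the scale makes.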


Gutsy Video Review by Zee Garcia

Thanks to Eric Teo for pointing me to this wonderful new Dice Tower video review of our card game, Gutsy, by Zee Garcia. Check it out below.


Gamifying the Museum with NYU Game Center Graduate Students

This Spring, the American Museum of Natural History and the NYU Game Center partnered to create a classroom-in-residence at the Museum for a course entitled “Designing for Museums.”

Students partnered with different departments at the Museum to create prototypes, both digital and physical, of new playful experiences and games that educate as well as entertain. With the Museum acting as a client providing feedback and guidance, the students created a wide range of prototypes designed to further the Museum’s learning goals.

They developed six prototypes. I invited the students to describe their projects and share some visual assets.

AstronoME: This card game helps players understand the different techniques that astronomers actually use to learn more about objects in our universe!

DinosAR: Help a modern day bird learn about his ancestors in this AR scavenger hunt!


Food or Foe: By allowing you to see the world through the eyes of several different sea creatures, Food or Foe helps players understand why it’s so easy for animals to confuse food, such as jellyfish, with trash that is harmful for them to eat, like plastic bags.

Food or Foe

Snacky: Insert yourself into the exhibits and make your friends jealous with this AR selfie app!

Look Up: Using cranks and gears, Look Up shows you what the night sky looks like in New York City during different times of year.

Skeleton Closet: Skeleton Closet is an interactive Augmented Reality exhibit using the Google Tango. In the exhibit, users learn about the skeleton of a whale by piecing one together. Users can manipulate and interact with digital objects that seem to exist in the physical world in this educational and engaging experience.


Our work in the Verge: “20,000-year Old Artifacts, 21st Century Technology”

A new piece came out a few days ago in the Verge, “20,000-year-old artifacts, 21st century technology: Museums are turning to virtual reality, apps, and interactive experiences to keep tech-savvy visitors engaged“. It’s a lovely overview of how a number of NYC-based museums are taking on this topic.

The work of my museum shows up in a number of places. Below I’ll highlight the work from my area, Science Bulletins:

In nearly two decades working at the American Museum of Natural History, Vivian Trakinski, director of the museum’s Science Bulletins, has witnessed the evolution of visitor experiences firsthand. Originally hired to produce short science documentaries, Trakinski now spends most of her time working on data visualizations in a variety of digital formats.

“When I came here [in 1999], we were focused on video,” she says. She still produces videos, but says that “now, we are focusing on more immersive and interactive platforms […] People want to be able to curate their own content. People want to be engaged in the creation of it.”


Trakinski’s team is currently working on a number of augmented reality prototypes that will allow visitors to more actively engage with the museum’s specimens and datasets, including an immersive AR experience of what it would be like to play golf on Mars, using data from the Mars Reconnaissance Orbiter’s Context Camera. Her team also took a CT scan of a Mako shark and created an AR experience in which visitors can look through a Google Tango tablet or a stereoscopic AR headset, see the scanned skeleton overlaid on top of the museum’s actual shark model, and make the shark swim or bite.

“It’s not a passive experience where we’re telling you something,” says Trakinski. “[Visitors] are actually creating the learning through the interaction with this real artifact of science.”

And then later on:

For Trakinski and her work on data visualization, the future revolves around “communal creativity,” like open-source projects that elicit involvement from partner institutions and outside developers. She cites the Museum of Natural History’s current involvement in the NASA-funded project OpenSpace — an open-source data visualization software to communicate space exploration to the general public — as an example of a growing movement.

“I think sharing resources, sharing knowledge, open-source software development, customization, [and] using common tools is something of a trend that I would see driving all of our work forward in a communal context,” she says.

I recommend reading the entire piece here.


April Update: A Virtual Shark You Can Hold in Your Hand

This post is part of an ongoing monthly series that focuses on our current efforts in the Museum’s Science Bulletins team to create and test prototypes of Hall-based digital interactions using AR and VR with our scientists’ digital data, and to share some of the lessons we learn along the way.

This past year we’ve been exploring how our eyes, and sometimes our ears, can be invited to play Let’s Pretend: imagine you are seeing a CT scan of a shark in front of you, or imagine you are hearing the HVAC in the Big Bone Room. Our hands, so far, have been left out of this virtual party: imagine you can touch the shark, or pick up that dinosaur fossil. We’ve used HTC Vive’s controllers to manipulate things – imagine you can click here to make the weevil turn transparent – but we haven’t had a tool that invites visitors to (pretend to) touch augmented objects with their own hands.

That is, until now.

Family Game Night, Public Programs, 3/31/2017

One of the products coming out of this year’s Consumer Electronics Show was the Holocube, from Merge:

We knew its developers (a chunk of the team that worked with us on augmented reality experiences such as MicroRangers now works at Merge). When we received our developer version, and access to the SDK, we were excited to learn what we might find if we took the same digital specimens we’ve been porting into platforms such as Hololens, Tango, and Vive into something designed for a visitor to hold in her hand.

The cube is designed to work with their Merge VR viewer – a snazzy, museum-friendly Google Cardboard-style device – but can also work with an unadorned mobile device. With a nice weight in your hand, and slightly squishy like the free swag you often find at a conference, the cube offers a visual cue to the mobile device’s camera. The app decides what to do with what it sees. In other words, the Holocube is dumb – just a collection of prettified QR codes – with all the intelligence residing in the app. And that’s what makes it so compelling – the technology is invisible to the user (like the paper cards in Disney World’s Sorcerers of the Magic Kingdom). And the code can always be updated, the possibilities limited only by our imagination (and resources).
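That “dumb cube, smart app” split is easy to sketch in code. A toy version of the lookup at its core; the names and the stubbed-out detector below are mine, not Merge’s SDK:

    # Map each fiducial marker ID to an experience. The cube carries no
    # data of its own, so updating this table is a pure software change.
    EXPERIENCES = {
        0: "shark circling a rock column",  # cube as controller
        1: "the cube IS the shark",         # cube as specimen
        2: "microfossil that splits open",
        3: "bat skull",
    }

    def detect_marker_id(camera_frame):
        # Stand-in for real fiducial detection (an ArUco-style detector
        # would go here); we pretend the shark face is in view.
        return 1

    def experience_for(camera_frame):
        return EXPERIENCES.get(detect_marker_id(camera_frame))

    print(experience_for(camera_frame=None))  # the cube IS the shark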

We started with our mako shark and created two different experiences. The first surrounds the cube with a rock column, which the user can rotate or turn over, as the shark ominously circles round. In this example, the user is not holding the shark but rather using the Holocube as a device to control its movement. In the second experience, the cube IS the shark – and invites the visitor to play with it, as one would with a wooden block. You can turn the shark upside down, move it in the air like a toy to eat your friend, or move it through the camera to reveal the layers within the CT scan. We also added some touch features – click on the screen (not the cube, which would probably be better) to watch the jaws open and close.

The third experience comes from our recent youth course on microfossils. The youth went into our collections and researched some previously unstudied forams (each the size of a grain of sand). One of their CT scans was turned into a digital specimen you can hold and, with a click, look inside as it separates into two halves. And the final experience is a bat skull, which, like the previous two digital specimens, you can observe and interact with through physical manipulation.

Below is a short video of all four, on my desk:

It took just a few days to code the app but, once it was up and running, we took it out to the Hall of Biodiversity, where we just happen to have the mako shark overhead and a bat on the wall. We set up an iPad on a stand between the two and invited passers-by to “hold a shark in their hand.”

After months of anxiously handing over devices that cost hundreds, if not thousands, of dollars to children, it was quite a relief to watch them fight over a block of foam. And yes, people loved it. When you work with a new piece of technology, you need to spend time and energy learning how to operate it. But everyone knows how to “operate” a block. Its design is an invitation to play, and that’s what people did. They picked up the Holocube and marvelled at the digital specimens in their hand. And while they played with the specimen or its animations we offered facilitation that connected the toy in their hand back to the scientist who produced it, the tools they used to create it, and the research questions they used it to explore.

We tried other directions as well – like accessing the front camera, rather than the back, so you could see yourself with the object, as well as making a smaller one-cubic-inch cube (thank you 3D printer!) to see if children preferred the shorter distance. But after observing and interviewing around 150 people, here are some of the key lessons we took away from this round of prototyping:

  • HANDABLES ARE COMPELLING: Okay, it’s not a real word (at least not yet) but visitors LOVE “handables” – augmented objects you can hold in your hand.
  • HANDABLES ARE INTUITIVE: It was very satisfying to offer visitors an experience with a high level of innovation but a low learning curve, as its interaction design builds intuitively on visitors’ prior knowledge of working with blocks.
  • PLAY IS ENGAGING: As much as visitors enjoyed the moment of designed discovery – the shark swimming around your hand, the microfossil that opens – they were equally engaged, if not more so, with their ability to simply explore the specimen through non-directed play.
  • UNADORNED ASSETS ARE EDUCATIONAL: Offering the object on its own, with context provided by live facilitation, provided visitors with a direct line to achieving our intended learning objectives.

Family Game Night, Public Programs, 3/31/2017



Using Mobile VR to Convey WONDER: An Interview with Sara Snyder, the Chief of the Media and Technology Office at the Smithsonian American Art Museum

Below is my most recent post on DMLcentral. You can read it here or just continue below:

Last year I was gob-smacked on a trip to D.C. by the temporary WONDER exhibit at the Renwick Gallery (and wrote about it here). Last fall I was excited to see the Gallery release a mobile VR version of the now-closed exhibit. I reached out to Sara Snyder, the Chief of the Media and Technology Office at the Smithsonian American Art Museum, to learn how and why it was developed.

Sara, thank you for joining us today! Why don’t we start by introducing your museum (the Smithsonian American Art Museum) and your department (the Media and Technology Office)?

When people think of the Smithsonian, they often think of the big museums on the mall, but the Smithsonian American Art Museum (SAAM) and its branch museum, the Renwick Gallery, belong to the “off-mall” contingent of Smithsonian destinations. SAAM shares a grand, historic building—the old Patent Office—with the National Portrait Gallery, in the Penn Quarter neighborhood.  The Renwick Gallery, just under a mile away, is a fabulous little jewel of a building hidden on the stretch of Pennsylvania Avenue better known for another tourist destination, the White House.

In the Media and Technology Office (MTO) we manage SAAM and the Renwick’s websites, blog, and social media accounts, and we lead emerging media projects, such as our current experiments in VR.  We produce all of the in-house video and live streams for the SAAM YouTube channel, and also provide day-to-day IT support for the museum’s staff.  In addition, we oversee the Luce Foundation Center, an innovative visible storage space within SAAM.  For a fairly small department, we Media and Technology staff wear a lot of hats!

For sure! To be frank, I’ve spent my life visiting museums in D.C. but had never heard of the Renwick. Then EVERYONE I knew told me your WONDER was the D.C. exhibit not to be missed. In fact, when I saw it last May, I visited it twice – once on my own, when I was in town for a conference, and then again that same week, once my family had joined me. I did NOT want them to miss it. For those who couldn’t make it, how do you even begin to describe what they missed?

Ha, you are not alone!  For many years, the Renwick was something of a hidden gem, a place known primarily to D.C. locals, or devotees of craft, but not, perhaps, on the top of a tourist’s “must-see” list.  Then, in 2013, the museum building closed for a two-year renovation.  While it was closed, then-curator Nicholas Bell conceived of the idea to invite contemporary American artists to completely take over the nine galleries in the building, an unprecedented opportunity for the Renwick to reinvent itself as a 21st-century destination for art lovers.

The result was the WONDER exhibition, a magical, immersive experience unlike anything people had ever seen.  As Nicholas said in the introductory video, the artists took everyday objects that you wouldn’t necessarily expect to see in an art museum—tires, index cards, sticks, string—but “pulled them together in such a way as to completely amaze you.”

It’s true. I was amazed.

As you experienced, the show had incredible word of mouth and social media exposure, which led to huge attendance figures.  Visitors of every generation truly were overcome by a sense of wonder, and people came back (as you did), again and again.

So let’s shift over to the virtual reality app, Renwick Gallery WONDER 360. Did you know from the beginning you’d be creating this app? How’d it come about?

We had no idea we’d end up creating the app!  Our energy back in 2015 was focused on producing video content, launching a refreshed Renwick website, and on re-orienting our social media strategy towards Instagram.  However, it was fortuitous that in 2015, VR hit the mainstream, and hardware and software for producing and publishing VR experiences became much more accessible and affordable than it had ever been before, putting it within reach for even a non-profit art museum.  We knew that WONDER was special, and we longed for a way to preserve the experience for posterity.  That same year, MTO staffer Carlos Parada made some contacts with an innovative startup called InstaVR at the SXSW interactive conference, and with their help, we realized that we would be able to shoot, create, and publish Renwick Gallery WONDER 360 using our own equipment and staff, and without the huge budget that an outside contractor would have required.

Was the decision to make the images 360 photos versus a 360 film of the exhibit motivated more by aesthetics or technical constraints?

It was definitely because of technical, practical, and budgetary constraints.  We would have loved to have done video capture…or even better, full 3D scanning and photogrammetry. But that just wasn’t possible, given our resources and incredibly compressed timeframe.  The full show was only up for six months, and the galleries were almost never empty, so we were limited to shooting before opening hours.  I’m actually still amazed that we pulled it all off!

What have you learned from WONDER 360, both through producing it and seeing how visitors are using it, that will inform your future uses of the medium?

My takeaway from producing the app is really the same as my takeaway from seeing the success of the WONDER exhibition: content is everything.  The app has such good reviews because the artworks represented within it are beautiful and astounding.  I don’t want us to employ a new technology—now or in the future—just for technology’s sake.  I want us to employ VR in the future because it is the right tool for the job, and because it enables our visitors to more fully enjoy and appreciate American art.  This is something I want to hold onto as we enter our next phase with more robust, gaming-quality VR.

If you knew then what you know now, and if the decision to make the app had been part of the initial design of WONDER, how might the VR experience have been designed differently? And how might it have been integrated into the experience of the exhibit itself (not just offered as a digital postcard, a virtual memento) in the way visitors’ photography was also incorporated?

Looking back, perhaps the VR app could have had more features, or contained more variety of photographic angles.  And if it had been available earlier, we certainly could have promoted it during the exhibition or in the galleries—something we did not have the time or budget to do.  But the app wasn’t intended to be, and never could have been, a substitute for the real life exhibition.  The whole point of the show was to be present, and to have the emotional experience of being dwarfed by the scale and juxtaposition of materials in the physical installations.

The truth is, I actually sort of like the fact that there wasn’t obvious technology incorporated into most of the galleries (save what visitors carried in their own pockets) because it meant that the focus was on the experience of being present in a room with an amazing artwork and a bunch of strangers.  Why look away from that gorgeous rainbow to tap on some kiosk or stare at a screen?  We did incorporate social media into one screen in a central space, but I think that feature was only interesting because it was so organic and unfiltered, coming from the minds of other visitors.

WONDER was a show that people loved to experience together.  VR isn’t social yet, so that specific technology just couldn’t deliver on the power of sharing the same way that Instagram could.  Instagram let people show their friends what they were seeing, and it looked amazing, which is why it, not VR, was ultimately the defining technology for the WONDER show.


Lessons Learned in the Iterative Design Process with AR Constellation

Next week at the annual American Alliance of Museums conference in St. Louis, I’ll be presenting with John Durrant, Marco Castro and moderator Lizzy Moriarty to “demystify VR content development and offer attendees the chance to get their hands on some VR tech.” In advance of the session (on Monday, May 8, at 8:45 am) I was asked by the Center for the Future of Museums to share a preview of some of the session, which you can read here or below.

Last fall we launched a new initiative at the American Museum of Natural History in New York City: develop recommendations for engaging visitors with modern science practices by adding digital layers to permanent halls. What this looks like on the ground is working with one of the Museum’s scientists (we have over 200) and then turning their digital specimens (CT scans, genomic data, astronomical observations) into a digital asset we can port into a variety of digital tools to be tested with the public.

For example, we tried various ways of using digital astronomical data to explain the three-dimensional nature of constellations. When people look up at the night sky, all the stars seem to lie in a single plane, all the same distance from Earth. In fact, stars occupy a vast three-dimensional space—each a different distance from our planet. If you could change your perspective by flying off Earth to somewhere else in space, changing the distance and angle between yourself and each of the stars, you would see Orion “distort”—in other words, the 2D picture we create by drawing imaginary lines from star to star would change shape.

See how long it took me to explain that? We wanted to learn if we could use augmented reality (AR) or virtual reality (VR) to get visitors there faster. Working with a slice of our Digital Universe database, we created a digital asset that simulates a number of constellations, like Orion. Then we tested a variety of ways for people to interact with this digital simulation of space.
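Before the play-by-play, the underlying geometry is simple enough to sketch numerically. The star positions below are invented, not Digital Universe data; the point is that the same three stars project to a different 2D shape from a different vantage point:

    import numpy as np

    # Three made-up star positions (x, y, z in light-years). The key fact:
    # they sit at very different distances along our line of sight.
    stars = np.array([
        [ 100.0,  200.0,  600.0],
        [-150.0,  100.0, 1300.0],
        [  50.0, -100.0,  700.0],
    ])

    def constellation_shape(observer):
        # Pinhole projection along z: the flat picture this observer
        # would draw by connecting the stars with imaginary lines.
        rel = stars - observer
        return rel[:, :2] / rel[:, 2:3]

    print(constellation_shape(np.zeros(3)))                    # the shape from Earth
    print(constellation_shape(np.array([400.0, 0.0, 200.0])))  # same stars, new shape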

1: THE TANGO EXPERIENCE: In our Hall of the Universe (HoU), visitors viewed a virtual Orion constellation on a Tango handheld device, which they could move forward and backward to see the constellation’s shape and lines change. Tango is like an iPad with one key difference: it knows where it sits in the space around it. This means, for AR, you can place augmentations in space and then use your Tango to walk around or (in the case of stars) among them.

RESULT: Failure. Visitors did not leave having learned that stars sit in a 3D space. We concluded that was in part because constellations are too abstract (the points in a constellation represent real stars but the lines between are just pretend). But what if we made the experience less abstract, something you’d notice was different if its shape changed, like your face?

2: YOUR FACE IN SPACE: It’ll take too long to explain here, but humor me and presume there’s a good reason why we have a computer app that lets you turn your face into a constellation. We took the app into the HoU and invited visitors to map points around a live image of their face, switch the star names on and off, and then rotate their perspective around the new constellation.

RESULT: Not there yet. On one hand, it seemed to work to ask visitors to use their own face as a metaphor for stars in relationship within a constellation; lowering the level of abstraction was effective. However, many visitors experienced the rotation of the constellation image as the constellation itself rotating (which is incorrect), not as their own perspective shifting through space. What if visitors’ misunderstandings about stars in 3D space were just being reinforced when shown through a 2D medium? And what if, instead, we offered them a 3D medium?

3: ENTER HOLOLENS: Visitors now viewed a virtual Orion constellation (as well as three smaller constellations) through a Hololens device. (Hololens is an augmented reality headset that enables the wearer to see, and navigate, computer-generated images or landscapes.) Walking back and forth, and around, visitors viewed the constellations as existing in a 3D space, with a backdrop of real stars.

RESULT: It worked! While the first iteration failed to communicate the core idea, and the second iteration was successful half the time, the Hololens version worked EVERY time. As the visitor walked around or through the constellation, the stars “moved” at different speeds, depending on their distance from the observer. But could we raise the bar, designing the experience to require a visitor wearing Hololens to interact with other visitors, making it a social experience?
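Those different apparent speeds are just parallax. A tiny sketch, with invented distances, shows why: for the same sideways step, a star twice as far away appears to shift about half as much:

    import math

    step = 1.0  # the observer's sideways step (arbitrary units)
    for distance in (500.0, 1000.0, 2000.0):
        shift = math.atan2(step, distance)  # apparent angular shift, radians
        print(f"distance {distance:6.0f} -> shift {shift:.5f} rad")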

4: ESCAPE THE PLANET: Over a four-day design sprint, co-developed with Museum youth learners, we created a prototype of an escape room with an astro-theme: Escape the Planet. (Escape rooms are physical adventure games that require players to solve a series of puzzles.) One of the puzzles required a group of players to use a UV flashlight to find clues in posters that identified one particular constellation. A different player, wearing the Hololens loaded with a new version of the AR Constellation experience, had to look at the name of the closest star to Earth within that constellation (also known as its catalog number) so another player could record those digits and use them to open a padlocked case.

RESULT: Hololens users playing Escape the Planet maintained social contact with the rest of their group, and appeared to have done so more often and with more intensity than during the first three iterations. But was this due to features of the new version of AR Constellation, or due to placing it within a game?

5: STANDALONE AR: The week after testing Escape the Planet, we took this latest version of AR Constellation in Hololens back out into the Hall, specifically to watch how users interacted (or not) with the others in their party.

RESULT: Most visitors using the standalone AR said that wearing the Hololens did not affect the way they related with the people around them (in other words, they ignored them and focused on the AR Constellation experience). This is in stark contrast with the Escape the Planet players who not only reported a “heightened desire to cooperate” but expressed a need to share.

And so it goes. Now, a few months later, we are porting a number of our digital specimens into a holdable AR device called a Holocube. Do you think visitors would like to hold a constellation in their hand? It might be time for a new iteration…

