Dr Peter Morse, Fulldome Artist, Hobart, Tasmania, 17 November 2017
PENNY VAILE: Good afternoon and welcome to the National Museum of Australia. My name is Penny. I have the great privilege of working here in our public programs team and being able to put these sorts of lecture series together. It is also my great privilege today to introduce Dr Peter Morse, who has probably one of the best job titles that I’ve ever heard – fulldome artist. So if you could please put your hands together and welcome Dr Peter Morse to the stage.
PETER MORSE: Terrific. Thank you very much and thanks for coming along today. I was certainly upstaged earlier on by my daughter who I think, hopefully, will not outshine my presentation here. I’ve got a lot of slides so I will try and move through them with great rapidity because it was a question of trying to discern how to pitch this talk, how technical I should be about things.
I’ll cover a bit of my background working in the medium of fulldome, which you will now be familiar with since you sat under a fulldome in the exhibition. I’ll talk a bit about DomeLab, which was a project initiated by Professor Sarah Kenderdine, formerly of UNSW and now EPFL in Lausanne in Switzerland. I’ll cover a few examples of different dome works. Some of you might think that fulldome is a relatively new medium. It’s actually been going on for quite a few decades now, obviously since the advent of planetaria, which date back to the 19th century and earlier, but digital fulldome is really something that’s emerged over the last decade or 15 years. I’ll give a brief technical overview of how fulldome works and touch upon what we call ‘computational photography’, which is where we don’t think of photographs as straight images from a camera anymore – you manipulate them with computers in all sorts of different ways. And then I’ll try to cover as much as I can of the cave experience and the art experience dome projects.
So the DomeLab is what’s known as a six-metre Zendome. Zendome is a German company, so it’s a negative pressure geodesic dome, and this is a super high speed time-lapse of it being set up. So there’s an external geodesic frame structure, and then the internal fabric is put on it and it’s wound up. It’s a very heavy thing; it fits in one shipping container and takes about a day to set up. There are eight projectors in there and it does true 4K resolution, which means that it’s 4096 pixels across in either direction and about 10,000 pixels around the circumference. And this was funded by an ARC (Australian Research Council) infrastructure grant led by Sarah Kenderdine. It’s a super high resolution system, much higher resolution than 4K TVs, for instance.
Now, it was initially installed in the Michael Crouch Innovation Centre at UNSW. This is a pre-visualisation I did in 2015, so that’s two years ago now, and the idea here was to show it installed in the Michael Crouch Centre and a couple of people standing there. This is to demonstrate what it would look like in situ looking at scientific data because the premise behind the dome project was that it would be used for scientific data visualisation and here we’ve got a couple of people looking at connectome modelling from the human brain. Now I developed, with Sarah and some other people, a kind of animated interface for navigating through different types of content on the dome. So you would see this projected over your head at the Michael Crouch Innovation Centre, and it would zoom in to fulldome video content, and the premise behind this is that we’d be able to navigate through different sorts of spaces, and provide an interesting user experience for interacting with the dome. So it’s not just a passive movie watching experience.
This is an example of a couple of works. This is by Sarah Kenderdine looking at architectural ceilings in India. Now, here are a few computer renderings of the DomeLab installed in different places. The dome is capable of being a horizontal dome but can also be angled to about 45 degrees, so you can change the way in which audiences view the dome and the content in the dome. The orientation of the dome has a profound impact upon how you experience content in different ways. At the top, we’ve got the human connectome model. Below that, some imagery from Antarctica, which I shot about ten years ago. And to the right, an image of it installed in the, I think, Prince of Wales Museum in India. This was a pre-vis before it went off touring to India. That’s another important aspect of the dome: it’s an experimental dome and a travelling system, so it can be packed up and toured around the country, and go overseas, and provide content and experiences to people in a wide variety of different places. It’s highly adaptable.
These are pre-visualisations I did of the dome installed in the National Museum of Australia gallery, when this project was first getting up, to get an idea of where it would be situated and its scale in relation to the rest of the gallery space. It’s a very large structure, as you can appreciate, so you need lots of ceiling space and room around it, and there’s quite a lot of logistical planning that needs to be taken into account when you’re planning to install it and show content on it.
So, as I mentioned, I’ve been working in fulldome content for over ten years now. This is a movie that I shot in Antarctica in 2007, so exactly ten years ago. And this was, at the time, super high resolution content. And this is work that I’ve done with the Mawson’s Huts Foundation: sailing over the seas to Antarctica, taking lots of cameras with me and documenting the Mawson’s Huts heritage site. This is about a seven-minute movie. I can’t play all of these movies for you, unfortunately, but it will give you an idea of the range of content that we can generate for these dome systems. So with this you really feel like you are visiting Antarctica and the huts and so on.
Other content that we can generate for this is another work that I did in 2009 that was generated from global ocean data. What we’ve done here is use satellite data from what’s known as the GEBCO dataset, the General Bathymetric Chart of the Oceans. Normally you’d see this with the Earth underneath it. So what we’ve done here … I was working with a colleague of mine, Paul Bourke, who I’ve worked with for many years in visualisation, and we extracted or subtracted the Earth from this GEBCO dataset, and we end up with the global ocean. So this enables us to appreciate that the ocean is, in fact, one ocean around the world. All of these notions of the Atlantic and the Pacific and so on are just human fictions, really, and this dome system enables us to see this in an interesting new way.
Earlier work that I’ve done also concerning Aboriginal astronomy … I’ll let you listen to this.
[plays video]
NARRATOR: The wind, a lovely breeze blows through the trees, and blows around me. You hear the old ancestors whisper through you, telling you stories. And then at night he’d look up at the sky, he’d show us, even Dad used to show us. You’ll see a shape of an emu. And when it’s just about coming close to laying eggs, the emu head would be sticking up and he’d be feeding. You sit on your own and you look at the circle of things, and a story comes to you. When you go to sleep, you’ll hear the old people whispering to you, telling you about that story, about that thing that you’ve seen.
PETER MORSE: So that was a movie I made for a group, an Aboriginal community in Carnarvon in Western Australia, in 2012. And that was installed in a museum there, running on a permanent loop in a cultural visiting centre. And that was really the first work that I did becoming aware of the nature of Aboriginal astronomy, working with groups there and so on, and getting into rock art sites. [points to slide] So this is an early panorama I did of Mandu Mandu cave, where, interestingly enough, my sister, who is an archaeologist, discovered a 30,000-year-old shell bead necklace, one of the oldest shell bead necklaces from the Pleistocene era in Western Australia. So there’s a bit of a family connection with looking at this historical material.
Other work that I’ve worked on is a fulldome movie looking at astrophysical visualisation, working with astrophysicists, where Paul Bourke worked on computational visualisations of dark matter, and we worked with Alan Duffy, the astrophysicist, to create the fulldome movie Dark, which plays in hundreds of planetaria around the world. It’s been playing for years now, in 15 different languages, in all the different countries that you see there. That’s something I hope will happen with the fulldome movies that you’ve seen here in the exhibition: these are full, super high resolution planetarium movies that can play in planetaria around the world, so you can reach hundreds of thousands of people, which is an interesting proposition. You can hear Alan speaking here [audio plays].
Now, besides working with fulldome, I became very interested in revisiting all the photographic material that I’d created, and working in the area of photogrammetry, and this is where we talk about computational photography. Photogrammetry is a way of taking thousands of photographs and processing them by computer using special algorithms called SfM (structure-from-motion) algorithms, and from that you can generate 3D geometry. So the photographs are not just 2D flat images anymore; they enable you to create 3D scenes in which you can then put virtual cameras and fly around. So, for instance, I’ve got a movie that I’m working on currently, revisiting material that I shot in Antarctica, where I can fly around Mawson’s huts three dimensionally with virtual cameras, and zoom in on details, and reconstruct artefacts from the environment. It’s a way of revisiting material that you may have shot years ago and discovering new types of content and ways of using that.
So let’s talk about fulldome and how it works in relation to the human vision system. The dome is a hemisphere, as you’re aware: 180 degrees by 180 degrees across, 360 degrees around the circumference. The human field of vision, as indicated there, is about 220 degrees horizontally and approximately 150 degrees vertically, so the dome is actually taller than your vertical field of view. It’s designed to completely immerse you: you can look around in the dome and it provides that sensation of being in places, and it can also provide strong feelings of motion sickness if things move too rapidly. So you’ve got to compensate for these perceptual characteristics of the human vision system, the vestibular system, all this sort of stuff, to make sure that your audiences don’t feel ill or fall over. That means you tend to make content that’s more stately and moves in a slower way, and this creates complexities for things where people say ‘Well, I want to suddenly zoom in on this part, or zoom out of that part of an image,’ and so on. There are all these characteristics that you learn about through experience of working with the medium.
As I’ve mentioned, fulldome really derives from planetarium systems. It was based in astronomy, looking up at the night sky, so it’s perfect for that. But when you apply it to looking at other types of content, you’ve got what I think of as the horizon problem, which is … if you imagine that you’re looking straight up at something, and you’re doing it in a naturalistic way where you’re matching one for one between the dome and the environment that you’re looking at, then your horizon is going to sit exactly on the spring line of the dome. We compensate for that by either disregarding naturalism and saying ‘Well, let’s tilt cameras such that the horizon appears across the dome in some way.’ Or we can shoot with cameras with very wide fisheye lenses, more than 180 degrees, like 220 degree lenses. We’re up to 250 degree lenses, which actually see behind themselves, so that’s a way of bringing horizon content into the dome. You can synthesise lenses through software, all this sort of stuff. And, of course, you can then tilt domes too, so that’s another way of compensating for this horizon issue and getting all of this content available to the viewer.
The fisheye view is a bit confusing initially, when you start working in it as a medium. You have a fisheye lens, which is this 180 degree lens, and when you take a photograph like this, what’s in the centre of the photograph is not what sits right before the viewer’s eyes. What’s before the viewer’s eyes, generally, if they’re lying down with their feet towards the bottom of the image here, is actually at the bottom of the image there. And what’s at the centre of the image is right above their head, at the centre of the dome, if it’s a horizontal dome. If it’s an angled dome, it will be more towards their field of view. So you’ve got to get your head around how you think about working with images, and that has implications for panning movements across scenes or how you might zoom into something. Technically, in a sense, you can’t really zoom in on domes. You can move cameras back and forward, but if you zoom you change from 180 degrees to more constrained angles, which are not technically correct in relation to dome projection.
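To make that geometry concrete, here is a minimal sketch in Python, assuming an equidistant fisheye mapping (distance from the image centre proportional to the angle away from the zenith) and a 4096-pixel square dome master. The function name and the azimuth convention are illustrative assumptions, not the production pipeline.

```python
import numpy as np

def direction_to_fisheye(azimuth_deg, elevation_deg, size=4096):
    """Map a view direction to pixel coordinates in a square 180-degree
    dome master, assuming an equidistant fisheye: distance from the image
    centre is proportional to the angle away from the zenith."""
    zenith_angle = 90.0 - elevation_deg          # 0 at zenith, 90 at horizon
    r = (zenith_angle / 90.0) * (size / 2.0)     # pixels from the image centre
    theta = np.radians(azimuth_deg)              # assumed convention: azimuth 0
    x = size / 2.0 + r * np.sin(theta)           # points down the image, i.e.
    y = size / 2.0 + r * np.cos(theta)           # towards the viewer's feet
    return x, y

print(direction_to_fisheye(0, 90))   # zenith -> image centre (2048, 2048)
print(direction_to_fisheye(0, 0))    # horizon -> the edge of the image circle
```

The zenith lands at the centre of the frame and the horizon on its edge, which is exactly why what sits before a reclining viewer's eyes is at the bottom of the fisheye image, not its centre.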
So we end up working in a spherical space. You’re all familiar with Mercator’s map of the world? This is one solution for unwrapping a sphere. How do you make something spherical into a rectilinear image to work with, or how do you get rectilinear images to map onto spheres. These are quite complex mathematical problems, and they’re solved through projection processes in geometry. With a full sphere, you have a Mercatorian or an equirectangular image like this. You can see the image appears to be distorted at the top and appears to be distorted at the bottom. It’s not really distortion, it’s just projection. It’s how the pixels would unwrap in this sort of space. With a dome, you basically have the top half of that image there, or the bottom half of the image, depending on what you’re looking at. You only capture half the sphere.
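As a small illustration of that unwrapping, here is a hedged sketch of the standard equirectangular convention: longitude runs across the width, latitude down the height, and a horizontal dome uses only the top half of the image. The function name and axis conventions are assumptions for illustration.

```python
import numpy as np

def equirect_to_direction(u, v, width, height):
    """Convert equirectangular pixel coordinates to a unit direction vector.
    Longitude spans 360 degrees across the width; latitude spans 180 degrees
    down the height, so the top row is the zenith and the bottom the nadir.
    A horizontal dome only uses rows 0 .. height/2 (the upper hemisphere)."""
    lon = (u / width) * 2.0 * np.pi - np.pi        # -pi .. +pi
    lat = np.pi / 2.0 - (v / height) * np.pi       # +pi/2 at top, -pi/2 at bottom
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)                                # y is "up" in this sketch
    z = np.cos(lat) * np.cos(lon)
    return np.array([x, y, z])

print(equirect_to_direction(2048, 0, 4096, 2048))  # top row -> straight up
```

The apparent stretching at the top and bottom of an equirectangular image falls out of this mapping: a single pole direction is smeared across an entire row of pixels.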
When we work with dome systems, we start off with what are known as fulldome masters, 4K images as PNG image files, and you end up with thousands and thousands of those because you’re playing at 30 frames per second, and these days increasingly at 60 frames per second, so that gives you an enormous data rate with these images. We’re looking at about 200 gigabytes of data for ten minutes. 200 gigs used to be a prodigious amount. Of course, these days, you can just buy it at the local post office; a 200 gig drive is not terribly expensive. But the key thing is, it’s a lot of data to transfer between software and over networks and all this sort of stuff. Storage is getting more amenable.
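The arithmetic behind those numbers is straightforward; here is a back-of-envelope sketch (the per-frame PNG size is an assumption, since PNG compression varies with content):

```python
# Rough back-of-envelope for the data rates quoted above.
fps = 30                                # 60 fps doubles everything below
minutes = 10
frames = fps * 60 * minutes             # 18,000 dome-master frames
mb_per_frame = 11                       # assumed size of one 4096x4096 PNG
total_gb = frames * mb_per_frame / 1024
print(f"{frames} frames ~ {total_gb:.0f} GB")   # ~193 GB, roughly 200 GB
```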
Then, you take your dome masters, which are these thousands of single images, and for the DomeLab system we have to chop that up into eight image streams at WQXGA resolution, or 2560 x 1600, and that amplifies the amount of data again, so ten minutes suddenly turns into 350 gigabytes of data. What we end up with is eight separate videos, chopped up and projected through the eight projectors, and then they’re blended together on the dome so you get this seamless, super high resolution image projected on the hemisphere in front of you. And of course, my job is to make this all appear invisible to you.
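A naive sketch of that slicing step is below, assuming the Pillow imaging library and a plain 4 x 2 grid of WQXGA tiles. The real DomeLab pipeline warps and edge-blends each channel to its projector's geometry, so this is only an illustration of why the data grows: eight 2560 x 1600 tiles hold roughly twice the pixels of one 4096 x 4096 master.

```python
from PIL import Image

def slice_dome_master(path, cols=4, rows=2, tile_w=2560, tile_h=1600):
    """Illustrative only: cut a dome master into a 4x2 grid of WQXGA tiles.
    A production system would instead warp and blend per-projector channels,
    with overlapping regions, rather than crop a simple grid."""
    master = Image.open(path).resize((cols * tile_w, rows * tile_h))
    for r in range(rows):
        for c in range(cols):
            box = (c * tile_w, r * tile_h, (c + 1) * tile_w, (r + 1) * tile_h)
            master.crop(box).save(f"tile_r{r}_c{c}.png")

# slice_dome_master("dome_master_000001.png")  # hypothetical frame name
```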
We need lots and lots of computer resources to work with this, so we have things called ‘render farms’, which are, basically, lots of computers networked together, with many terabytes of storage space. You need fast computers. You need super high speed networking to transfer all this data around. It really takes a lot of effort to work in this as a medium. It’s very different to conventional film production or video production. Audio is done in the 5.1 standard … these various settings, and so on, to create a surround sound environment.
Now, when you’ve shot all your material, or you’ve got your animation content, and so on, you want to bring this together and this is where it gets kind of tricky because most editing packages are made to work in a rectilinear space with an X,Y coordinate system. But, we’re working in a spherical space, which is quite different and quite challenging. It’s only really in the last few years, since the commercial advent of 360 degree video and VR becoming more popular, that software packages have become capable of working with spherically mapped content. And that means that how the images map in the space, and how you can do dissolves and special effects and lighting tricks, all this sort of stuff, is becoming gradually easier, but it’s still not easy by any means.
So let’s move on now to Walinynga, Cave Hill, where we were tasked with documenting the site. When I say ‘we’, Paul Bourke came along, and there was a whole team of people. Margo Neale and Christiane Keller, who are in the audience today, were there organising that amazing effort to get all the people out there, and all the gear, because you travel with an enormous amount of equipment.
I was with my colleague Chris Henderson, who I’ve worked with for a number of years doing fulldome time-lapse work, astrophotography specifically, and we flew from Tassie to eventually get to Alice Springs, and then out to Cave Hill, which is, as indicated here, right in the middle of nowhere – which, for somebody who enjoyed working in Antarctica, is the kind of place I like to work. It’s wonderful working in these remote areas. This is a bit of scenery from where we were. You can see, in the bottom left there, a lot of gear: driving a long way, setting up tents, and then carrying heavy camera gear and time-lapse equipment across these fairly rugged landscapes. And Chris and I had to separate ourselves from the rest of the crews out there because we were shooting astrophotography, so any illumination is going to destroy the material that you shoot. Fortunately, there’s not much habitation out there – no streetlights and cars and all this sort of stuff – so we managed to get some really nice material shot.
I was up at dawn and dusk shooting content on my trusty 5D Mark IV camera, a 30 megapixel camera with a fisheye lens. With time-lapse, you can’t rush this sort of stuff. It takes time. It’s documentary. You’ve got to just be out there, and what you get is what you get. You hope the weather’s going to be good, that you’re not attacked by wild animals, that your batteries last – there’s a million different parameters to worry about in these beautiful, remote landscapes. And of course, you can’t rush back to go and get bits which you forgot to take with you, so you’ve got to be very prepared.
We generated a lot of astrophotographic time-lapse sequences. There are a few technical details on the left hand side there. You’re looking at, say, 2000 to 6000 frames over a night, depending upon the exposure times that you’re taking these images at. And if you look at the two left hand circles there, it’s the same sequence, just processed differently. A big part of the process that you go through is post-processing these images and making them into smooth, seamless movie sequences. We use a whole range of different steps – custom software, custom workflows, and so on – to get this into shape for fulldome projection.
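Why a night yields a few thousand frames is simple arithmetic; here is a sketch with illustrative numbers (the night length and exposure times are assumptions):

```python
# frames per night = night length / seconds per exposure (ignoring the
# interval overhead between shots, which shrinks the count a little)
night_hours = 10
for exposure_s in (6, 10, 15):
    frames = night_hours * 3600 // exposure_s
    print(f"{exposure_s:>2} s exposures -> {frames} frames")
# 15 s exposures -> 2400 frames; 6 s -> 6000: roughly the quoted range
```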
The other important part of the assets for these movies were panoramas. We take these equirectangular panoramas, which are made either from multiple fisheye shots or from lots of rectilinear shots stitched together to form a full 360 degrees, and each one is about 12,000 x 6000 pixels, so they’re about 72 megapixel shots – super high resolution. Everything has to be super-duper high resolution for a 4K dome system. There’s just no way of working with low resolution material. Paul was flying a drone around as well, shooting the landscape, and stitching all those images together. And then the sky you see above Cave Hill there, on the top right hand side, is a synthetic sky put in – otherwise, you’d have the rotor blades of the drone showing in the dome. So, in a sense, these look naturalistic and immersive, but they’re kind of synthetic as well. Computer-driven content.
The other thing that Paul worked on were these tjanpi figures, which you would have seen in the exhibition – these large spinifex grass and wool sculptures, beautiful things – that Paul photographed and constructed 3D models from. You can see on the top right hand side there an example of a photogrammetric model, so it’s something that exists only in the computer. It has the texture applied to it so it looks kind of realistic, but underneath it’s this sort of grey computer mesh, which we then worked on with an animation company called Zero One (Brad May) to articulate and animate these characters. And that in itself is a whole saga of complexity.
With the photogrammetry – and I’m talking about the first Cave Hill movie now – what we have to reconstruct the cave is hundreds and hundreds of photographs. If you look at the top right hand side, the blue stuff you see there indicates a camera point of view. And so we have 788 cameras taking photographs and generating 2.8 million points of data. Then, when that’s all constructed together into the photogrammetric model, we have about ten million faces and five million vertices. So these are enormous computer models with enormous amounts of data, and you need really fast computers to sift through them and make sure that they’re accurate, that the textures are looking good, that things aren’t out of focus, and all this sort of stuff. Again, this is why you need to be extremely finicky and particular about it looking okay. Because anything you show on the dome, if it’s got a flaw in it, it will just stand out like a sore thumb. So you have to be very particular about it.
Now, then. That’s just making a photogrammetric model. We then take that into a 3D visualisation package – here I’m working with Cinema 4D, a 3D animation piece of software. You can take a model in, then you can put virtual cameras in there and animate where they move around, like they would in the real world, and you put a bunch of other stuff in there. If you look at the two large, rectangular, black bits, for instance, they are floating black bits of fog which I stuck down the back of the cave, because the photogrammetric model didn’t completely cover the cave, and so if I put a sun in this world, you’d have shafts of light shining through, which is not like the real cave at all. So basically it’s a bit of stage management to construct a cave-like scene, and then we render this out with a virtual fisheye lens.
So, my world. You can see me having a nervous breakdown here, where I’ve got a whole bunch of people. There’s a lot of politics, and people management, and upward management and sideways management, and this sort of thing going on. We’ve got the traditional owners – it’s their story that we’re telling, and we have to be respectful towards that, so there are issues of cultural sensitivity. There’s the real world. The Museum is concerned about archaeological accuracy, and so on, and maintaining good relations with the traditional owners. And there’s Sarah, the head honcho of the dome project. I suppose the big arrows might indicate ‘It’s all your fault if it’s not looking good’, and the small arrows indicate things which are less stressful for me. There’s a whole lot of things going on there, to give you an idea.
So now we’ve got the Cave Hill movie, and I’ll just quickly run through the different scenes in this movie here. [shows series of images] We start off, with the circles that you see on the right hand side of the screen there, with what’s known as a ‘planet shot’. This is something that I had actually wanted to do in a fulldome show for years. It’s kind of an inverse fisheye. If you look at what’s happening here, we’re flying down towards Cave Hill, but then it mysteriously everts and changes perspective and, using a process of mathematical remapping, we change the perspective of an equirectangular image from looking down at the ground to looking up at the sky, and then map that into a hemispherical dome environment. And that gives us a lovely transition from an overview to a ground-based view of the environment.
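One way to sketch that kind of remapping is a stereographic ‘little planet’ projection of an equirectangular frame, with the projection scale animated over time. The sketch below, using numpy and OpenCV for the resampling, is an illustration of the idea under those assumptions, not the actual production code.

```python
import numpy as np
import cv2  # OpenCV, used here just for fast image resampling

def little_planet(equirect, out_size=2048, scale=1.0):
    """Stereographic remap of an equirectangular panorama: one pole lands at
    the centre of the output and the scenery wraps around it. Animating
    'scale' (and swapping which pole is central) gives an everting,
    planet-shot-style transition like the one described above."""
    h, w = equirect.shape[:2]
    ys, xs = np.mgrid[0:out_size, 0:out_size].astype(np.float32)
    dx = xs - out_size / 2.0
    dy = ys - out_size / 2.0
    r = np.sqrt(dx**2 + dy**2) / (out_size / 2.0)     # 0 at centre, 1 at edge
    theta = np.arctan2(dy, dx)                        # azimuth -> longitude
    lat = np.pi / 2.0 - 2.0 * np.arctan(r * scale)    # inverse stereographic
    map_x = ((theta + np.pi) / (2.0 * np.pi) * w).astype(np.float32)
    map_y = ((np.pi / 2.0 - lat) / np.pi * h).astype(np.float32)
    return cv2.remap(equirect, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_WRAP)
```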
And then composited into that is a time-lapse Milky Way sequence that I shot. These time-lapses obviously run all night, and it’s the luck of the draw, depending on how they came out. But, because of the wonderfully clear skies around Cave Hill, we got this absolutely magnificent shot of the Milky Way, and Venus, and the Moon. And then, gratifyingly, Stanley Douglas could talk to us about this in his native language. So we’ve got Stanley’s voice mixed into the soundtrack here just giving us the Aboriginal names of these particular celestial artefacts, which was great.
Then I’ll just narrate the circles, actually, because it would take too long to play through the movie. This is one of Paul’s panoramic shots of the exterior of Cave Hill. This was shot with the drone but processed here with a 230 degree virtual fisheye lens. So it’s completely synthetic: the sun is fake, and the birds are fake as well. And then, in this scene for instance, we’ve got all these nice, sparkly motes and sunbeams, and all these things are synthetic. Straight photographs can look a bit boring on the dome. You want to animate them and bring them alive a bit, so one of my aesthetic jobs was to put little touches of animation here and there, to bring them to life.
Here, we’re now inside the cave. We transition into the cave, and we can do a kind of zoom here. This is, strictly speaking, not mapped correctly for the dome, but the interesting thing is that the human sensorium is quite forgiving of these sorts of mappings in the dome. You’ve still got to be careful of exactly what you’re doing, though.
And then we move on to a series of fisheye stills, but all of these are heavily processed as well to bring out detail. It’s quite a dark environment, the cave. We shot in what’s known as HDR, High Dynamic Range, where you do multiple exposures. So you’re not just taking one photograph. For instance, this shot could be five photographs, which are composited together, and then multiple layers of those HDR images are re-composited to bring out the fine detail on the cave walls. So it’s all very complicated stuff that looks deceptively simple, like a straightforward photograph.
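The exposure-merging step described here is standard HDR bracketing; a minimal sketch using OpenCV's Debevec merge is below. The file names and exposure times are purely illustrative assumptions.

```python
import cv2
import numpy as np

# Five bracketed exposures of the same framing (hypothetical file names).
files = ["cave_1-60s.jpg", "cave_1-15s.jpg", "cave_1-4s.jpg",
         "cave_1s.jpg", "cave_4s.jpg"]
times = np.array([1/60, 1/15, 1/4, 1.0, 4.0], dtype=np.float32)
images = [cv2.imread(f) for f in files]

hdr = cv2.createMergeDebevec().process(images, times)    # 32-bit radiance map
ldr = cv2.createTonemapDrago(gamma=1.4).process(hdr)     # compress for display
cv2.imwrite("cave_hdr.png", np.clip(ldr * 255, 0, 255).astype("uint8"))
```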
And then from this part here – for all these shots I was also working very closely with the NMA archaeologists, who’d say ‘There’s this particular wall that we really want to look at; this is part of the story,’ and so on. So there was this constant to-ing and fro-ing between the material that I could produce and the material that they needed to have in the dome, and also Sarah’s concerns about the overall aesthetic look of it.
Here we’re in a photogrammetric model now, so we can fly around the cave with a virtual camera, with a virtual fisheye lens. The trees and bushes and sky that you see outside at the bottom are completely fake and computer generated, but from photographs that were taken at the site, so they look realistic. And again, the detail that you see on the cave surfaces here is all brought out by heavy image processing, to get that sort of detail while still maintaining a naturalistic sense to the whole thing.
And here we see the image in the centre, Wati Nyiru himself, now somewhat faded after being touched so many times – many fingers apparently eroding the paint surfaces and so on. What’s exciting about this kind of stuff is that it creates archive-quality recordings of environments that are exposed to the elements. So it’s a wonderful way of revisiting sites and seeing how they change over time, and I think these are important cultural assets for museums to be collecting at any historically significant site.
We move back now to still photographs with a gentle zoom in, and some glowy sunbeams and motes and things like that. And then – so I will end up narrating this whole movie for you – a time-lapse sequence of an ancient Illy tree, I believe that’s correct, near Cave Hill. Chris and I were camping away from other people, so we just wandered around the landscape and responded to it. You know, what was there, what was interesting to look at.
And then this was a shot I was particularly happy with, because this is an experiment where I took two opposing fisheye time-lapse cameras, 180 degrees each, and stitched them together into a fully spherical, 360 degree time-lapse moving through the desert, to give you a sense of what the desert environment was like. It’s this sort of beautiful, garden-like place, but still harsh. And then we finally ascend into the heavens at the end.
Now, the other movie was a major part of this. Sarah’s initial idea behind it was to look at these artworks, these paintings, and figure out a way of putting them onto the dome so you could really get intimate with these images, see what’s going on in them, and understand the iconography behind them, because to a naïve viewer they seem very abstract, but obviously within their communities and contexts they carry a lot of highly culturally specific information in their stories and their ways of understanding the world. So the dome is an interesting way of approaching that as a form of knowledge and translating it, I guess, for the viewer. If you see the funny, white, blobby thing – it’s not terribly clear what it is there – it’s a sort of iconographic explanation provided to me by Christiane, saying ‘This is this part of the painting, this signifies this, this is this rock hole,’ and so on. So that was an important part of the process, to get these things as accurate as possible.
Now, we have the interesting question here: how do you get a circular painting onto a dome? Because a circle is not a hemisphere; it’s a different shape. It’s kind of round, but it’s two dimensional as opposed to three dimensional, so there’s an issue there. If you put a circle onto a dome, it gets all stretchy. You’d have to pull it down, like a hat or a disc over your head, and all these bits get smeared out. So I had the wonderful and nightmarish opportunity to look at how to reproject these sorts of images into an equirectangular projection. They were provided to me by the NMA, very generously, at this super duper ultra-high resolution, so you’re working with four gigabyte images. One of them was 36,500 pixels square, which is just enormous. I mean, most software packages are not designed to work with images of this resolution, and graphics cards choke on them. And then I had to scale them up to equirectangular mappings and then downsample them and do all this stuff to them. And why do you need to do that? Well, you need to do that so you can then project it into a spherical space, put virtual cameras in there and move them around, and they look okay, so we can move from point to point with a narrative.
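One hedged way to think about that reprojection: treat the circular painting as a polar disc and unwrap it, angle to longitude and radius to latitude, so the centre of the painting sits at the zenith and the rim stretches around the spring line – exactly the ‘hat pulled over your head’ smearing described above. A sketch, with the function name and conventions my own assumptions:

```python
import numpy as np
import cv2

def disc_to_equirect(painting, out_w=8192, out_h=4096):
    """Unwrap a circular painting into the top half of an equirectangular
    image: painting centre -> zenith, painting rim -> the horizon. Pixels
    near the rim get stretched across the whole width, which is the
    smearing described above. The bottom half is left black."""
    h, w = painting.shape[:2]
    cx, cy, radius = w / 2.0, h / 2.0, min(w, h) / 2.0
    ys, xs = np.mgrid[0:out_h, 0:out_w].astype(np.float32)
    lon = (xs / out_w) * 2.0 * np.pi - np.pi      # longitude across the width
    colat = (ys / out_h) * np.pi                  # 0 at zenith .. pi at nadir
    r = (colat / (np.pi / 2.0)) * radius          # zenith->centre, horizon->rim
    map_x = (cx + r * np.cos(lon)).astype(np.float32)
    map_y = (cy + r * np.sin(lon)).astype(np.float32)
    return cv2.remap(painting, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)  # outside the disc: black
```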
Similarly, when we work with square paintings, we enter what I describe as ‘a world of pain’. Working with equirectangular projections is nightmarishly complicated stuff, not only because of the nature of the projection that you’re working with, but because you have to tessellate, or tile, the painting to extend it out to a spherical coordinate system seamlessly. You want to retain the integrity of the original image, and you want to be able to move around that image. There are all sorts of issues going on. Then, to make it even more exciting for me, we needed to animate that and put multiple layers on it, with indications and highlights of what’s occurring in the painting as we move through the narrative.
This is an example where part of the story is Wati Nyiru turning into a carpet python, and it raises the interesting question: what does a wooden carpet python look like when it flies? Well, I had no idea, and the animator had no idea, so we looked at carpet pythons on the internet, movies, that kind of thing. You don’t see flying snakes terribly often when you work with computers in a room somewhere. So it was a matter of working this out so it looked reasonably good, and then of course we had to animate that snake in a spherical coordinate system and composite it over the top. So these are extra layers of complexity that one is working with, matching it with what was provided to me by Zero One – and we were constantly on video conferencing or email, and so on, to work this kind of stuff out.
Another important thing was to highlight elements of the iconography of the paintings as we moved through them, so I went through many different iterations of styles in order to do this. You could either have a kind of boring spotlight moving across something, or ask ‘Is it okay to change the colour of the paintings?’ These are all examples of various iterations that I went through, working out what would look best. Because it’s not just a question of ‘Does this look okay?’ but ‘Does this look okay in the dome as well?’
Because one of the issues with the dome is a thing called cross bounce, where you’ve got multiple projectors projecting onto a hemispherical surface, and if for instance, you’re looking at something that’s quite dark, but you’ve got something quite light behind you, all this light will bounce off behind you and wash out the scene in front of you. So there’s constant management of lighting issues in the dome environment as well, and that’s why I went through processes of adjusting the illumination of these images, or the colour of them, and so on. To try and manage these cross bounce effects and enable us to attend to particular parts of the scene. And I think we resolved it fairly happily with the highlighted points and dropping the paintings back in levels of luminosity.
So, what we have here now – we’re talking about the art film – we start off with the tjanpi figures. That, in itself, was an enormously complicated process, with the photogrammetric figures rigged with what are known as inverse kinematic structures, or skeletons, which enable you to animate them. So you’ve got to put a little skeleton inside each figure, which has bones going up its arms and elbows and shoulders, and joints, and things like that, so you can make these characters move in particular ways. And that was achieved through motion capture work, where the Zero One team filmed a dancer who did a series of dances, and then those videos were translated into motion-capture points and mapped onto the figures such that they move like human figures. So there was a whole background process occurring there.
Then, in the art movie, we cut to the scene where we’re flying across Australia and visualising where all the songlines are. This was provided to me as Google Maps data, with a series of points showing specific songlines, and points of particular relevance to those songlines, which I would then map onto a high resolution image of Australia. This was a digital elevation model that I created specially for this project with data supplied by the Australian National University VisLab – a colleague of mine, Drew Whitehouse, and I’m very grateful to him for providing me with this ultra-high resolution data from the Geophysical Survey of Australia.
So this is a highly accurate model of Australia that we can fly into much closer than you’ll see on the dome, with textured satellite data draped over the top. So in the future, I can imagine revisiting the model and flying along different songlines and looking at different places. And then I had the interesting challenge of matching that with the painting of the songlines from 1995 – I’ve forgotten the painting’s name – that was to be overlaid with this accurate digital elevation model of Australia. And as you can imagine, a hand-painted painting is not accurate like a computer or satellite is, so there was a lot of gentle nudging to get these things to work together and create the revelation of the songlines stretching across the continent. And I can see lots of interesting future possibilities for doing some things with this for the dome, including perhaps some real-time content.
And then there are a bunch of other scenes that I’ll cover. Here, we move through various paintings and we have Wati Nyiru’s eyeballs watching the sisters as they travel across the landscape. And if you look at the bottom left hand side here, this is an interesting shot for me, because this demonstrates where we’ve reached the limits of resolution, on the dome, of the scans that were provided to me. Even though they’re super high-res, because you’re working with a super high-res system, I couldn’t actually get in closer than this to the paintings themselves without them starting to pixelate or break up, and we’re talking about massive, massive scans here.
So there are a number of tricks involved there, where we flew across the landscape and this waterhole actually detaches from the painting, flies up towards the viewer and fades out, and we transition into the final scene with Wati Nyiru, the snake wrapping around the scene, and behind that the astrophotographic time-lapse sequence that I shot at Cave Hill, and then the sisters finally flying away into the sky. Which, again, was a challenging animation experience, and there was a lot of to-ing and fro-ing between Sarah saying ‘I want them to be more fly-y, like this, and off in this direction,’ and Brad constrained by what he could do, and then the context of the dome – all this sort of stuff. So it’s really a team effort, these kinds of shots, to pull them together. A lot of discussion involved, and then also coming back and talking with the NMA crew to see how they think it’s going. So I think we came up with a happy result. And then, this is the final movie.
[plays movie]
NARRATOR: A story of the seven sisters. The sisters are travelling.
PETER MORSE: At this point we’ve got the narrative being spoken by Shelley Morris, which was recorded in a recording studio in Sydney and then this was mixed into the ambient soundscape by Cedric Maridet, who is a colleague of Sarah’s, who I think did a wonderful job of bringing these movies to life.
NARRATOR: From west, to east. Along their journey, the sisters create several sons …
PETER MORSE: Okay, I won’t let that play, because you’ve probably seen this, and I think that’s the conclusion of my talk. So, all the people I need to thank are there, and I think that pretty much covers it. Okay, so thank you very much. And if anyone has any questions, I have microphones here for you.
QUESTION: Thank you so much. That was fascinating, and I’d be happy to sit and watch the movie again. I’m sure everyone would.
PETER MORSE: I can pop it on with no sound. How about I do that? There we go.
QUESTION: I’m curious about … you were talking about the sort of chain of command of how this thing was being received. When did the owners see it, and how involved were they? And how much bounce-back had to go on there?
PETER MORSE: They were very involved. I mean, probably Christiane and Margo can talk about this in more detail. I met them out there. There was a lot of discussion, obviously, and approval of the content. As far as I’m aware, we would work on stuff and go back to the NMA people, who would go back to the traditional owners, and there was this constant communication there. And then the great thing that happened was that they came to the DomeLab in Sydney and saw it in situ for the first time, and I think they were thrilled with the content. So we were very conscious of maintaining good and clear relationships through that entire process.
MARGO NEALE: We expected more to-ing and fro-ing but they just loved everything, with no issues at all. In fact, they were fascinated mostly because they saw things in the cave they’d never seen before, right? Because of the higher degree of resolution, as opposed to the dark, dark cave.
CHRISTIANE KELLER: That’s on top of the extensive community consultation that went on beforehand. It took us almost two years to actually get out to Cave Hill to film, and it was really touch and go in the week before. So all the communities with rights to Cave Hill were consulted, were part of our consultation process, and everybody had to say ‘Yes, that’s okay.’ I think we prepared it really well, because we had an explanation of what the dome would look like, what the dome actually is. We had a little animatic of the dome that we could show them – this is what it is, this is the story that we want to tell – and we gave them all this material and then also gave them [inaudible], so that they could visualise what it would look like.
MARGO NEALE: On particular countries, like the APY lands, for example, it’s not just the traditional owners we’re talking about consulting here. The APY council has to conform to the Native Title Act and the Cultural Heritage Act. So any activity whatsoever, whether it ends up being a ten minute thing like this, or clearing a waterhole, or building a road, has to go through consultation with every one of the APY community councils along the route, regardless of it being this. That’s one level.
The second level: because the songlines are all connected, even though Stanley [Douglas] and them are the traditional owners of Cave Hill, they can’t speak for the sites before or the sites after. But those people who are custodians of the sites before and after need to know what’s happening. So that’s kind of what we’re talking about – not consultation in the normal sense you’d understand it. So these meetings occurred repeatedly and would have all the problems: some turn up, some don’t, not the right people, not enough, and getting them all to certain sites. There was all of that kind of stuff, and that’s not the consultation of looking in the dome and seeing whether that’s what you want.
For the part that Peter’s involved with, there were never any issues that we were particularly aware of, except that it got stalled towards the end, and people assumed it was for cultural reasons, but it wasn’t. I just bit the bullet and said, ‘We’re going anyway now,’ because I didn’t believe what was holding it up had anything to do with cultural reasons. And it didn’t – it was a cultural tourism operator who didn’t like a big mob taking flash films, because then it might impinge on his business. That’s what it was, right? So we just got on a plane and went anyway, and once you get there you sort it out. So it’s all of that, but in terms of what Peter did in that dome, there was never anything but joy and glee and a great sense of privilege.
PENNY VAILE: Does anyone else have any questions? Yes, lovely.
QUESTION: Thanks for a great presentation. In the exhibition, it’s mentioned that the songline doesn’t just end in Central Australia, but goes right across to the east coast. In the future, is it possible to do more of the songline story across to the east coast, and then hook it up to the original central west coast story?
PETER MORSE: Yes, certainly. That’s where I can see some of this stuff going. That’s why I was referring to that with the Australian model and so on. If you’ve got the data, we could really visualise anything that you want, but the question there is, obviously, things like budget and time and this sort of stuff. All of this is very labour intensive to do. But having said that, you can have real-time systems: not pre-rendered movies, but approaching it using computer game engines and databases of information. It would be lower resolution, but you could navigate through much more complex, interactive environments – like a VR environment, or something like that. Or you could show it on the dome. And that would be a way of bringing more and more detail together. Because if you think about it, there’s probably dozens or hundreds of hours of potential content about the songlines and contexts and things like that. How would one go about consolidating all that together? I’d suggest that a kind of interactive model would be an interesting way to approach that.
PENNY VAILE: Hang on, just let me pass you the microphone.
QUESTION: Will this exhibition travel around Australia?
PETER MORSE: I hope so. I think the Museum people know more about this, so Margo …
MARGO NEALE: It’s actually going to travel, for sure. First overseas, though. You can only get it appreciated in your own country when it’s been overseas. You know that. The cultural cringe is alive and well.
AUDIENCE MEMBER: A prophet’s not honoured in his own land.
MARGO NEALE: Yes, a prophet’s not honoured in his own land. Anyway, so yes. Resoundingly yes. You said 788 cameras. What does that mean?
PETER MORSE: Well, it means there are 788 photographs. The software refers to them as ‘cameras’, but they’re actually 788 images. It refers to them as ‘cameras’ because it’s calculating where a camera would be in that scene, and therefore what a camera would actually see of the real thing, rather than thinking of it as a still.
MARGO NEALE: They’re scanners, aren’t they? They scan, rather than …
PETER MORSE: No, they just take photographs. The software uses structure-from-motion algorithms, which means that I take one photograph with a particular view, I take another photograph of the same thing from a different perspective, and using computational processing, I can calculate the shape of that thing in the real world from these two photographs. So it’s very sophisticated stuff.
CHRISTIANE KELLER: That would have just been my question too. Of the 790 photographs, how do you stitch them together so that you actually get the true form of the cave, with all its …
PETER MORSE: Well, I’m fortunate. I don’t do it – the software does it for me. Some very clever people have clearly worked out, mathematically, how to do this. Photogrammetry is something that’s really been evolving, probably over the last 15 or 20 years, until it became commercially viable. The other thing that’s really crucial behind all this stuff is that computers were not fast enough to do this until about ten years ago. You couldn’t do photogrammetry that would complete processing within your lifetime. I remember looking at computers that would say ‘estimated rendering time for this: one year, six months.’ You know, this sort of thing. So nowadays we can do it within a reasonable amount of time.
CHRISTIANE KELLER: And does it require a particular system in photographing the cave to ensure you actually don’t have any holes or gaps?

PETER MORSE: Yes. All these things require techniques of taking photographs. You could, in theory, hold a camera and just point it randomly around a place and hope to produce a photogrammetric model from it, but you want about 30 to 50 per cent overlap between each image. It’s really that degree of overlap between the images that enables you to algorithmically reconstruct the geometry behind them. And you’ve got to be aware of things like lighting, variation in lighting. This is why we look at things like using HDR, high dynamic range, so that you get good compensation for overexposure in highlights, you get shadow detail, you get good mid-range detail, things like that. Because you’ve got to remember that computer algorithms are not intelligent. They can’t make aesthetic or intellectual judgements like we do. They’re purely based upon the data there, so if the data is garbage then you’ll get garbage results from it. And that comes from experience working in these different contexts.
CHRISTIANE KELLER: So in some ways, a super low-end use of that particular programming would be when you have a camera and you do a panoramic shot – you take the three photos and it kind of stitches them together?
PETER MORSE: No, that’s different. Panoramas are not photogrammetric in nature. You could make a photogrammetric panorama, but a panorama is really just 2D photographs stitched together – the software saying, ‘These bits of the images match; let’s blend them together to make a larger photograph.’ So that’s what’s occurring there.
QUESTION: This is not a question. It’s just to say I was exhausted listening to you, and I can’t imagine what it must’ve been like.
PETER MORSE: Well, it was exhausting for me too, I can tell you.
QUESTION: How long were you there?
PETER MORSE: Well, the trip was only about five days, to Cave Hill, and then, because I’d never been in that area before, I took another five days and drove around looking at Uluru and places like that. There’s so much wonderful stuff there. I was drooling with the sense of possibility of looking at the Henbury [meteorite] craters and all the stories that relate to the landscape there, so I could imagine going back and producing a lot more exciting new content about this sort of stuff, and looking at different ways of telling these sorts of stories. But the actual work itself, producing these two movies, was six months of full-time work, of really long hours working with lots and lots of computers, and rendering, and hundreds of emails and hundreds of phone calls, and all this kind of thing. It really was a prodigious amount of work. And the funny irony of it is that you end up with 15 minutes of movie at the end that people enjoy, but they don’t really understand the suffering that went on behind it. But that’s all part of the plan, I guess. Thank you very much.
MARGO NEALE: If you weren’t so good, you wouldn’t make it look so easy.
PENNY VAILE: Well, I think that’s a really good point that Margo has made: ‘If he wasn’t so good at it, it wouldn’t look so easy.’ Can I just get everybody to just put their hands together and say thank you very much to Dr Peter Morse?
PETER MORSE: Thank you very much. It’s been a pleasure.
Disclaimer and copyright notice
This is an edited transcript typed from an audio recording.
The National Museum of Australia cannot guarantee its complete accuracy. Some older pages on the Museum website contain images and terms now considered outdated and inappropriate. They are a reflection of the time when the material was created and do not necessarily reflect the views of the Museum.
© National Museum of Australia 2007–24. This transcript is copyright and is intended for your general use and information. You may download, display, print and reproduce it in unaltered form only for your personal, non-commercial use or for use within your organisation. Apart from any use as permitted under the Copyright Act 1968 (Cth) all other rights are reserved.
Date published: 01 January 2018