Friday 30 August 2013

Busybusybusy

I know it's been a bit of a while since I last posted an update.  You can blame that on two things: lots to do, and very little 'new' stuff being done.  I'm going to write today about my project for this half of the course.

After the typewriter we moved into the second half of the course, in which we will repeat a process similar to the one we went through to make the typewriter, but with moving video.  I've already written about video tracking.

First off, what am I making?  I am making this:

Middlesbrough Transporter Bridge.
Source: http://commons.wikimedia.org/wiki/File:Middlesbrough_Transporter_Bridge,_stockton_side.jpg

Which is a really rather huge bridge in Middlesbrough, North-East England.  Construction began in 1907 (at a cost of £68,026) and was finished in 1911.  The bridge is 259m long and 69m tall.  It is a 'transporter bridge' in that, rather than allowing passage over itself, the bridge itself transports goods or people across the river.  This sort of bridge is built where there is a requirement for the passage of (big) ships along the river.

I am putting the Middlesbrough Transporter Bridge on the river Thames, straddling the Financial Times building.  I am compositing the bridge into a short, eight-second video taken from a boat travelling along the Thames past the Financial Times building and toward Southwark Bridge.

Source: http://www.geograph.org.uk/photo/175689
This is a bit of an undertaking and, if I'm honest, one which is quite daunting.  The model of the bridge is complete. The model of the 'gondola' which hangs below the bridge, and on which people are transported, is complete.  Today I am hoping to start working on texturing the bridge and gondola.  I will give the model of the bridge and gondola their own update because I think it will be interesting to look in some detail at the construction method.

And that's it for today, just a short update.  We haven't covered many new topics in the last few weeks (aside from video tracking) but have been reinforcing things from the first half of the course.  We are looking into some pretty clever stuff soon so I will be able to write a proper update.

Tuesday 20 August 2013

Tracking video part i

This is a complicated, complicated topic which involves a lot of deep maths.  I don't understand any of the underlying maths behind it but thankfully the magnificent software 3D Equalizer (3DE) does a lot of the heavy lifting.  I have quoted from Wikipedia and other sources where need be (i.e. where I don't understand) but I will try to put things into my own, simpler terms, for my own benefit as much as the reader's.  I'm not being patronising when I say that, as I could include sentences such as:

"Because the value of XYi has been determined for all frames that the feature is tracked through by the tracking program, we can solve the reverse projection function between any two frames as long as P'(camerai, XYi) ∩ P'(cameraj, XYj) is a small set. Set of possible camera vectors that solve the equation at i and j (denoted Cij).
Cij = {(camerai,cameraj):P'(camerai, XYi) ∩ P'(cameraj, XYj) ≠ {})" Wiki

The 3D Equalizer interface.  Source: http://www.kopona.net/soft/multimedia/28725-3d-equalizer-v4r1b9.html

match mov·ing  
1. In cinematography, match moving is a visual-effects technique that allows the insertion of computer graphics into live-action footage.

3DE is a piece of match moving software.  Match moving is a general term which encompasses a few different disciplines, all of which have the same end goal: simply, the matching of the movement of a camera in a piece of video so CGI can be added.

The problem is thus:  I have a piece of CGI I would like to add to a piece of video footage.  I render out my piece of CGI and put it into place at the start of my video (as per the typewriter).  When I play the video, the CGI does not move with it.  It doesn't move because the render is not animated - the camera in Maya through which I rendered did not move, so the CGI remains still.

To get over this problem I need to match the movements of the camera in 3D space.  All of the information about the camera's movement can be calculated from the 2D video file.  This process involves a lot of maths but is, at heart, triangulation.  It is the same fundamental maths as that behind GPS.

tri·an·gu·la·tion  
1. In trigonometry and geometry, triangulation is the process of determining the location of a point by measuring angles to it from known points at either end of a fixed baseline.
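
To make the idea concrete, here is a tiny Python sketch of the textbook two-dimensional case (an illustration of the principle only - not anything 3DE actually does internally): given a fixed baseline between two known points and the angle to the target measured at each end, the target's position falls out of basic trigonometry.

```python
import math

def triangulate(baseline, alpha, beta):
    """Locate a point P from two known points A = (0, 0) and B = (baseline, 0),
    given the angles (in radians) between the baseline and P as measured
    at A and at B.  Derivation: tan(alpha) = y/x and tan(beta) = y/(baseline - x)."""
    y = baseline / (1 / math.tan(alpha) + 1 / math.tan(beta))
    x = y / math.tan(alpha)
    return x, y

# Example: a 10 m baseline, 60 degrees at A and 45 degrees at B
x, y = triangulate(10.0, math.radians(60), math.radians(45))
print(f"P is at ({x:.2f}, {y:.2f})")  # -> P is at (3.66, 6.34)
```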

The first step is to track features in the video (each one a 'tracking point').  These are normally points of contrast or distinctive shapes.  The most important, fundamental, absolutely undeniably vital point to take away from this is that the features being tracked must be stationary within the scene.  The purpose of tracking these features is to allow the software to calculate the position of the camera relative to the scene.  If the features being tracked are moving objects such as leaves blowing in the wind, people, vehicles, etc. then the software will be basing its calculation on incorrect data and will return an incorrect result.  This is similar to trying to work out the location of a sound without realising you're listening to the echo.

So the video is played through 3DE and distinctive features are tracked.  To track a feature one zooms in on it, marks it as a feature to be tracked, and plays through the video.  The software moves from frame to frame tracking the feature, and the user adjusts the definition of the feature's size and shape to ensure the tracking point stays in the same position.  It must not waver by a pixel.  Without a human there to tell the software what should be tracked it will not work.  There are automatic solutions, but we are discussing the manual process so they can be put to one side.  The human, in this case, is vital.  Adjusting the contrast, saturation and brightness of the image may help the software track the feature.
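
For a flavour of what 'tracking a feature' means in code, here is a toy numpy sketch of a single-point tracker (vastly cruder than 3DE, which refines to sub-pixel accuracy): take a small patch around the feature in one frame, then find the best-matching patch within a search window in the next frame.

```python
import numpy as np

def track_feature(prev_frame, next_frame, centre, patch=8, search=12):
    """Follow one feature from prev_frame to next_frame (both greyscale numpy
    arrays) by sliding the patch around `centre` over a small search window
    and keeping the position with the smallest sum of squared differences.
    Assumes the feature sits well inside the frame."""
    cy, cx = centre
    template = prev_frame[cy - patch:cy + patch, cx - patch:cx + patch].astype(float)
    best, best_pos = np.inf, centre
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            candidate = next_frame[y - patch:y + patch, x - patch:x + patch].astype(float)
            if candidate.shape != template.shape:
                continue  # patch fell off the edge of the frame
            ssd = np.sum((candidate - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos
```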

It is important to build up a good spread of tracking points.  The software cannot make triangulation calculations based upon only one point.  It may manage with as few as ten points, but upwards of 20 are normally required.  Theoretically there is no upper limit to the number of tracking points.  There must be tracking points close to the camera, far away, and in the middle distance.  This spread makes it easier for the software to see parallax in the video.

par·al·lax
1. An apparent change in the direction of an object, caused by a change in observational position that provides a new line of sight.

Parallax is what we understand as perspective.  If you are travelling in a car, objects close to you whiz by while the horizon moves very slowly.  This magnificent GIF explains it perfectly.

Source: http://en.wikipedia.org/wiki/File:Parallax.gif
We all implicitly understand this.  The software understands this too: things close move quickly, things further away move slowly.  By tracking points in the video (for example, the corner of each cube in the GIF) the software is able to establish the relative distances of objects from the camera.  From that, the software is able to calculate the position of the camera.
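
As an illustration of why that matters (the pinhole-camera simplification and the numbers are mine, not 3DE's): for a camera translating sideways, a point's apparent shift between frames is inversely proportional to its distance.

```python
# Classic parallax/stereo relation for a sideways-translating pinhole camera:
#     depth ~= focal_length_px * camera_shift / disparity_px
# where disparity_px is how far the tracked point appears to move.

def depth_from_parallax(focal_px, camera_shift_m, disparity_px):
    return focal_px * camera_shift_m / disparity_px

# A point that shifts 40 px is twice as close as one that shifts 20 px.
print(depth_from_parallax(1000, 0.5, 40))  # -> 12.5 (metres)
print(depth_from_parallax(1000, 0.5, 20))  # -> 25.0 (metres)
```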

I will end this update here.  There is more to say about video tracking but I will split it up.  I will talk a little more about the software side of it, and then explain some of the problems we have in transferring the data generated by the tracking software into Maya.

The main points to take away are:

- To put CGI into a video we need to track the camera's movements.
- We do that by match moving - tracking specific points within the footage.
- The software takes that and triangulates the position of the points in 3D space.

Friday 16 August 2013

Typewriter finished, and beyond!

The typewriter is finished.  It's not perfect and it needs more work but it is done.  Part of the reason for the gulf of time between my last post and this one was the frantic rush to finish it on time.  Since then the next part of the course, focusing on tracking video (which really needs about fifty updates but which will get one, soon) has been quite intense so I haven't had the energy.

But now, this morning, armed with tea, I will show off the finished product and talk a bit about what went right, and what went wrong.

First things first, the final render:



Though it's stating the obvious in the extreme: the typewriter isn't real; it wasn't in the photograph.  Through a relatively simple process of modelling, matching the lighting, texturing and tweaking the render, the end result was achieved.

At the end of the first six weeks there was a presentation of sorts, in which the last six weeks' work of all of the students was gone through in the 'breakout space' on a large screen, for the other students to admire and the main tutors to critique.  This is a slightly odd process but is designed to simulate the rushes/dailies of visual effects production, in which the whole VFX crew and the director etc. will sit down and review the work to date, and the senior folk will critique it.  The critique can be very biting but is aimed at the work, rather than the artist. (That's the theory, anyway...)

The critique I received was definitely positive: great model, great render and lighting, the model sits well in the scene, nice materials.  The textures are the thing that is lacking: it looks too clean, too uniform.  I will be doing more work on it at some point, adding dirt and staining to the surface to make it look more real.  In particular the area under the keys is too clean, and has none of the accumulated detritus of age.

It was a good experience to go through and it was great to see the sort of work which we will produce by the end of the next six weeks.

I'm now going to briefly break down the render above into its layers.  The final render is rendered in parts to make the compositing easier and to give more control over the look of the final image: if five minutes before delivery someone decides the typewriter should be blue, it's easier to change if you only have to re-render the beauty layer (or, alternatively, render colour out on a completely separate layer and adjust in post).

The beauty layer.
We start the render with the 'beauty layer'.  This is the final render of the typewriter: the model is complete, the texturing is complete and the materials are done.  This layer contains the typewriter alone and - in our case - any shadows it is casting on itself.  (This is not always the case.)  I think this layer's pretty cool.

The backplate.
The beauty layer is laid over the backplate, which is just the photograph or scene into which you want to place your model.  The lighting of the backplate is matched by means of an image-based lighting (IBL) sphere.  IBL uses an image to cast light into a scene, and is remarkably simple to do for a still image in a controlled environment.


The mirrored sphere above can be used to generate an IBL.  This is an easy concept to understand: in the image above, not much can be seen out of the windows.  This isn't ideal for an IBL, as you are trying to match the light coming from the windows, and there is more out there than a meaningless slightly-blue blur.  So we shoot a range of photos, exposing all of the dark and light areas correctly in turn.  These are merged together using a marvellous piece of software called HDR Shop.  The final solution is made up of seven or so photos at different exposures, giving a wide dynamic range.  This is wrapped around the scene (imagine the typewriter was veryvery small and inside the sphere; now imagine the surface of the sphere exactly as it is but pointed inwards).  That allows us to match most of the lighting pretty easily.  Some secondary lights are needed to build up the shadows.
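
For the curious, here is a very naive Python sketch of the merging step.  HDR Shop does this far more carefully (it recovers the camera's actual response curve), so treat this as the idea only: assuming a linear response, each photo divided by its exposure time is an estimate of scene radiance; average those estimates, trusting mid-grey pixels most.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Naive HDR merge from a bracketed set of 8-bit photos (numpy arrays)
    and their exposure times in seconds.  Blown-out and pitch-black pixels
    get near-zero weight; well-exposed mid-grey pixels dominate."""
    images = [np.asarray(im, dtype=float) / 255.0 for im in images]
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for im, t in zip(images, exposure_times):
        w = 1.0 - np.abs(im - 0.5) * 2.0   # 'hat' weight: 1 at mid-grey, 0 at extremes
        num += w * im / t                  # each photo / exposure ~ radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)     # weighted average radiance
```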

The alpha channel.
The next layer is just an outline of the typewriter, to make compositing easier.  This is called an alpha channel, so named because the black and white image above is stored in the alpha channel of the image file.  The alpha channel is, put simply, a measurement of how opaque the image is: black is transparent, white is opaque.  The image above, overlaid correctly on top of the beauty layer, would perfectly remove the typewriter from its surroundings.  Ignore the black bit in the middle: it's a mistake :)
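
In code, the compositing that the alpha channel enables is the standard 'over' operation.  A minimal sketch of my own (not lifted from any particular package); applied per pixel across the whole frame, this is essentially what Photoshop or Nuke does when the beauty layer is stacked on the backplate:

```python
def over(foreground, alpha, background):
    """Place the foreground over the background using the alpha channel:
    where alpha is 1 (white/opaque) you see the typewriter; where it is
    0 (black/transparent) you see the backplate; values in between blend."""
    return foreground * alpha + background * (1.0 - alpha)

# One half-transparent pixel: a 0.8-bright foreground over a 0.2 plate
print(over(0.8, 0.5, 0.2))  # -> 0.5
```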

The ambient occlusion.
Our old friend the ambient occlusion layer! About which I wrote in a previous update.  This layer helps us bring out the fine, small details.

Next come two shadow layers.  One in which the shadows are only those cast by the IBL solution.  (These shadows simulate the shadows coming from global illumination: light bouncing around the room.)

IBL shadow layer.

And one layer with only those shadows cast by the direct lights.  (These shadows simulate the shadows coming from the sunlight through the window.)

Direct shadow layer.
These two layers are called the soft and hard shadow layers.  The shadow information is stored within the alpha channel of these images and has been made visible for the sake of this blog: usually these images would be totally black, with the alpha channel defining which areas are in shadow.

And last, but not least, reflections.

Reflection layer.
This one's quite self-explanatory.  The table on which the typewriter will sit is shiny, so a flat surface was placed below the typewriter and made equally shiny so the reflections would match.  Somewhat ironically, the reflections aren't visible in the final image because the typewriter is on top of them.

From there it's a relatively simple process of stacking the layers together to make the final image.  This can be done using Photoshop or a marvellous piece of software I don't understand called Nuke; I'm not going into Nuke at all.
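
For illustration, here is one plausible stacking order as a numpy-style sketch.  The layer names match the ones above, but the recipe and the strength values are my guess at a sensible comp, not what a Nuke artist would necessarily build:

```python
def comp(backplate, beauty, alpha, ao, soft_shadow, hard_shadow, reflection):
    """One plausible stacking order (all layers are float arrays in 0-1).
    soft_shadow and hard_shadow are the 0-1 masks stored in the shadow
    layers' alpha channels; ao is white (1.0) wherever nothing is occluded.
    A real comp would dial each layer's strength in by eye rather than
    hard-code 0.5 and 0.7."""
    out = backplate * (1.0 - 0.5 * soft_shadow)  # bounce-light shadows darken the plate
    out = out * (1.0 - 0.7 * hard_shadow)        # sunlight shadows darken it further
    out = out + reflection                       # reflections add light to the table
    out = out * ao                               # AO grounds the contact areas
    return beauty * alpha + out * (1.0 - alpha)  # finally, beauty over everything
```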

Et voilà.  The above process is the basic principle by which all VFX are produced, from films to adverts and TV shows: take a backplate, build a model, match the camera angle and lighting, sit the model in the scene, add materials and textures, render.  The same can be said for moving images.  I mentioned that camera tracking would be my next update, and in it I will explain that process as best I can.

Sunday 4 August 2013

UV Mapping

This update has been delayed by a lazy weekend and a busy last typewriter week; nevertheless, here it is.  I have lots of posts to write, and still need to cover linear vs non-linear workflow, more detail on rendering to match a background, and texturing.  So far this blog has followed the weeks of the course in some order, but I think as we progress that may break down somewhat.

However, I intended to only glance at UV mapping but have ended up with quite a long post about it.  Though tedious, UV mapping is an extremely important part of 3D modelling - it's also almost unique in being universally despised by artists.  It isn't complicated but can seem it.  I'll try to make it clear.

UV mapping is the process of flattening a model out, in the same way you may unfold an empty cereal box.  This is done so that text and images can be easily applied to the model's surface without distortion.  This is a fairly simple but laborious process, and the results are never perfect.

3D models are defined by the positions of their vertices.  The position of each vertex is recorded using Cartesian co-ordinates on three axes.  You will probably be familiar with the two-axis version from basic X, Y graphs such as:



A familiar graph with vertical and horizontal axes.  Using the above graph any of us could quickly and easily find the location of X = -4, Y = 3.  3D modelling uses exactly the same system but includes one further axis: Z.




Hopefully the above image is relatively clear, we have the same height and width axes but have included one further axis for depth.  To find the location of X = -4, Y = 3 and Z = 5 would be no more difficult than finding X = -4, Y = 3 on a two-dimensional graph.  Vertices on a 3D model are positioned and recorded in just such a way.

UV map·ping 
1. the 3D modelling process of making a 2D representation of a 3D model.

An example of UV mapping a cube: the faces are unfolded and the
3D co-ordinates of the vertices (X, Y, Z) are translated into 2D co-ordinates (U, V).
The term UV mapping refers to the translation of the X, Y, Z co-ordinates of vertices into U, V co-ordinates.  The co-ordinates are labelled differently to signify their existence in different spaces.  When a model has been UV mapped completely it has been 'unwrapped' and is ready for texturing.
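
As a toy example of that translation, here is a crude planar projection in Python - one of many ways to generate UVs, and much blunter than unfolding a model by hand in Maya: drop one of the three axes, then normalise the remaining two into the 0-1 UV square.

```python
def planar_uv(vertices, axis=1):
    """Project (X, Y, Z) vertices to (U, V) by dropping one axis (Y by
    default) and normalising the other two into the 0-1 UV square.  Works
    only for faces roughly perpendicular to the dropped axis - anything
    else distorts, which is exactly why real unwrapping needs seams."""
    kept = [[v[i] for i in range(3) if i != axis] for v in vertices]
    mins = [min(c[k] for c in kept) for k in range(2)]
    maxs = [max(c[k] for c in kept) for k in range(2)]
    return [tuple((c[k] - mins[k]) / (maxs[k] - mins[k] or 1.0) for k in range(2))
            for c in kept]

# Four corners of a 2 x 3 face lying in the XZ plane -> UV corners
print(planar_uv([(0, 0, 0), (2, 0, 0), (2, 0, 3), (0, 0, 3)]))
# -> [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```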


A UV map and a UV checker. (Also visible is the UV Texture Editor, on the left.)

The UV checker, above, is applied to a model to ensure it has been accurately unwrapped.  It is made up of regular squares subdivided with further regular squares, and numbers.  The squares help to ensure a UV map is even while the numbers help ensure the orientation is correct.  

The UV map above is the flattened out version of the front casing of my typewriter:

The UV-mapped 3D model, as above.
Of note:

- The 2D UV map looks misshapen - it looks as if the UV checker will look a mess on the 3D model.

- The checker applied to the 3D model is in fact extremely even.  The orientation of the numbers isn't quite right but that is not a practical problem.

The problem arrives when trying to unwrap more complicated objects.  It is usually impossible to lay a complex object out completely flat.  As you lay certain faces flat, other faces will crease and buckle, because they are no longer flat.  You are trying to translate 3D faces into 2D, keeping the same relative size for them all while reducing distortion.  Inevitably there will be distortion, which must be kept to a minimum.

To reduce distortion you must cut the model into sections so everything can lie flat.  This is the compromise of UV mapping: seams.  Translating 3D co-ordinates into 2D always ends in seams - it is an imperfect system.  Seams do not match up neatly and so will show up on the texture.  The trick is to hide the seams, whether inside the model or behind it.

I have a lot more topics to post about but the gap between posts may be larger than normal because of the increased pace. I'll keep trying. The typewriter is going excellently! A big post about that at the end of this week, hopefully.

Thursday 1 August 2013

Typewriter update iii

The modelling of the typewriter is all but complete.  There are only some very minor parts left to create.  We have moved on to texturing, as well as lighting the scene and placing the model into it.  I'll write more about that in the update at the end of this week.


This is roughly the angle from which the typewriter will be seen in the final render.  The keyboard and paraphernalia are complete. There may be a few extra things to model as rendering progresses.  The key hammers are complete (thankfully!) and though not quite exactly as they are on the actual typewriter, they're close enough for this render.  

All that's left to model is the hinge which connects the handle on the left of the image to the typewriter, and some 'filler' in the space visible through the gap in the casing, on the top. 

A close-up of the keys.  The supports for the keys and some of the paraphernalia are visible.  The hammers are visible in the background.