Sunday 22 September 2013

Endings and beginnings

I have been remiss in posting for the last few weeks.  This can be blamed entirely on the intensity of the course, which has now finished.

This is a sad time for us.  The eight of us who went through the full 12 weeks (originally nine, one of whom sadly had to drop out for personal reasons) have become quite close, particularly in the last week or two.  Though we will all continue to stay in touch and, hopefully, work together at some point in the future, the daily routine into which we had fallen will be missed.  To arrive every day and sit together in the room learning and working, helping each other out and offering advice, was a blessed thing.  I hope that these friendships become long lasting - some are already very special.

Not only is this a sad time, it is also a scary time.  I don't think any of us have jobs to go back to.  (Technically I do but I don't think that will be something to which I return.)  We are all effectively out of work.  The world of freelance VFX feels very big, very scary and very inaccessible.  The previous 12 weeks have been a lovely, cosseted, safe environment which has felt very unreal; it is as if we had stepped through the back of the wardrobe and have just been rudely expelled.

Of course we will all be fine.  We are all talented and capable, and have a solid qualification on which to rely.  But it is scary, and we are all scared.  It's all about networking, which seems to be the single most important part of finding work while also being singularly detested by all.  It's as if the system has specifically evolved to force upon people that which they most dislike.

I should mention that having finished the course I have, of course, finished building the bridge.  The short, eight-second video needs some refining, which I hope to do this week.  Once it is done I will show it to my devoted fans and will write a post - probably long - going into some detail about it.  I am extremely proud of it and will admit that it is actually a bit excellent.  For now, I can post a still image from the video:

My bridge.  It's not finished (which is obvious to me but probably isn't to anyone
who doesn't see things like a VFX nerd).
As may be startlingly obvious it's still a work in progress.  I will write much, much more about it when it's finished.  I think it will be a really interesting post.  (Briefly - the bridge has too much contrast above the glass building; it's not lit strongly enough over the river; the back of it should be behind the glass building; there are no windows in the hanging cars; the runners for the hanging cars are not textured; the glass building reflects where there is no glass; the glass building's reflections are undistorted; the shadows go mad later in the video.)

In a few days we are taking a trip to a VFX studio to have a look round and - horror of horrors - have our work critiqued by them.  Terrifying.

Saturday 7 September 2013

Render Layers

Update: I've just had a quick look at this post the morning after I wrote it and I realise I forgot to enable anti-aliasing while I was rendering these out: the rendered car is very grainy.  It shouldn't be and I will correct that when I post the render layers in full.

I have written previously about render layers in my post on the finished typewriter.  That was a very simplistic overview of render layers.  Today I will write more about the concept and practice of render layers, and why they are used in VFX.  This is a long one. Make tea/coffee/beverage.

Render layers are, as Wikipedia succinctly says, "multiple images designed to be put together through digital compositing to form a completed frame".  The final image is separated out into these separate images to make it easier to change things further down the pipeline.  If everything is rendered into one image then re-rendering that shot may take days, which means that making small changes - such as the colour of objects or the intensity of shadows - is not possible near the deadline.  By rendering in layers and separating those elements into parts we can make those changes without incurring massive time delays.

There are actually two different forms: render layers and render passes.  Ultimately these two things lead to the same end.  Render layers are slightly more 'old school' and now we would usually use render passes.  For the purposes of this blog the two are exactly the same - the differences are more procedural.   I will use the term render layers because I think it makes more sense to the lay reader.

Render layers are similar to layering pieces of cut paper over each other to form an image:

This beautiful image is made by Carlos Meira - http://www.carlosmeira.com.br/
His site is a treasure trove.

In the lovely picture above, the sky is made up of layers of light blue paper; the clouds are off-white paper; the ship, brown; the sea, green and so on.  Each of these elements individually does not amount to much but when they are layered up properly, in the right order, they make the final image.

The same principle is true with render layers.  The layers are created and added on top of each other to build up the final scene.  In this update I will go through the process of adding each stage to a simple render.  I will write a short, separate update detailing the individual layers; I intended to add them to the end of this one but am having some problems.  Hopefully separating them out will keep the process clear without convolution, while still giving an idea of the scale of VFX at the end.

All of the images and assets in this update were created by Escape Studios; until now I haven't used any of the material they've actually provided us with, but today's exercise is particularly useful for explaining this idea.  The piece of work I will be writing about today was to composite a Porsche 911 (one of the old, nice ones) into this scene:

Image courtesy of Escape Studios - http://www.escapestudios.com/
who agreed I could use it etc. etc.
The laborious work of matching the camera angle, modelling the car and placing it in the scene had already been done for us, which was a bonus.

Rough plate.
This first render is basically what you see upon loading the file.  At this stage I have actually already matched the lighting in this scene to the backplate; however, I've turned off shadows so basically it's a brand new scene.  The first thing we do, though, is match the lighting, so let's pretend I haven't already done it.

This scene has one direct light - the sun - and one indirect light - the sky.  I have written a little more about this distinction in a previous post on lighting.  For this scene matching the lighting is done as follows: match the colour of the object in shadow (indirect light) to the backplate; match the angle of shadows from the sun (direct light); then match the colour and brightness of the sunlight falling on the object.  The angle of shadows is matched by eye.  To do this we usually use white and grey spheres placed in the scene.  Failing that we use the backplate, and helpfully this backplate has a white object - the flatbed van - relatively near where our object will sit.

Final Gather, indirect light matched.
Above is the object with shadows - indirect light - roughly matched to the backplate.  This was done in a very simple, crude way, by making the object white and removing all reflections.  The colour of the front bonnet, which I happen to know will be in shade, was matched to the tailgate of the truck behind it - the truck also being white.  Once the colour and intensity of the indirect light were matched, a direct light was added to simulate the sun.

Final Gather and direct light.
The angle of the shadows - and so the position of the light - now roughly matches that of the backplate; certainly close enough for this example.  The colour of the shadows doesn't matter because we will correct it in post.  Shadow colour and density (how light or dark the shadows are - how much you can see through them) are normally corrected in comp because it's easier and often quicker.  It also leaves open the option of tweaking at the last minute.  Again, the colour of the light was matched by roughly matching the colour of the side of the car to the side of the truck behind.

It's quite striking how effectively the object now sits in the scene.  There is nothing more complicated than a backplate, Final Gather, and a direct light.  Disregarding the shadow colour/density, at a cursory glance you might not even realise the Porsche is CGI.

Final Gather, direct light and reflections.
Above we have added the reflections - another small layer of realism, and reflections really sell it.  The whole purpose is not to deceive the keen-eyed viewer but to achieve an effect such that it never occurs to the viewer to look for the CGI.  That is the goal - make it so good no-one even considers it.  The reflections are cleverly done: instead of reflecting a mirrored sphere as per my typewriter we have just wrapped the backplate, exactly as it is, around the object.  The eye sees reflections and is satisfied: no-one really checks what's being reflected. (Except film or VFX nerds... like me.)

Reflection of the background!
In this quick render I've made the car 100% reflective - a mirror.  It's a little difficult to make out but on the car's front-right wheel arch the reflection is obviously that of the top-right corner of the casino.  These sorts of tricks are used all of the time, usually to make up for a lack of useful data.

re·frac·tion
1. The fact or phenomenon of light, radio waves etc., being deflected in passing obliquely through the interface between one medium and another.

Next, the glass.

Final gather, direct light, reflections and refractions.
Another layer that sells it to the eye.  In this layer the glass is transparent and refracting with the correct refractive index - it bends light by the right amount.  To further convince the eye the car is really there, the backplate is visible through the glass.  This is an extremely simple thing to do - it doesn't happen automatically - but it pushes things another notch.  The interior of the car is extremely simple and has barely been modelled.  It would be ideal to have more detail in there but it's barely visible so we get away with it.
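For the curious, the 'bends light by the right amount' part is just Snell's law.  A sketch in Python, purely for illustration - the refractive index of 1.5 is the textbook value for glass, not something pulled from the scene file:

```python
import math

def refract_angle(theta_in_deg, n1=1.0, n2=1.5):
    """Snell's law: n1*sin(t1) = n2*sin(t2).
    Returns the refracted angle in degrees (air -> glass by default)."""
    sin_t2 = (n1 / n2) * math.sin(math.radians(theta_in_deg))
    if abs(sin_t2) > 1.0:
        return None  # total internal reflection: no refracted ray
    return math.degrees(math.asin(sin_t2))

# A ray hitting the windscreen at 45 degrees bends to roughly 28 degrees
# inside the glass.
inside = refract_angle(45.0)
```

Head-on rays (0 degrees) pass through unbent, which is why glass viewed square-on is so forgiving; it's the glancing angles that give the game away if the index is wrong.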

That's really the final layer.  There is a lot of behind-the-scenes stuff which isn't visible here.  The next quick update will just have all of the proper render layers on their own with a caption explaining each one.  I haven't gone over the Ambient Occlusion again as we saw it previously but it is in there, doing its bit.  

In the context of the course, we covered this whole process - and a lot of other layers that I haven't written about - in the morning and early afternoon.  In many ways today was one of the most intense days because of the sheer volume of material.  We were warned in the morning that, "this is the lesson that people usually really struggle with".  I'm pleased to say that the class as a whole kept up, and we all understood what we were doing.  Until we moved into the software Nuke, which I'm still not going to talk about because I don't understand it.

Friday 30 August 2013

Busybusybusy

I know it's been a bit of a while since I last posted an update.  You can blame that on two things: lots to do, and very little 'new' stuff being done.  I'm going to write today about my project for this half of the course.

After the typewriter we moved into the second half of the course in which we will be repeating a similar process to that we went through to make the typewriter but with moving video.  I've already written about video tracking.

First off, what am I making?  I am making this:

Middlesbrough Transporter Bridge.
Source: http://commons.wikimedia.org/wiki/File:Middlesbrough_Transporter_Bridge,_stockton_side.jpg

Which is a really rather huge bridge in Middlesbrough, North-East England.  Construction began in 1907 (at a cost of £68,026) and was finished in 1911.  The bridge is 259m long and 69m tall.  It is a 'transporter bridge' in that rather than allowing passage over itself, the bridge itself transports goods or people.  This sort of bridge is put in place where there is a requirement for the passage of (big) ships along the river.

I am putting the Middlesbrough Transporter Bridge on the river Thames, straddling the Financial Times building.  I am compositing the bridge into a short, eight-second video taken from a boat travelling along the Thames past the Financial Times building and toward Southwark Bridge.

Source: http://www.geograph.org.uk/photo/175689
This is a bit of an undertaking and, if I'm honest, one which is quite daunting.  The model of the bridge is complete. The model of the 'gondola' which hangs below the bridge, and on which people are transported, is complete.  Today I am hoping to start working on texturing the bridge and gondola.  I will give the model of the bridge and gondola their own update because I think it will be interesting to look in some detail at the construction method.

And that's it for today, just a short update.  We haven't covered many new topics in the last few weeks (aside from video tracking) but have been reinforcing things from the first half of the course.  We are looking into some pretty clever stuff soon so I will be able to write a proper update.

Tuesday 20 August 2013

Tracking video part i

This is a complicated, complicated topic which involves a lot of deep maths.  I don't understand any of the underlying maths behind it but thankfully the magnificent software 3D Equalizer (3DE) does a lot of the heavy lifting.  I have quoted from Wikipedia and other sources where need be (i.e. where I don't understand) but I will try to put things into my own, simpler terms, for my own benefit as much as the reader's.  I'm not being patronising when I say that, as I could include sentences such as:

"Because the value of XYi has been determined for all frames that the feature is tracked through by the tracking program, we can solve the reverse projection function between any two frames as long as P'(camerai, XYi) ∩ P'(cameraj, XYj) is a small set. Set of possible camera vectors that solve the equation at i and j (denoted Cij).
Cij = {(camerai,cameraj):P'(camerai, XYi) ∩ P'(cameraj, XYj) ≠ {})" (Wikipedia)

The 3D Equalizer interface.  Source: http://www.kopona.net/soft/multimedia/28725-3d-equalizer-v4r1b9.html

match mov·ing  
1. In cinematography, match moving is a visual-effects technique that allows the insertion of computer graphics into live-action footage.

3DE is a piece of match moving software.  Match moving is a general term which encompasses a few different disciplines, all of which have the same end goal: simply, the matching of the movement of a camera in a piece of video so CGI can be added.

The problem is thus:  I have a piece of CGI I would like to add to a piece of video footage.  I render out my piece of CGI and put it into place at the start of my video (as per the typewriter).  When I play the video, the CGI does not move with it.  It doesn't move because the render is not animated - the camera in Maya through which I rendered did not move, so the CGI remains still.

To get over this problem I need to match the movements of the camera in 3D space.  All of the information about the camera's movement can be calculated from the 2D video file.  This process uses a lot of maths, and is basically triangulation.  It is the same fundamental maths as that behind GPS.

tri·an·gu·la·tion  
1. In trigonometry and geometry, triangulation is the process of determining the location of a point by measuring angles to it from known points at either end of a fixed baseline.

The first step is to track features in the video (a tracking point).  These are normally points of contrast or distinctive shapes.  The most important, fundamental, absolutely undeniably vital point to take away from this is that the features being tracked must be stationary within the scene.  The purpose of tracking these features is to allow the software to calculate the position of the camera relative to the scene.  If the features being tracked are moving objects such as leaves blowing in the wind, people, vehicles, etc. then the software will be basing its calculation on incorrect data and will return an incorrect result.  This is similar to trying to work out the location of a sound without realising you're listening to the echo.  
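To give a flavour of the triangulation involved, here is a toy 2D version in Python.  This is a huge simplification of what 3DE actually does - the real solver juggles thousands of measurements across many frames - but the underlying geometry is the same: two known viewpoints, two angles, one recovered position.

```python
import math

def triangulate(baseline, alpha_deg, beta_deg):
    """Locate a point from two observation angles.
    The two viewpoints sit at (0, 0) and (baseline, 0); alpha and beta
    are the angles to the point, measured from the baseline at each end."""
    ta = math.tan(math.radians(alpha_deg))
    tb = math.tan(math.radians(beta_deg))
    x = baseline * tb / (ta + tb)
    y = x * ta
    return x, y

# Two 'frames' of a camera 10 units apart both see the same feature:
# the solver recovers where that feature sits in the scene.
px, py = triangulate(10.0, 45.0, 45.0)  # symmetric case: point at (5, 5)
```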

So the video is played through 3DE and distinctive features are tracked.  To track a feature one zooms in on it, marks it as a feature to be tracked, and plays through the video.  The software moves from frame to frame tracking the feature and the user adjusts the definition of the feature's size and shape to ensure the tracking point stays in the same position.  It must not waver by a pixel.  Without a human there to tell the software what should be tracked it will not work.  There are automatic solutions but we are discussing the manual process so they can be put to one side.  The human, in this case, is vital.  Adjusting the contrast, saturation and brightness of the image may help the software track the feature.

It is important to build up a good spread of tracking points.  The software cannot make triangulation calculations based upon only one point.  It might manage with only ten points, but upwards of 20 are normally required.  Theoretically there is no upper limit to the number of tracking points.  There must be tracking points close to the camera, far away, and in the middle distance.  This spread is required to make it easier for the software to see parallax in the video.

par·al·lax
1. An apparent change in the direction of an object, caused by a change in observational position that provides a new line of sight.

Parallax is what we understand as perspective.  If you are travelling in a car, objects close to you whiz by while the horizon moves very slowly.  This magnificent GIF explains it perfectly.

Source: http://en.wikipedia.org/wiki/File:Parallax.gif
We all implicitly understand this.  The software understands this too: things close move quickly, things further away move slowly.  By tracking points in the video (for example, the corner of each cube in the GIF) the software is able to establish the relative distances of objects from the camera.  From that, the software is able to calculate the position of the camera.
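The 'close things move quickly' idea can even be put into a toy formula.  In a simple pinhole-camera model, a sideways camera move shifts a point on screen in inverse proportion to its depth, so observed shifts can be turned back into depths.  A sketch in Python - the focal length and distances here are made-up numbers for illustration:

```python
def depth_from_parallax(focal, camera_move, pixel_shift):
    """Pinhole-camera parallax: a point at depth z shifts on the image
    plane by focal * camera_move / z when the camera translates sideways.
    Invert that to recover depth from the observed shift."""
    return focal * camera_move / pixel_shift

# The camera slides 1 unit sideways between frames (focal length 50).
near = depth_from_parallax(50.0, 1.0, 10.0)  # big shift -> close point
far = depth_from_parallax(50.0, 1.0, 0.5)    # small shift -> distant point
```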

I will end this update here.  There is more to say about video tracking but I will split it up.  I will talk a little more about the software side of it, and then explain some of the problems we have in transferring the data generated by the tracking software into Maya.

The main points to take away are:

- To put CGI into a video we need to track the camera's movements.
- We do that by match moving - tracking specific points within the footage.
- The software takes that and triangulates the position of the points in 3D space.

Friday 16 August 2013

Typewriter finished, and beyond!

The typewriter is finished.  It's not perfect and it needs more work but it is done.  Part of the reason for the gulf of time between my last post and this one was the frantic rush to finish it on time.  Since then the next part of the course, focusing on tracking video (which really needs about fifty updates but which will get one, soon) has been quite intense so I haven't had the energy.

But now, this morning, armed with tea, I will show off the finished product and talk a bit about what went right, and what went wrong.

First things first, the final render:



Though it's stating the obvious in the extreme: the typewriter isn't real; it wasn't in the photograph.  Through a relatively simple process of modelling, matching lighting, texturing and tweaking the render, the end result was achieved.

At the end of the first six weeks there was a presentation of sorts, in which the last six weeks' work of all of the students was gone through in the 'breakout space' on a large screen, for the other students to admire and the main tutors to critique.  This is a slightly odd process but is designed to simulate the rushes/dailies of visual effects production, in which the whole VFX crew and the director etc. will sit down and review the work to date, and the senior folk will critique it.  The critique can be very biting but is aimed at the work, rather than the artist. (That's the theory, anyway...)

The critique I received was definitely positive: great model, great render and lighting, the model sits well in the scene; nice materials.  The textures are the thing that is lacking: it looks too clean, too uniform.  I will be doing more work on it at some point, and that will be to add dirt and staining to the surface, to make it look more real.  In particular the area under the keys is too clean, and has none of the accumulated detritus of age.

It was a good experience to go through and it was great to see the sort of work which we will produce by the end of the next six weeks.

I'm now going to briefly break down the render above into its layers.  The final render is rendered in parts to make the compositing easier and to give more control over the look of the final image: if five minutes before delivery someone decides the typewriter should be blue, it's easier to change if you only have to re-render the beauty layer (or, alternatively, render colour out on a completely separate layer and adjust in post).

The beauty layer.
We start the render with the 'beauty layer'.  This is the final render of the typewriter: the model is complete, the texturing is complete and the materials are done.  This layer has in it the typewriter alone and - in our case - any shadows it is casting on itself. (This is not always the case.)  I think this layer's pretty cool.

The backplate.
The beauty layer is laid over the backplate which is just the photograph or scene into which you want to place your model.  The lighting of the backplate is matched through means of an image-based lighting (IBL) sphere.  IBL is using an image to cast light into a scene, and is remarkably simple to do on a still image in a controlled environment.


The mirrored sphere above can be used to generate an IBL.  This is an easy concept to understand: in the image above, not much can be seen out of the windows.  This isn't ideal for an IBL as you are trying to match the light coming from the windows, and there is more out there than a meaningless slightly-blue blur.  So we shoot a range of photos exposing all of the dark and light areas correctly.  These are merged together using a marvellous piece of software called HDR Shop.  The final solution is made up of seven or so photos at different exposures to give a wide dynamic range.  This is wrapped around the scene (imagine the typewriter was very, very small and inside the sphere; now imagine the surface of the sphere exactly as it is but pointed inwards).  That allows us to match most of the lighting pretty easily.  Some secondary lights are needed to build up the shadows.
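The merging of exposures can be sketched very roughly in Python.  To be clear, this is not HDR Shop's actual algorithm, just the general idea: each bracketed photo gives an estimate of the true radiance (pixel value divided by exposure time), and mid-grey pixels are trusted more than near-black or near-white ones, which have clipped:

```python
def merge_hdr(samples):
    """Merge one pixel from several exposures into a single radiance value.
    samples: list of (pixel_value in 0..1, exposure_time).
    Each bracket estimates radiance as value / exposure; a 'hat' weight
    trusts mid-grey readings most and clipped readings not at all."""
    total, weight_sum = 0.0, 0.0
    for value, exposure in samples:
        w = 1.0 - abs(2.0 * value - 1.0)  # peaks at mid-grey, zero at 0 and 1
        total += w * (value / exposure)
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

# The same bright window shot at three shutter speeds: the long exposure
# clips to pure white and is ignored; the shorter two agree on the radiance.
radiance = merge_hdr([(1.0, 1.0), (0.5, 0.25), (0.25, 0.125)])
```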

The alpha channel.
The next layer is just an outline of the typewriter, to make compositing easier.  This is called an alpha channel, so named because the black and white image above is stored in the alpha channel of the image file.  The alpha channel is, put simply, a measurement of how opaque the image is: black is transparent, white is opaque.  The image above, overlaid correctly on top of the beauty layer, would perfectly remove the typewriter from its surroundings.  Ignore the black bit in the middle: it's a mistake :)
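For the technically minded, the 'overlaying' is the standard compositing 'over' operation, which can be sketched per-pixel in Python.  This assumes premultiplied colours (the usual convention in comp), and the pixel values are invented for illustration:

```python
def over(fg, bg):
    """Composite a premultiplied foreground pixel (r, g, b, a) on top of
    a background pixel: out = fg + bg * (1 - fg_alpha).
    Alpha 1 is opaque, alpha 0 is fully transparent."""
    fr, fgn, fb, fa = fg
    br, bgn, bb, ba = bg
    return (fr + br * (1 - fa),
            fgn + bgn * (1 - fa),
            fb + bb * (1 - fa),
            fa + ba * (1 - fa))

# An opaque typewriter pixel completely replaces the backplate...
solid = over((0.8, 0.8, 0.8, 1.0), (0.2, 0.4, 0.6, 1.0))
# ...while a half-transparent edge pixel blends evenly with it.
edge = over((0.4, 0.4, 0.4, 0.5), (0.2, 0.4, 0.6, 1.0))
```

This is exactly why the alpha channel matters: where it is white the beauty layer wins outright, where it is black the backplate shows through untouched, and the grey edge pixels blend the two.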

The ambient occlusion.
Our old friend the ambient occlusion layer! About which I wrote in a previous update.  This layer helps us bring out the fine, small details.

Next come two shadow layers.  One in which the shadows are only those cast by the IBL solution.  (These shadows simulate the shadows coming from global illumination: light bouncing around the room.)

IBL shadow layer.

And one layer with only those shadows cast by the direct lights.  (These shadows simulate the shadows coming from the sunlight through the window.)

Direct shadow layer.
These two layers are called the soft and hard shadow layers.  The shadow information is stored within the alpha channel of these images and has been made visible for the sake of this blog: usually this image would be totally black, with the alpha channel defining which areas were in shadow and which were not.
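This is also why shadow density is so easy to tweak at the last minute: in comp, the shadow alpha simply scales down the backplate, and density is a dial applied at that point rather than something baked into the render.  A toy per-pixel version in Python - the 0.6 density is an arbitrary illustrative value:

```python
def apply_shadow(bg, shadow_alpha, density=0.6):
    """Darken a backplate pixel (r, g, b) by a shadow layer's alpha value.
    density is a comp-side dial: turning it up or down changes how dark
    the shadows are with no re-render whatsoever."""
    k = 1.0 - density * shadow_alpha
    return tuple(c * k for c in bg)

# Full shadow at 60% density leaves 40% of the backplate's brightness;
# where the shadow alpha is zero the pixel is untouched.
shaded = apply_shadow((0.5, 0.5, 0.5), 1.0)
clear = apply_shadow((0.5, 0.5, 0.5), 0.0)
```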

And last, but not least, reflections.

Reflection layer.
This one's quite self-explanatory.  The table on which the typewriter will sit is shiny, so a flat surface was placed below the typewriter and made equally shiny so that the reflections would match.  Somewhat ironically, the reflections aren't visible in the final image because the typewriter is on top of them.

From there it's a relatively simple process of stacking the layers together to make the final image.  This can be done using Photoshop or a marvellous piece of software I don't understand called Nuke; I'm not going into Nuke at all.

Et voila.  The above process is the basic principle by which all VFX are produced, from films to adverts and TV shows: take a backplate, build a model, match the camera angle and lighting, sit the model in the scene, add materials and textures, render.  The same can be said for moving images.  I mentioned that camera tracking would be my next update, in which I will explain this process as best I can.

Sunday 4 August 2013

UV Mapping

This update has been delayed by a lazy weekend and a busy final typewriter week; nevertheless, here it is.  I have lots of posts to write, and still need to cover linear vs non-linear workflow, more detail on rendering to match a background, and texturing.  So far this blog has followed the weeks of the course in some order but I think as we progress that may break down somewhat.

However, I intended to only glance at UV mapping but have ended up with quite a long post about it.  Though tedious, UV mapping is an extremely important part of 3D modelling - it's also almost unique in being universally despised by artists.  It isn't complicated but can seem it.  I'll try to make it clear.

UV mapping is the process of flattening a model out, in the same way you may unfold an empty cereal box.  This is done so that text and images can be easily applied to the model's surface without distortion.  This is a fairly simple but laborious process, and the results are never perfect.

3D models are defined by the position of their vertices.  The position of vertices is recorded using cartesian co-ordinates on three axes. You will probably be familiar with the two axes version from basic X, Y graphs such as:



A familiar graph with vertical and horizontal axes.  Using the above graph any of us could quickly and easily find the location of X = -4, Y = 3.  3D modelling uses exactly the same system but includes one further axis: Z.




Hopefully the above image is relatively clear: we have the same height and width axes but have included one further axis for depth.  To find the location of X = -4, Y = 3 and Z = 5 would be no more difficult than finding X = -4, Y = 3 on a two-dimensional graph.  Vertices on a 3D model are positioned and recorded in just such a way.

UV map·ping 
1. the 3D modelling process of making a 2D representation of a 3D model.

An example of UV mapping a cube: the faces are unfolded and the
3D co-ordinates of the vertices (X, Y, Z) are translated into 2D co-ordinates (U, V).
The term UV mapping refers to the translation of the X, Y, Z co-ordinates of vertices into U, V co-ordinates.  The co-ordinates are labelled differently to signify their existence in different spaces.  When a model has been UV mapped completely it has been 'unwrapped' and is ready for texturing.
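The translation itself is just bookkeeping: every (X, Y, Z) vertex gets a (U, V) partner in the flat space.  The crudest possible version - a planar projection that simply throws one axis away and normalises the rest - can be sketched in Python.  Real unwrapping cuts and flattens rather than projecting, but the data it produces looks exactly like this:

```python
def planar_uv(vertices):
    """Planar-project 3D vertices onto the XY plane and normalise into
    0..1 UV space.  Every (x, y, z) vertex gets a (u, v) partner;
    the z values (depth) are simply discarded by this projection."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    min_x, min_y = min(xs), min(ys)
    span_x = (max(xs) - min_x) or 1.0  # avoid dividing by zero
    span_y = (max(ys) - min_y) or 1.0
    return [((x - min_x) / span_x, (y - min_y) / span_y)
            for x, y, z in vertices]

# Four corners of a quad at varying depths land on the corners of UV space.
uvs = planar_uv([(0, 0, 5), (2, 0, 5), (2, 1, 7), (0, 1, 7)])
```

The distortion problem discussed below is visible even here: two vertices at different depths can land on the same UV point, which is exactly why complex surfaces have to be cut into pieces before they can be flattened honestly.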


A UV map and a UV checker. (Also visible is the UV Texture Editor, on the left.)

The UV checker, above, is applied to a model to ensure it has been accurately unwrapped.  It is made up of regular squares subdivided with further regular squares, and numbers.  The squares help to ensure a UV map is even while the numbers help ensure the orientation is correct.  

The UV map above is the flattened out version of the front casing of my typewriter:

The UV-mapped 3D model, as above.
Of note:

- The 2D UV map looks misshapen - it looks as if the UV checker will look a mess on the 3D model.

- The checker applied to the 3D model is in fact  extremely even.  The orientation of the numbers isn't quite right but that is not a practical problem.

The problem arrives when trying to unwrap more complicated objects.  It is usually impossible to lay a complex object out completely flat.  As you lay certain faces flat, other faces will crease and buckle, because they are no longer flat.  You are trying to translate 3D faces into 2D, keeping the same relative size for them all while reducing distortion.  Inevitably there will be distortion, which must be kept to a minimum.

To reduce distortion you must cut the model into sections so that everything can lie flat - and this is the compromise of UV mapping: seams.  The culmination of translating 3D co-ordinates into 2D is seams; it is an imperfect system.  Seams do not match up neatly and so will show up on the texture.  The trick is to hide them, whether inside the model or behind it.

I have a lot more topics to post about but the gap between posts may be larger than normal because of the increased pace. I'll keep trying. The typewriter is going excellently! A big post about that at the end of this week, hopefully.

Thursday 1 August 2013

Typewriter update iii

The modelling of the typewriter is all-but entirely complete.  There are only some very minor parts to create.  We have moved onto texturing, as well as lighting the scene and placing the model into it.  I'll write more about that in the update at the end of this week.


This is roughly the angle from which the typewriter will be seen in the final render.  The keyboard and paraphernalia are complete. There may be a few extra things to model as rendering progresses.  The key hammers are complete (thankfully!) and though not quite exactly as they are on the actual typewriter, they're close enough for this render.  

All that's left to model is the hinge which connects the handle on the left of the image to the typewriter, and some 'filler' in the space visible through the gap in the casing, on the top. 

A close-up of the keys.  The supports for the keys and some of the paraphernalia are visible.  The hammers are visible in the background.

Saturday 27 July 2013

Week Four

A full week, this week, which covered a broad range of topics.  This post will focus on lighting.

A lot of the lighting within 3D software is calculated using the same physics which governs the movement of light in the real world; however, I am not a physicist, nor is my teacher.  This post may be fatally flawed with regards to real physics.  I will be talking solely about the use and behaviour of light in the 3D world, not the real one.  That out of the way, lighting is a fascinating part of 3D.  It's complex and can be highly technical; the physical laws can be followed strictly, or disabled and ignored.

The first tool I'm going to write about is Final Gather, which is a render technique by which ambient light is added to a scene.  Ambient light is light which is not being emitted from a direct light source such as a light bulb: it is light which has bounced off walls, the floor, surfaces, or has been diffused by cloudy sky.


dif·fuse  [v. dih-fyooz; adj. dih-fyoos] 
1. to spread or scatter widely or thinly; disseminate. 

Both scenes below contain just one light, casting a shadow.  In the first render light is emitted, lands, and does nothing more.  In the second render Final Gather is enabled; the emitted light lands, bounces and does nothing more.  Maya uses the bouncing light rays to calculate how much ambient light is in a particular area, and subsequently how much ambient light (or how little shadow) there should be.  That's roughly how it works.


Direct light.

Direct light and ambient light.
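Roughly how that bounce calculation behaves can be caricatured in a few lines of Python.  This is purely my own toy model, not Maya's actual Final Gather algorithm - the function name and the single-bounce averaging are my inventions:

```python
def final_gather(direct, bounce_samples, reflectance=0.5):
    """Toy one-bounce ambient estimate: a point's brightness is its
    direct illumination plus the average of the light arriving from
    nearby lit surfaces, dimmed by how reflective those surfaces are."""
    if not bounce_samples:
        return direct
    ambient = reflectance * sum(bounce_samples) / len(bounce_samples)
    return direct + ambient

# A point in shadow (no direct light) still receives bounced light:
print(final_gather(0.0, [0.8, 0.6, 0.7]))
```

Even a point receiving no direct light ends up with some brightness if its neighbours are lit - which is exactly why the shadows in the second render are softer.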

Similar but very different is ambient occlusion which, to quote Wikipedia, "attempts to approximate the way light radiates in real life, especially off what are normally considered non-reflective surfaces".  In our context, ambient occlusion is a tool used to pick out fine detail on an object when rendering the final image.  The term ambient occlusion means - not too helpfully - the occlusion of ambient light.


An ambient occlusion render pass.

In the image above, there is no direct light.  Final Gather is not enabled.  Only Ambient Occlusion is being used to 'light' this scene.  The back of the typewriter contains a lot of fine detail - switches, dials, layers of casing - which are picked out cleanly and clearly by the ambient occlusion render.  That image can be composited over another render of the typewriter, lit with a direct light.  The Ambient Occlusion pass will help to pick out the fine detail.  It may be that an Ambient Occlusion pass is required on top of a Final Gather pass, but that will depend on the situation. 

Whereas Final Gather calculates how much light there should be in a scene, Ambient Occlusion emits 'shadow rays' from surfaces and calculates, based on how far they travel, how much shadow there should be.  In the fine cracks of the typewriter the shadow rays travel an extremely short distance, so there is a lot of shadow.  On the top of the typewriter the shadow rays travel much further, so there is no shadow.
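That distance-to-shadow idea can be sketched in a few lines.  Again this is a toy illustration of my own, not Maya's implementation - the falloff here is simply linear in ray distance:

```python
def ambient_occlusion(ray_distances, max_distance):
    """Toy ambient-occlusion estimate for one surface point.

    Each entry in ray_distances is how far a 'shadow ray' travelled
    before hitting other geometry (capped at max_distance if it
    escaped).  Short distances mean nearby occluders, so more shadow.
    Returns 0.0 (fully shadowed) to 1.0 (fully open).
    """
    if not ray_distances:
        return 1.0
    # Each ray contributes the fraction of the maximum it travelled.
    openness = sum(min(d, max_distance) / max_distance
                   for d in ray_distances)
    return openness / len(ray_distances)

# Deep in a crack: rays stop quickly, so the point comes out dark.
print(ambient_occlusion([0.1, 0.2, 0.1], max_distance=10.0))
# On the open top: rays escape, so the point comes out bright.
print(ambient_occlusion([10.0, 10.0, 10.0], max_distance=10.0))
```

A point deep in a crack averages very short ray distances and renders dark; a point out in the open averages the full distance and renders bright.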

In 3D ambient light is different from direct light.  The two are classed as distinct parts of lighting and may be set up and controlled independently of each other.  In the real world ambient light is just there; in 3D it's an option.  Direct light, Final Gather and Ambient Occlusion can be used in conjunction to achieve the best results: direct light to light the scene, Final Gather to simulate ambient, bounced light, and Ambient Occlusion to ensure the fine detail is not lost.

On Monday we will be looking at non-linear workflows in lighting.  That is, setting up light in a real-world manner such that it decays in the same way real light does.  That will probably get quite complicated but I'll write again if I understand it.  For homework, you can all look up the inverse square law.
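For those doing the homework: the inverse square law says that light intensity falls off with the square of the distance from the source, so doubling the distance quarters the brightness.  A trivial sketch:

```python
def intensity(source_intensity, distance):
    """Inverse square law: intensity falls off with the
    square of the distance from the light source."""
    return source_intensity / distance ** 2

print(intensity(100.0, 1.0))  # 100.0
print(intensity(100.0, 2.0))  # 25.0 - twice the distance, a quarter of the light
```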

Wednesday 24 July 2013

Typewriter update ii

A quick update on the typewriter.

I've built the keyboard keys and the struts that support them.  Oddly - to my eyes - there's no '1' key!  This seems to have been a flagrant oversight.  These have all been extremely simple, the keys almost all being exactly the same, while their supporting struts are broadly similar throughout.

The black colour is just to make the keys easier to distinguish.
The hammers which stamp the ribbon to produce a letter on the page have, on the other hand, been a battle:


It's not immediately obvious, but this structure is extremely complex.  The 'base' the hammers sit in describes a curve.  The hammers themselves each rotate slightly as they get to the periphery; lastly, the hammer-heads themselves rotate.

All of this together is incredibly complicated to construct.  It's only thanks to my wife that I was able to crack the nut - I had been trying to work out a way to build this as you see it now, from above, taking into account all of the angles.  Wife - brilliant, as always - pointed out that the hammers are not as complex as I'd first thought and that, instead of building them on a slant, I should build them flat and rotate them into place afterwards.  That led to my finally working out how to make this structure.  Nearly finished:



In the image above I have accidentally orientated the hammers the wrong way round; however, I hope it's relatively clear.  Once I figure out how to rotate the hammer-heads correctly I will be able to slot this into place.  Each hammer is an individual piece meaning that, in theory, I could animate the typewriter such that a keypress triggered a hammer-press.  Fun stuff!

Tomorrow we are taking the photographs into which we will be compositing our objects.  This means I am bringing the typewriter in, to get a good reference photo of it in situ.  A fellow student is modelling an old camera of mine, so I will be heavily laden!

I'll post another update soon detailing this week's lessons - on Light.

Friday 19 July 2013

Typewriter update i

"How's the typewriter going", you ask?  Well, I'll tell you!

The short answer is, "very well".  Starting yesterday afternoon, with some work this morning and more this afternoon, I've worked on it for around eight hours.  The boxy blob has transformed into something resembling my typewriter and I'm very pleased.

Low-poly mesh

High-poly mesh
The basic structure is in place and I have moved toward a final shape (adding detail) on the front section of the case.  This is a double-edged sword: it makes it easier to get a feel for the finished product, but changes will take longer.  Right now the shape is slightly clearer on the low-poly mesh thanks to the wireframe, but on the high-poly mesh you can make out the curves.  This weekend I will be working on the back of the machine where the paper feed mechanism - and letters - are housed.  I have worked largely from reference pictures to date but for those complex parts I really need the machine in front of me.

Of particular note is the sloping, curving section at the front of the machine, which surrounds the keyboard.  The inner and outer edges go through some extremely abrupt changes from sharp to smooth to sharp.  Realising them on the model has been a chore so far but I've nearly got it.

When I started work on this, delayed by shoddy pictures (my fault), I was worried that I had over-reached and this might be too much to achieve in the relatively short time available.  I feel much calmer about things now, having made good progress already.

Otherwise, we have begun to look at lights and their uses.  The whole of next week will be devoted to lighting.  I'm looking forward to this a lot as it's one of the parts of 3D I know the least about.  It will hopefully leave us prepared to render our objects in a few weeks' time.

Thursday 18 July 2013

Projects

I am late in posting this and can only apologise to the millions of readers left hanging.

The course has hit its next milestone: The (First) Project.  I have previously mentioned that the projects require us to model, texture, light and position an object of our choosing in a photograph; this will be achieved such that its true nature as CGI is hidden.  Some of us knew that projects were part of the structure of the course, while they came as something of a surprise to others.

The purpose of the project is to act as an equivalent for coursework in school.  As we go through the next three and a half weeks learning more about the tools of Maya for modelling, texturing, lighting and compositing we will work on our projects at the same time, using the same skills.  Once we finish the first six week section of the course we will be more familiar with the workflow from start to finish.

The object I have chosen is a typewriter! A nod back to my years of working as a typist.

A Smith-Corona De Luxe typewriter.  
I bought this typewriter in Spitalfields Market for thirty-odd quid and really rather like it.  It is in perfect working order; the only thing it needs is a ribbon.  Ribbons are, to my surprise, still very much available and I will be buying one as soon as possible.

We started the projects at the start of this week (on the 15th of July), but haven't got much further than taking reference photos and setting up our scenes.


Above is the scene as I work on it - the images are photographs I took, all of them with the camera as far away as possible from the typewriter, using a zoom lens to minimise perspective distortion.  The boxy blob in the middle is a rough 'sketch' of the model which will be kept as simple as possible until I'm ready to start adding more detail.  The goal is to have most of the important shapes and curves fleshed out in something this simple before I begin to detail the model: as soon as detail is added, working on the mesh and making small changes can become a nightmare, so it's best to catch mistakes and problems early on.

I'm pleased that the project has started because it will give me something to structure this blog around.  Instead of simply running through the last week's learning, I will post regular updates in the form of 'how the typewriter's going' and fill in with other, extraneous, details.

The completion of this project will mark the end of the first half of the course, at the six-week mark: three weeks and two days from now.

Friday 12 July 2013

Week Two

Another week finished - only ten to go!  The course is moving fast and some people are struggling: the first few weeks of learning 3D software, and learning to work in a 3D environment, are extremely difficult.  Most of the ground has been covered now so the next four weeks are to reinforce the concepts and ideas, continue to learn new tools, and work on our projects (more on those, soon).

This week was devoted entirely to polygonal modelling and was a real blast for me.  I love poly modelling and already know a lot of the tools intimately.  As it has been from the start, it continues to be eye-opening how much of my time was mis-spent on previous projects, and I am cursing my lack of knowledge!

The week started with a detailed look at the basics of modelling a human face.  This is an important skill and also showcases the tools better than many other things.  The head we made was very basic and extremely ill-proportioned but could be used as a foundation for any future projects: it's a good rough outline from which to start.  This exercise allowed us to go through most of the poly modelling tools in one way or another.

Looking quite alien, this may seem like an experiment in bad modelling
but the basic structure is the same as any head.

The rest of the week was spent modelling a P-51 Mustang.  This exercise brought the 'theory' from the human head exercise into practice, using many of the same tools in a more refined, planned manner.  Working from basic schematics - front, side and top - over the course of a day and a half we modelled a simple but true-to-form P-51.  This was great fun and really rewarding.  The model took shape quickly and the form of the plane is a pleasant one.

Low-poly.

High-poly - subdivided low-poly model.
The two images above are formed from the same mesh - the top image is the low-poly mesh.  That mesh began as a cylinder just behind the nose cone: that cylinder was reshaped and extruded back, with further reshaping, to form the fuselage; the wings were extruded out from the fuselage, along with the rudder and tail fins.  Looking at the low-poly version, the loops of lines that flow down the length and across the width of the plane are obvious.  Those are edge loops and their (relatively even) distribution is key to the plane having a smooth surface.  That they line up on the wings, fuselage and cockpit highlights that the plane was built as one piece.
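Incidentally, the jump from low-poly to high-poly comes from subdivision: each smoothing level splits every quad face into four, so face counts grow very quickly.  A back-of-envelope sketch (my own illustration, not Maya's smoothing algorithm, and the face count is made up):

```python
def subdivided_face_count(base_faces, levels):
    """Each Catmull-Clark-style subdivision level splits
    every quad face into four."""
    return base_faces * 4 ** levels

# A hypothetical 500-quad low-poly plane after two smoothing levels:
print(subdivided_face_count(500, 2))  # 8000
```

This is why it pays to keep the low-poly mesh as lean as possible: the detail multiplies on its own.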

Next week: UV mapping and texturing.  UV Mapping is one of the truly hated parts of 3D modelling - reviled by all.  The premise is the same as drawing a map of the globe: flattening a 3D structure into two dimensions.  It's difficult, impossible to get perfect, and slow.  We will also be making a start on our projects - which I outlined briefly in my post last week and which I'll write more about on Monday.  

Coalescing

Nine days in and the group of people flung together to make up our class is beginning to coalesce into a friendly state of being.  It's nice that everyone's relaxing into the social side of the group because over the next ten intense weeks I think we'll really need the support from each other.  Simply learning the software and the concepts of 3D modelling is stretching some people and having the support from other, more experienced, members of the class will continue to be a real boon.

One of the most popular topics for discussion has been the reason for attending, and general hopes and fears about the course and the time after it.  That's been particularly good because it seems to have made everyone - certainly me - realise that their motive for attending is the same as everyone else's; hence, I think, the ice being broken.  There is still a sense of everyone sounding each other out but the safe conversations about film, TV, music etc. are bouncing about.  I hope that in the next week or so things will relax even further and solid banter will develop.  When we get to the stage that we can casually insult each other I'm sure an unbreakable bond will have been made; or broken...

I'm also becoming more comfortable talking with other students in the school who have been there many weeks more than we have, and from whom I've kept a nervous distance until now - God knows why.  Naturally, they are entirely friendly and full of advice.

This is all a Good Thing.

Thursday 27 June 2013

Goodbyes

Today was the last day of my admin career, hopefully.

I said fond goodbyes to those I care about at work.  Thoughtful presents were handed over and we had excellent leaving drinks.

I am leaving after six-or-so years in the NHS.  I am extremely proud of my time there and will miss it deeply.  I have genuinely helped many, many people and will always remember that. 

I have received some abusive behaviour but it is far outweighed by the positive, appreciative, genuine and heartfelt thanks I have received - sometimes in the form of chocolate!

It's an unusual transition: in the NHS you are fundamentally trusted by those with whom you are dealing, simply by being part of that organisation.  People offer deeply personal information to you on request without a second's thought, implicitly understanding that you will use it to the best of your ability to help them; that it will remain confidential is never discussed (at that moment) but is always understood.  It is a privilege and an honour.  It is something I will miss.

The patients are the whole point and they are what makes the job worth doing... though they are sometimes a curse!  I will miss regular interaction with them, helping them understand the monolith that is the NHS and trying to guide their treatment pathway through it as smoothly as possible.  I care about them and I think part of me will always think and feel like an NHS employee.

But that is not the point of this blog!

On Monday morning it begins. Hopefully the first stage of a smooth, painless transition into the world of VFX. It won't be quite that easy, of course, but I can dream.

Sunday 16 June 2013

Two weeks and counting.

In two weeks, I will probably be panicking; in two weeks I should be asleep but I will probably be panicking.

In two weeks, I will be nine hours from the start of a hopefully life-changing three months.  I will have left my admin job.  In two weeks and nine hours, I will be starting a 12-week course in Visual Effects Production at Escape Studios.

It feels like a year since I made the final decision to do this.  It's closer to three months.  What felt like an ocean of time has drained away to be the barest stream.  An ignore-able, casually-long period of time has suddenly become a distinctly short, distinctly disappearing period of time.

I am nervous.  Nervous that it will all go wrong - that phrase, "all go wrong" is one whose incidence in my life has increased drastically in the last month but I don't entirely know what I fear will "all go wrong".  The nerves stem from two things:

1. A period of time during which I am entirely without paid work.  I will be training from 10am (luxuriously late start) to 5pm.  There will be no time for work, I think.

2. After the course has finished, I need to find work.  That work will probably not be familiar, safe, sign-a-contract, monthly-paycheck work but freelance work.  Freelance work about which I know very little.  

Point 1 is an irrelevance, really.  I have savings and my partner has a job; nevertheless, it might "all go wrong".

Point 2 is more understandable but I hope that once the course begins the Studio will have resources into which I can tap.  If nothing else, they will have information: information as to how the world I'm entering works.  That might, also, "all go wrong".  

(Still no clearer to establishing what would all go wrong, or how.)

But I am not just nervous.  I am excited, elated, enthused and effervescent;  I am tumultuously thrilled and thankful for the opportunity.  I am at the very edge of something which I have wanted for quite some time and I have taken the first step.  I am on the tightrope, I have removed the stabilisers, I have jumped from the plane.  I am buzzing with anticipation and elation.  

I don't really think it will "all go wrong".  I expect it will all go fine.  I hope I end up happier, working on things I enjoy and am passionate about.  I hope I end up proud of my job, proud of what I do and of my work.  If I just end up happy and in regular work then it definitely has not "all gone wrong".  We can only wait and see.