The Making of Synthesis – Models
Alright, part two! In this post, I’ll be talking about the various model and texture projects spawned by the cancelled SFM short Synthesis. Just a heads up, I’ll be digging deep into the new Quixel tools and dDo workflow; this project was a case study in producing results in as little time as possible.
Before I really dig in, I want to underline a few restrictions that drove every design decision. This was designed to be a Saxxy entry, and making a film short for that competition has a few massive asterisks attached to it when it comes to the content used; namely, content from non-Valve games is a non-starter. There are exceptions to the rule, but the existing content you might have access to is on an unwritten whitelist: assume everything is banned unless it's explicitly stated as allowed or you made it yourself. I suppose Valve made this rule for simple and obvious legal reasons, and to encourage the creation of fresh, original content. What it has actually done is stifle past entries, locking them mostly to TF2, with occasional works from other Valve IPs. Our goal was to sidestep this rule by mostly making new content from scratch and reusing what we could when it wouldn't be overly obvious.
The second thing to consider was that we were on a very tight deadline. Most of the design decisions I made as a content creator were about making the best-looking assets I could in the shortest amount of time possible. When the decision was made to give up on the Saxxy deadline (and, eventually, on the project entirely), the assets were about 90% complete. With that much done, from my perspective, there was no reason not to just finish my part of it. Sadly, I'm not much of an animator, so the film wasn't going to happen, but what I am capable of doing is making completed assets for Source, so once I had the go-ahead from my partner, I released all of the finished content.
Character design – Flight Lieutenant Keller
Nothing encapsulates all of my design and project goals quite like the development of the main character. I'm not much of a character modeler (actually, I'm not a character modeler at all; it's on my list of things to learn). I am, however, quite adept at taking existing models and assets and recombining them into something new. In the Garry's Mod modding scene, this is referred to as 'hacking' (as in hacking bits and pieces off of things to make new things). The core assets for Keller were sourced, but all of the textures were modified and in some cases completely reworked. Let's go through them.
The head mesh and all of its flexes started as Zoey, from Left 4 Dead. True to form for anyone fighting a tight deadline, I did not rule out previous work of mine, and in 2012 I had co-released a model that fit our billing to the letter. Zoey has a very nice mesh and facial flexes, so there was very little I had to do to make the head work. The skin I did in 2012 had the extra bonus of substantially changing her appearance; below is a shot from the 2012 release of the AVP Synthetic Pilot (coincidence???)
The hair hails from Fuse. It was not my first choice; I've seen and used much better options sourced from games in the past, but once again, Saxxy rules, and we didn't want to push our luck. I don't currently have the skillset to model hair, nor did I have time to learn it for this project. We ended up choosing two hairstyles that worked.
When we made the decision to take more time and there was still a glimmer of hope for completion, the first thing I did was spend extra time with the meshes and textures. I scrapped the old hair texture completely and hand-drew the new one, and I supplemented that with additional geometry; luckily, 3ds Max's 'Flow Connect' tool was quite useful here.
Source's handling of transparency has never been ideal. As with all of my projects that use hair, I used the mesh-duplication trick to get the best effect. This gives you two co-existing meshes with duplicate shaders save for one setting: one gets $alphatest for correct z-pass sorting, and the second gets $translucent to produce the softer edges of a greyscale alpha. The end result I came up with is still not the best hair ever, but at least it's respectable. I also added jigglebones to the larger hairstyle, because why not.
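For anyone who hasn't tried the two-mesh trick, the material pair might look something like this as VMTs (the texture paths and file names here are made up for illustration, not the actual release files):

```
// hair_sharp.vmt - the $alphatest pass: hard edges, correct depth sorting
"VertexLitGeneric"
{
	"$basetexture"  "models/synthesis/keller_hair"
	"$alphatest"    "1"
}

// hair_soft.vmt - identical save for $translucent, giving the soft fringe
"VertexLitGeneric"
{
	"$basetexture"  "models/synthesis/keller_hair"
	"$translucent"  "1"
}
```

The two meshes occupy the same space, so the alphatested copy anchors the silhouette in the depth buffer while the translucent copy feathers the edges over it.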
Here’s a silly stress test I made to see how the jigglebones would react in a running animation. I may or may not have gotten carried away.
The body is from one of my partner's previous projects. It was custom-made for that film, but the backing company that funded it seemed to have little interest in tight asset control, and it fit our needs perfectly. He did add the condition that I retexture it; I had no problem with that request. The author of the model did a very nice job on the mesh and rigging. The unwrap was a little wasteful (it used three 4096×4096 textures), but there was little stretching and the bakes were solid. The same could not be said for the textures: they had too much high-frequency detail and not enough large shape definition. Although at 4096×3 you could texture every fiber of a jacket, the overall design at a glance was a formless grey mass.
With the source AO and normalmap bakes in hand, I remasked everything and fed the texture into dDo. After some work and custom smart layers, I ended up with a ‘mid point’ I liked.
The Quixel Suite does not directly support the Source engine. It's a dream for UE4: it ships with profiles that port directly over with accurate calibrations and will look very similar to the 3DO previewer. If you want to make things for Source, you need to create custom maps and give up on 3DO giving you an accurate image. This was the first asset I really spent time converting to Source, and the workflow was less than ideal. By the end of the project I had something far more efficient worked out, which I'll save for the corridor section.
I did make some modifications to the mesh for the shuttle scene. I decided to have the model follow the Alien(s) style of synthetic, so I pulled up some reference images from Alien and Aliens and blew an arm off. I just used some basic shapes that looked appropriate for robo-guts and baked out normals and AO. I unwrapped it onto the texture space where the arm and hand were, so I skipped Quixel and did the texturing by hand in one afternoon.
Revisiting Old Projects
I made some small changes to the shuttle for this project, namely creating alternate skingroups for flickering holograms, appropriate decal sets, and modeling out a blow-out panel. Not much more to say; just some simple contextual adjustments to fit the needs of the film.
The VR headset was an addon for a previously made and released model. I actually made (modeled, unwrapped, and baked) the main mesh for the VR portion months ago, when I first thought of the idea for the script. It sat, untextured, until the project became a reality. I took it as a challenge to match dDo output to my previous hand-painted work. I'll dig into the dDo workflow in the next section, but I think it turned out relatively seamless. I also carried over the neat little feature that the object can be colored at run time. Not exactly needed for the project, but I think someone might come up with a use for it now that it's out in the wild.
Yes, I know it looks like an Oculus Rift dev kit; it was the obvious inspiration for the design. But please, do tell me it reminds you of one the next time you talk to me.
Modeling the Corridor
I’m going to defer the bulk of the design discussion to the previous post where I talked about the influences of the corridor, but I thought I could expand a little on how I generated the textures, and how I pushed a lot of the grunt work to dDo.
Modeling-wise, I used a few simple tricks to make things appear more seamless; namely, I tried to break up the seam points where one module met another with girders. This turned out to be a good decision. As I discovered, SFM doesn't allow precise rotation, so if you look closely at the scene, you'll see some of the girder connection points don't quite line up. Words cannot describe how much this annoys me, but I agreed to let it go for the sake of completing things. When I unwrapped the model and prepared it for AO baking, I encountered another issue: all of the textured surfaces are concave. Simply putting a single skylight in the scene would yield bad results, so I ended up using three omni-lights over two lighting passes and combined the shadow maps in Photoshop. To get per-object AO (and correct normals) for the scene, I ended up exploding everything out and baking. It took more than one try to get something that looked okay.
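The Photoshop combine itself is nothing exotic. The post doesn't spell out the blend, but multiplying the passes together is one common way to merge several occlusion bakes; a minimal numpy sketch (function name is mine):

```python
import numpy as np

def combine_ao(passes):
    # Shadow/occlusion bakes as float arrays in [0, 1], white = lit.
    # Multiplying darkens any texel occluded in *any* pass, which is
    # one simple way to merge several light-pass bakes into one AO map.
    out = np.ones_like(passes[0])
    for p in passes:
        out = out * p
    return out
```

In Photoshop terms this is just stacking the bakes with the Multiply blend mode, which is why it's easy to eyeball and tweak per layer.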
The coffin above illustrates simple color-based material separation on the pre-TurboSmooth high poly to split different surfaces. I used the baked diffuse map with additional color layers in Photoshop to build the correct color-ID map for dDo. Another thing you may notice in the low poly is the lack of collapsed edges in the final export; this was purely a time-saving measure so I could use the same mesh for unwrapping and as a base for the high poly. Normally, I model the base shape as all quads, keep my loops as intact as possible, and duplicate it twice: object_low and object_high. From there, the high poly gets control loops and additional geometry for baking, and the low poly gets collapsed edges, partial triangulation, n-gon removal, and a UV map. Here, I used the UV-mapped 'mid' pass as my low pass.
This was the first object that built the ground rules for my current dDo integration pipeline. Let’s go over it.
1. Map Cleanup
As discussed earlier, I generate my color-ID map with a combination of diffuse baking and manual editing. Depending on complexity, I may do all of it in the bake or all of it in Photoshop, with the deciding factor usually being map splits on non-UV seams. The more precise the selection needs to be, the easier it is to bake.
AO is straightforward; I usually just have to correct little cage-miss errors, possibly with a despeckle/noise-removal/blur pass if needed. Normals are also straightforward: despeckle and cage-miss cleanup if needed. Source uses -Y normals (a flipped green channel), and dDo takes both if you specify, so either now or at the last step is the time to make them game-ready. I find that dDo occasionally doesn't quite take the hint that the normals need to be inverted, so I generally flip them up front to keep it happy.
2. The Height Pass
Before I import the maps into dDo, I'll do a little initial texturing, mostly of shapes and patterns that need to exist on the normal map, as well as first-pass decal placement (for reference). I generally call this a 'grey pass' or a 'height pass' or 'that thing I do before I feed it to the beast.' The important thing to remember here is that all we're doing is defining additional 'geometry' that would otherwise be on the model. This could be anything from surface inlays or panels to simple engraving to grates or grip patterns: things that are easy to texture on but hard to model. You don't want to add cloth or brick height -or any kind of material definition- at this stage, though.
Most professional modelers do small surface details like this in ZBrush or Max, but I do them in Photoshop, mostly because I'm pretty good at texturing on additional detail like this, and sometimes I want those areas easily masked or otherwise available for selection during the later texture phases. The main drawback of doing it in Photoshop is that you don't get a seamless transition of a contiguous shape across UV seams, but with enough practice you can learn to work around that limitation, especially if your unwrap is decent.
Once the heightmap is complete, I'll throw it into CrazyBump (I'm still using it, although nDo is a viable alternative here) and get my overlay normals. The trick to combining them with the baked normals is that the overlay normal you just created has to have a neutral blue channel. Select just the blue channel and run two brightness/contrast passes on it. Switch each to legacy mode, and darken it first by 100 and again by 27; if you did it right, what was previously white in the blue channel will now be 127 grey. Go back to your RGB view and you should see your normals looking a little like the 'Normal Overlay' in the next image.
From here, all you have to do is set the blend mode to Hard Light and you won't lose any data in the blue channel; your normals will retain the depth they should have. Whatever you did (or didn't do) to the green channel on your baked normals, make sure you do the same here.
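If the Photoshop recipe feels like magic, the math behind it can be sketched in a few lines of numpy. This is just a model of the blend, not part of my actual workflow, and the function names are mine; the point is that a 0.5 blue channel makes Hard Light an identity on the baked map's depth:

```python
import numpy as np

def hard_light(bottom, top):
    # Photoshop's Hard Light: top-layer values below 0.5 multiply,
    # values above 0.5 screen. Inputs are floats in [0, 1].
    return np.where(top < 0.5,
                    2.0 * top * bottom,
                    1.0 - 2.0 * (1.0 - top) * (1.0 - bottom))

def combine_normals(baked, overlay):
    # Neutralize the overlay's blue channel (0.5, i.e. the '127 grey'
    # from the brightness/contrast trick) so the Hard Light blend
    # leaves the baked map's blue/depth channel untouched.
    overlay = overlay.copy()
    overlay[..., 2] = 0.5
    return hard_light(baked, overlay)
```

A fully neutral overlay (all 0.5) leaves the baked normals completely unchanged, which is exactly why the blend is safe to stack.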
AO is another thing you can modify here. I propagate a soft drop shadow or inner shadow to the appropriate layers in the AO; it helps dDo with cavity wear and inlay dirt buildup.
3. dDo pass
Now that I've done everything I need to do with the shape of the model, it's time to define the materials. dDo does a great job of this, for the most part. I'm not going to make this a step-by-step tutorial on how to use dDo, since those tutorials already exist. For Source, calibration for a specific profile isn't really all that important. What is important is choosing aging and texture overlays that match the material you want. Dig into the DynaMask editor and make something unique. Regardless of how 'new' the object is, I also make sure to create a custom dirt layer, starting from the 'sandy' dynamic material; I can adjust it by hand in later steps. Since Source is 'last gen,' I also generate a base definition layer from the 'stylized' column.
4. Manual Submap Generation
Once you're generally happy with what you got from dDo, it's time to make the masks needed for Source. You have two options here: generate them in dDo as custom masks, or make them yourself in Photoshop. I tried both ways on this project, but I found the latter far easier given my existing workflow; plus, it allows much more latitude if you want to actually, you know, texture it. If you go the dDo route, you'll still need to crack open each PSD it generates and combine some per channel anyway, so it's not like using it fully is a huge time saver, and the 3DO previewer really loses its value as an accurate preview at this point.
I'm a layer-comp guy, so I like everything in one easy-to-use PSD, but that's just me. Once again, I could spend 5,000 words breaking down my take on Source's texture pipeline (and, spoiler, I have another post that I've been writing for a long while that covers exactly that), so I won't be talking about it here.
5. Import to Source
With the maps complete, all that's left is to combine them correctly, write up your VMTs, and bring it all into Source. Once again, doing the submaps on your own makes for faster small tweaks when you discover that what you thought would look good in Photoshop doesn't actually look good ingame. Don't forget to flip your normals if you didn't already do that in the first step! Remember: on a Source-friendly normal map, an object poking 'out' should have a black outline on top and a white outline on the bottom in your green channel.
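The flip itself is a one-liner on the image data; a small numpy sketch (function name is mine), handy for batch-converting maps or for checking which convention a texture is in by flipping and comparing:

```python
import numpy as np

def flip_green(normal_map):
    # Convert between +Y and Source's -Y normal-map convention by
    # inverting the green channel of a uint8 RGB(A) pixel array.
    flipped = normal_map.copy()
    flipped[..., 1] = 255 - flipped[..., 1]
    return flipped
```

Note that the operation is its own inverse: flipping twice gets you back the original map, which is why accidentally double-flipping somewhere in the pipeline silently undoes the conversion.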
With my workflow locked down and an ever-approaching deadline, the visual design for Norad was equal parts creativity and pragmatism. I've already talked about the design side, but I wanted to share a little about the models.
Norad breaks down to 7 models. Thanks in part to dDo, I was able to replicate material and surface properties with relative consistency across all of the models.
The core of the design for the more stylized objects was a spline workflow. Instead of starting with a box or another primitive, I started with a Max line and created simple geometry from it. I tweaked the shape and flow before collapsing to an Editable Poly, and once the shapes were done (for the girders especially), I used edge and face bridging extensively.
The one object that defines the workflow and styling of the Norad environment as a whole is the chair. Built from three primitives and two splines, it is simple but stylish, and if you take a close look at things like the connector between the legs and the seat, you can see I made some time-saving decisions in areas I knew would not be heavily scrutinized in the final product. It also has a full set of LODs; since it was spline-based, reducing the polycount was simply a matter of removing existing loops.
How-To: I made the fitted leather cushions by selecting the appropriate faces from the back piece, detaching them as a new object, and doing a slight inlay and a dramatic bevel; I then deleted the inlay geometry. That gave me square edges, so from there I applied a single (or double?) iteration TurboSmooth modifier, which gave it a cushion-y look. I collapsed that back to Editable Poly, removed the unneeded edge loops, and reattached it to the base model. From there, I pushed it down into the wooden back piece just enough to cover the seams and treated it the same as the rest of the low-poly geometry.
The real meat of the Norad modeling process was the desk rows, however. The script called for a wide open space filled with people doing VR stuff. Easily said, hard to execute. To fulfill this, I could have simply duplicated the desks/chairs and filled each one with a different character. However, this would have meant 250+ ragdolls and 500 single bone props, plus 250 VR visors. Not ideal. SFM may not be a real time renderer, but ideally, you still want a decent framerate in the preview, and things like that add hours to render time.
My solution? Low level of detail, highly contextual models. An entire row of already populated desks with simplified biped skeletons. Simple character variety via skingroups (3 heads per body). Model bodygroups for the characters if the animator wants high detail characters instead.
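In QC terms, a setup like this boils down to a skin family for the head variety and a bodygroup for the detail swap. A rough sketch (all names here are illustrative, not the actual compile script):

```
// Three head skins per body via a skin family
$texturegroup "skinfamilies"
{
	{ "head_a" }
	{ "head_b" }
	{ "head_c" }
}

// Low-detail occupants baked into the row model; the animator can
// swap the whole group out (or hide it) if hero characters are needed
$bodygroup "occupants"
{
	studio "desk_row_people_low.smd"
	blank
}
```

Skins are free at render time and bodygroups only cost what's currently shown, which is what makes this approach so much cheaper than 250+ individual ragdolls.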
I created a second LOD for the desk and chair, cutting polygons by about half. I also reduced the polycount for the character, culling entire sections I knew would never be seen.
In all, there are around 250 bones in the 17-desk variant, but given that the idea was to save as many resources as possible, I decided to make a model for each row length; the largest hits the bone limit, and each one after it is less resource-intensive. I crunched the numbers, and there was a substantial saving in both bones (and thus bone animation data) and polygons. For the largest case of 17 desks in one row, the optimized triangle count is 93,621 vs. an unoptimized 267,376, a reduction to 35% of normal. For bones, it's 242 vs. 1,224, or about 20% of normal. If you propagate that to all 272 desks needed to fill the scene, the results are quite dramatic: 4.27 million polygons compresses to 1.5 million. This doesn't even account for overhead from things like vertex animation of facial flexes or calculating two eye shaders per model. Bottom line: it was worth doing. Even if time were infinite for the project, the easier-to-navigate animation set viewer alone is worth the trouble.
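For the curious, the quoted savings check out. Here's the arithmetic in a few lines of Python (assuming, for the scale-up, that the full 272-desk room is 16 identical rows of 17):

```python
# Sanity-checking the quoted savings for the largest (17-desk) row,
# then scaling to the full 272-desk scene (272 / 17 = 16 rows).
opt_tris, raw_tris = 93_621, 267_376
opt_bones, raw_bones = 242, 1_224

tri_ratio = opt_tris / raw_tris      # ~0.35, i.e. 35% of normal
bone_ratio = opt_bones / raw_bones   # ~0.20

rows = 272 // 17                     # 16
total_raw = raw_tris * rows          # ~4.3 million triangles
total_opt = opt_tris * rows          # ~1.5 million triangles
```

The per-row ratios are exact; the scene totals depend on the 16-equal-rows assumption, but they land right on the figures above.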
The Satellite and Story Holograms
One of the key assets for the project ended up being a satellite that I had modeled as a favor to a friend a week before we decided to do the film.
It was a quick model that utilized a lot of duplication to create the final shape. At the time, dDo wasn't working very well, so I ended up hand-painting it, then giving it a second pass after I got the Quixel Suite working again. I also made a holographic version, similar to the treatment I gave the shuttle; I'm glad we got to use it.
Speaking of holograms, I decided to make a small number of story-specific ones for this project. Besides the floating briefing set pictured below, I also made a few for Norad.
The Norad screens turned out pretty cool. I've wanted to do a galaxy map that hints at the deeper story behind the environment for quite a while. A flat texture wasn't my first choice, and I may end up redoing it with a more 3D version in the future, but it works.
I also convinced my partner to use a hologram for the opening title. Here’s a shot of the initial texture I made up for the ‘pitch.’
This project pushed my ability to produce whole scenes. While no one thing was particularly hard to make, the volume of content I made for this project in the time I had really makes me proud. It is unfortunate that we were not able to finish it in time for the 2014 Saxxy awards, but seeing as it was another year where the only category to win was TF2 (again), I’m not as unhappy about it as I was before the winners were announced.
Putting so much effort into a project that dies can be soul crushing. I’m not happy that all of this highly contextual art will never realistically see the purpose for which it was made, but I take solace in the fact that I was able to use this as a way to test my limits and release the content to the world for free use anyway.
Thanks for reading. I hope something here has been worth the time it took for me to write it, and it was at least a little educational.