SHOKUDAI Temple Breakdown

 

In February of this year, I told myself that after making a lot of game-development-related things (from programming to 3D modeling), it was time for me to do a small project from beginning to end, in order to profit from the experience gained over the past three years and hopefully make a pretty portfolio piece with a strong focus on the PBR workflow and the lighting.

What wasn't easy, however, was finding something interesting and appealing to make. I have always been fascinated by Japanese culture, especially its sense of aesthetics, from architecture to everyday clothes, so I imagined a post-feudal Japanese temple, mainly because it is easily recognizable by everyone, but I made something a little different in terms of purpose.

In fact, instead of replicating a religious temple, which was the main use of these buildings at that time, I created the temple as if it were the residence of one of the big families which ruled the Japanese empire. As a result, the interior has some residential furniture and a small number of military objects.
This was motivated by two things: first, avoiding being too stereotypical, and second, since I'm not really a 3D modeler, making statues and other complex models is not really in my skill set.

At this point, I had a clear idea of what I wanted to make: a residential and slightly military ancient-Japan-style temple with a small lake in front of it, set in daytime, and with the aesthetic being more important than historical accuracy.

References & philosophy 

Now I needed to find some references, but references for the individual models, not for the whole environment. I didn't want to recreate a concept art or a photo because, even if that can be really challenging, at the end of the day you replicate something that already exists, and this mechanically constrains your creativity.

Following this philosophy, I adopted a fairly simple creative workflow which helps to create everything in a more organic manner, building the whole picture piece by piece while always thinking about which objects should logically be there. This helped me stay close to the iterative process of making a real game level and also contributed to the final composition.

My point is that everything needs to make sense. That sounds obvious said like that, but keeping it in mind every time you make and place an object, a light, or anything else, even if it sometimes goes against the aesthetic (there is of course a balance to strike here), really helps the believability overall.

Modeling and terrain

First off, I began by modeling the main temple, which is of course the most important asset and drives the rest of the environment around it (it was also the most time-consuming asset to make).

For all the modeling tasks, I used Maya as my main modeling package because it fits well with my way of working, and I'm also more comfortable with the Maya > CRYENGINE workflow than the Max one.
However, I also made quite a lot of use of the CRYENGINE Designer tool for white-boxing some assets before sending them to Maya.

Modeling the temple wasn't an easy task for me; in fact, 3D modeling is not my specialty, so I tried to find ways to simplify the process as much as possible.

So after finding some good references of a real Japanese temple, I made some practical changes to make it easier to create. One of them was to give the temple a perfect square footprint, allowing me to mirror each part of it through 360 degrees.
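
The mirroring idea can be sketched in a few lines of Python. This is only an illustration of the principle (the actual work was done in Maya, not in code), and the vertices here are hypothetical: one hand-modeled quadrant of the square plan is reflected across the X and Y axes to produce all four corners.

```python
# Illustrative sketch: with a perfectly square footprint, only one quarter
# of the temple needs to be modeled; the rest is produced by mirroring.

def mirror_quarter(verts):
    """Mirror a list of (x, y, z) vertices from the +X/+Y quadrant
    into all four quadrants around the origin."""
    full = []
    for sx, sy in [(1, 1), (-1, 1), (1, -1), (-1, -1)]:
        for x, y, z in verts:
            full.append((sx * x, sy * y, z))
    return full

# A single corner post placed by hand...
quarter = [(4.0, 4.0, 0.0), (4.0, 4.0, 3.0)]
# ...becomes four identical posts, one per corner of the square plan.
posts = mirror_quarter(quarter)
```

The same reflection logic applies at any scale, from a single post to a whole roof section.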

When the temple geometry was finally completed, its UVs correctly unwrapped, and the model imported into the engine, I needed to create a location in which to place it.

So I began to create the terrain. World Machine was the software I used for making both the heightmap and the base terrain texture, which were easily imported into the engine. After that, I selected an area of the terrain that I found interesting, used the CRYENGINE built-in tools to adapt the terrain to my needs, and finally placed the temple at its final location.

Having the temple placed, with the correct terrain topology around it, at such an early stage really helped me visualize which assets I needed to create and gave me a better idea of the scale of the entire scene.

For all the flowers on the ground, however, I "painted" them with the vegetation tool; this makes the process a lot easier, but in terms of performance a real decal might be a better choice.

In the end I created 64 objects, which was definitely the most time-consuming part of the project, but I learned a lot in this domain, and that's a great thing even if it wasn't the core part of the project.

Texturing 

I don't think there is a "perfect" way to texture a model; there are, however, a lot of techniques more or less suited depending on the object and the type of work that needs to be put into it.

For most of my assets, especially the ones with tileable textures, I created the diffuse, spec, and gloss maps directly in Photoshop based on real photos.
For the normal maps, I used the good old CrazyBump for all those "photo-based" textures. However, I only tried NDO at the end of the project, and it was a mistake not to have looked into it before, because the texture conversion presets in NDO are actually surprisingly good!

However, for some key objects (e.g. the katana) I also used DDO (and the new DDO when the beta came out) as a texturing tool in Photoshop. For this tool I don't have a really defined workflow; I just try things and adjust them to achieve something that I'm happy with.

While working on the textures, I quickly realized that the Photoshop-to-CRYENGINE workflow wasn't ideal, especially when dealing with multiple textures at once or when working outside of Photoshop.
That's why I created my own tool called "Importer Hub for CRYENGINE", which allows importing multiple textures in multiple formats (including PSD and TGA) at the same time via drag-and-drop, as well as importing a texture in one click from the clipboard, which, as you can imagine, has been a huge time saver.
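
The core of a batch importer like this is simply filtering and queuing dropped files. The sketch below is my own hypothetical illustration of that idea, not the actual tool's code; the file names and the exact set of supported extensions are assumptions.

```python
# Hedged sketch of the batch-import idea: filter a set of dropped file
# paths down to the supported texture formats and build one import queue.
import os

SUPPORTED = {".psd", ".tga", ".png", ".tif"}  # illustrative list

def build_import_queue(dropped_paths):
    """Keep only the paths whose extension is a supported texture format."""
    queue = []
    for path in dropped_paths:
        ext = os.path.splitext(path)[1].lower()
        if ext in SUPPORTED:
            queue.append(path)
    return queue

files = ["temple_diff.psd", "temple_spec.tga", "notes.txt"]
queue = build_import_queue(files)  # notes.txt is skipped
```

In the real tool, each queued file would then be handed to the engine's importer one by one.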

Another major feature I added at the end of the project is a live connection between the Quixel Suite DDO and CRYENGINE which allows previewing any changes directly in the engine.

In a perfect world, since both CRYENGINE and DDO have a PBR renderer, there shouldn't be a big difference between the two for the same texture; in practice, though, the CRYENGINE Resource Compiler makes some automatic adjustments (like the gamma) to the textures in order to make sure they look as good as possible after being compressed to DXT1, which is why this tool has been a huge help in that regard.

CEI_Katana_texturing_small.jpg 

Finally, let's talk about the detail terrain textures. Those textures have a slightly different setup because the diffuse is multiplied by the base terrain texture (or the brush color, if used). This calls for the diffuse map to be high-passed, but in some cases I found that manually adding a white layer on top of the texture with an opacity of around 20% gives better results.
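
Numerically, a 20%-opacity white layer in Photoshop's Normal blend mode is just a linear blend toward 255. Here is a minimal sketch of that math on a few 8-bit grayscale values (the sample pixel values are made up for illustration):

```python
def overlay_white(pixels, opacity=0.2):
    """Blend a flat white layer over 8-bit pixel values at the given
    opacity, as Photoshop's Normal blend mode would."""
    return [round(v * (1.0 - opacity) + 255 * opacity) for v in pixels]

# A high-passed diffuse hovers around mid-gray; the white layer lifts it
# slightly before the engine multiplies it by the base terrain color.
detail = [96, 128, 160]
lifted = overlay_white(detail)  # [128, 153, 179]
```

Note that pure white stays white, so the brightest details are preserved while the darker tones are lifted the most.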

Materials and moving to PBR 

When I started this project, I knew that PBR would be introduced in CRYENGINE before I finished it. So I did my best to ensure that the PBR conversion would be as smooth as possible, with things like not-too-contrasted diffuses, little or no prebaked AO in the diffuse and spec maps, and so on. At that time I wanted to go a little further, but there were a lot of unknowns about how PBR would work in CRYENGINE (reflectance or metalness? Gloss map still in the spec map alpha?).

Another thing that helped me before moving to PBR (and also on the lighting side) was Crytek's "Shining the Light on Crysis 3" presentation, which is definitely a must-read, even after the PBR transition!

Then in May, CRYENGINE 3.6 was released along with the introduction of PBR.

Before that, I had read a lot about PBR: how it works in terms of the art workflow, how it works in terms of replicating reality, and even how surfaces and lighting interact in real life, so I was quite ready to make the transition.

When I opened the CRYENGINE 3.6 editor for the first time, I loaded one of the reference materials and was blown away by the metals! The metals in PBR look so cool that I made some small changes to my assets just to use them!

Immediately after, I loaded my environment level in 3.6 without changing anything and (as expected) it looked terrible! The lighting was off, the materials looked bad, and there were a lot of other small things that needed to be reworked.

Since the lighting was broken and not calibrated at all, I created a small level with correct lighting in order to rework the materials and the textures of each asset individually.

With the precautions I had taken, the diffuse textures didn't suffer too much in the transition to PBR, but that was not the case for the specular maps, which of course need to have physically based values.

So I recreated all the specular maps from scratch by following the Crytek PBS sheet, and as a result the spec textures contain only flat colors and no details. The only exception is the painted metal materials, where I added some detail to the specular map, essentially because the small scratches on this type of material reveal the metal behind them (the same applies to rust).

Unsurprisingly, one of the most important textures in PBR is now the gloss map, and this is where all the micro details are located, along with the most contrasted glossiness values at the macro scale.
One cool example is the bricks (the ones at the base of the temple), where I made glossiness variations for each individual brick as well as micro variations for adding detail. There is nothing exceptional here, but it's interesting to see that it's possible to achieve this look with the gloss map alone.
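
The two scales of variation can be sketched as follows. This is a hypothetical illustration of the idea, not how the map was actually authored (which was done by hand in Photoshop): each brick gets a deterministic macro offset from a base glossiness, plus a small per-sample micro jitter. All the numeric ranges are invented for the example.

```python
# Hypothetical sketch: per-brick macro glossiness variation plus
# per-sample micro jitter, seeded per brick so values are repeatable.
import random

def brick_gloss(brick_id, base=0.35, macro=0.15, micro=0.03, samples=4):
    rng = random.Random(brick_id)            # one seed per brick
    brick_value = base + rng.uniform(-macro, macro)
    # micro variation within the brick, clamped to the valid [0, 1] range
    return [min(1.0, max(0.0, brick_value + rng.uniform(-micro, micro)))
            for _ in range(samples)]

row = {bid: brick_gloss(bid) for bid in range(3)}  # three bricks in a row
```

Seeding per brick means the same brick always gets the same value, which is what keeps the macro contrast stable across the wall.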

img_1.jpg 

Finally, the normal map has become even more important in PBR because the lighting is now much more sensitive to it, especially with the IBL enhancements. In my case, I didn't need to rework these maps, but I reimported most of them because the gloss map is now located in the normal map alpha to reduce shading aliasing.

With all the assets correctly ported to PBR, my environment finally looked good, but of course a lot of work on the materials was still done bit by bit until the very end of the project.

Beyond PBR, there are two little material tips I used which I believe are worth sharing.

The first one is the white panels on the temple. In order to make the light (and the shadows) scatter through them, I used the vegetation shader with the "leaves" option on these two-sided planes, and I also disabled the shadows cast by this geometry, which gives a pretty nice effect.

The other one is the flower decals on the lake. Since this water has small waves, I replicated those with the vertex deformation in the material settings; coupled with a mesh that has a good amount of subdivisions, this gives a really believable effect.

 

Lighting 

The lighting was definitely the main focus of the project, and it was an iterative process from the beginning to the very end, not to mention the move to PBR.

Right from the start, I wanted to avoid the cliché moonlight or pink-sunrise lighting that is commonly associated with Asian architecture; instead, I wanted to create a vibrant color palette with a very bright sun shining on the entire environment, making for a clean final image.

To achieve that, I set up a basic lighting pass early on with the Time of Day tool, covering the most important values like the sun position, its color, and the color of the sky.
This helped me get an accurate preview of what the lighting would ultimately look like, which is pretty useful when choosing the colors of the textures and placing assets in the level in order to get the most interesting shadows. It also gives a preview of the mood of the environment, which drives the rest of the creation process.

As I briefly explained, the sun position is one of the most important lighting elements in an outdoor environment; it affects the shadows, the GI, the AO, the fog, and a lot of other things. So it needs to be chosen carefully by looking at the points of interest of your level in order to get the best angle possible.

A common mistake that I sometimes see is placing the sun so as to have the largest illuminated surface, and in my opinion that's not a good idea. It's actually really important to have a balance between shadowed areas and illuminated ones, both from an artistic perspective and to create good contrast, to which the human eye is really sensitive.

The most challenging part of this environment was the mix between the outdoor and the indoor, and in fact I realized that this architecture was specifically designed to protect from the sun. This prompted me to place the sun really low in order to have just a small amount of sunlight entering the temple.
And I was lucky: in a "normal" environment, having such a bright sun so low on the horizon would not be very realistic, but in my case, since the temple is at altitude (you can see the mountains around it), it is more believable.
Another thing to consider is that the lower the sun, the blurrier its shadows; this can be adjusted with the shadow jittering parameter in the Time of Day editor, but as always there is a balance to find here, especially in environments with only a small amount of vegetation.

Now let's talk about PBR. When I moved to CRYENGINE 3.6, I definitely expected more work to "convert" the lighting than I actually did.
In fact, one of the major improvements in 3.6 is the IBL enhancement. In 3.5, I had a dozen HDR environment probes, but now there are only four of them! There is a global one for the entire level, another one for the temple and the environment around it, and finally some local ones for the lake and the inside of the temple.

As I said, the temple's indoor lighting was pretty challenging, and the thing that makes the biggest contribution is the environment probe.
Since the indoors is mostly filled with shadows, I captured the probe "texture" with the shadows and the vegetation disabled, because some trees were being reflected at an incorrect angle. This is also the only environment probe where I used the box projection method, which gives much more accurate reflections on this type of shape (rectangular/cubic). However, the box projection's orientation is relative to the world origin, which means that the objects receiving the projection need to have the same orientation or a perpendicular one (fortunately, you don't need to be 100% precise in most cases).
The real-time SSR also contributes to the reflections, and it blends really nicely with the cubemap, but it is also quite limited, especially in terms of the distance covered in the reflection, which is even smaller than what is on screen at a given time.
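
The box-projection (parallax-corrected cubemap) idea mentioned above can be written out in plain Python as a sketch. This is the standard technique, not CRYENGINE's actual implementation: the reflection ray from the surface point is intersected with the probe's axis-aligned box, and the cubemap lookup direction is re-aimed from the probe position toward that exit point. All coordinates below are made up for the example.

```python
# Sketch of box projection: re-aim a cubemap lookup at the point where the
# reflection ray exits the probe's axis-aligned bounding box.

def box_project(pos, refl, box_min, box_max, probe_pos):
    # Smallest positive distance along the ray to an exit plane, per axis.
    t_exit = float("inf")
    for i in range(3):
        if refl[i] > 0:
            t = (box_max[i] - pos[i]) / refl[i]
        elif refl[i] < 0:
            t = (box_min[i] - pos[i]) / refl[i]
        else:
            continue
        t_exit = min(t_exit, t)
    hit = [pos[i] + refl[i] * t_exit for i in range(3)]
    # Corrected lookup direction, from the probe centre to the hit point.
    return [hit[i] - probe_pos[i] for i in range(3)]

# A point on the floor reflecting straight up inside a 10 m cube:
d = box_project([2.0, 3.0, 0.0], [0.0, 0.0, 1.0],
                [-5.0, -5.0, 0.0], [5.0, 5.0, 10.0], [0.0, 0.0, 5.0])
```

Because the box is axis-aligned, this also shows why receiving surfaces need to be aligned (or perpendicular) to the world axes for the correction to line up.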

 

Finally, the last layer of lighting in the temple (both indoor and outdoor) is the dozens of ambient lights and bounce lights.
In addition to the hand-placed bounce lights, I'm using the real-time global illumination system with glossy reflections. This gives some really nice results, but in order to reduce some flickering, I increased the view distance with e_GIMaxDistance 150.

One of the most crucial parts of lighting in today's game engines is HDR, because it handles everything from the contrast to simulating eye adaptation.
What's really important here is to have a correct ratio between dark and light in the final picture. While the initial setup of the HDR parameters was done quite easily, I made a lot of tiny changes during the project that probably no one will notice, but in the end all these small changes really make a difference.

Regarding the shadows, there is not much to say about the sun; I just tweaked the bias of the first cascades to reduce some artifacts. What is interesting, however, is all the other direct light sources.
Since real-time shadows are pretty expensive, I tried to avoid them as much as possible. For instance, the light coming from the two fires in the temple doesn't interact with any objects nearby, only with the very close fire support, so instead of using "real" shadows, I manually created a texture applied to the lights as a wide-angle projector which illuminates the surfaces from the top.

Another thing (which can also be associated with post-processing) is the color grading, which allows making any color changes directly in Photoshop and exporting them to the engine via a color helper texture.
In my case, since most of the colors are handled by the Time of Day settings and the materials, I'm using the color grading to make just some small alterations to the colors and their intensity/saturation.
I also set up a small Flowgraph script that smoothly and dynamically changes the color grading depending on whether the camera is outside or inside the temple.
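
The smooth indoor/outdoor switch can be sketched as a simple per-frame blend. This is my own hypothetical reconstruction of what such a Flowgraph script does, not the actual graph: a blend weight (0 = outdoor chart, 1 = indoor chart) moves a fraction of the way toward its target each frame, so the grading never pops.

```python
# Hedged sketch: exponential smoothing of the color-grading blend weight.

def smooth_blend(weight, target, rate=0.1):
    """One frame of smoothing; 'rate' controls how fast the switch happens."""
    return weight + (target - weight) * rate

w = 0.0                      # camera starts outdoors
for _ in range(30):          # camera has just stepped inside
    w = smooth_blend(w, 1.0)
# after ~30 frames the indoor chart dominates, but the change was gradual
```

The engine would then interpolate between the two grading charts using this weight.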

One thing that made the creation of the color grading a little difficult at the time was the fact that my monitor is not correctly calibrated. I did my best to calibrate it as well as possible, but I know its color temperature is slightly too high, so that's something I always kept in mind when working on colors. At the end of the project, I also viewed some screenshots on a variety of devices (laptop, TV, phones…) in order to see how the environment looks on those screens.

Regarding the colors themselves, I wanted to achieve a very vibrant look without making them oversaturated. I'm also pretty happy with the color mixes I made, especially the blue/pink, blue/brown, and green/blue ones, with some of them respecting color theory and others intentionally not, in order to create a good balance between the cold and the warm colors.

To finish this section, let's talk about lens flares! This is a tricky subject because they have been misused so many times in both movies and games that some people just hate them.
But the fact is, lens flares can have real artistic value if used correctly, and they definitely help to add a sense of interactivity with the environment.

To use them correctly, you need to think about the type of environment and its setting (a sci-fi corridor will likely not have the same style and amount of lens flares as a 14th-century church), and above all, a lens flare needs to suit the light it comes from.
That sounds basic, but it's a mistake I see a lot, where for instance a torch produces a giant lens flare with lens dirt all over the screen while at the same time a car headlight just makes a small elliptical glow.

Another good way to use them is to have an in-engine physically accurate/procedural solution, but then you lose the artistic control (and please avoid bloom-based lens dirt solutions; they look pretty bad in most cases, distract the eye too much, and hide a lot of the work made by the environment artists).

In the case of my environment, the sun is the only lens flare that has dynamic parts, and by dynamic I mean parts like chromatic rings and things like that. All the other lens flares are more or less complex glows for the various lamps, fires, and candles.

The only exception is the exterior lights: when the camera is close enough, some hexagonal camera orb effects appear on them, which gives a cool sense of illumination and adds some quite interesting color variations to the environment.

 

Fog volumes everywhere! 

I have to admit, I have an addiction to the use of fog volumes and fog in general. In the temple interior alone, there are three different layers of them, each with its own opacity and color.

Fog has the power to add depth and volume to the lighting and the environment, but it is also a key part of defining the mood of a scene; in my case, I used the fog volumes both as a lighting tool and as an FX tool.

On the lighting side, I used them to add some subtle volumetric color changes to key parts of the environment, like the waterfall and the sun highlights over the hills. Another way I used the fog volumes is for faking the AO at a distance, because it turned out that for the grass (the 3D grass, not the texture on the terrain) they give better results than just using an ambient light.
However, all the other volumetric lighting effects, like the light shafts in the temple, are real geometry with a specific shader.

Looking at the screenshots, you will notice a quite large white outline between the blue sky and the ground. I deliberately increased the size of this natural effect for two reasons. First, it makes sense because the environment is at altitude; but also, when you see this, you see the atmosphere rather than just the sky, and you understand that there is something beyond.

On the other hand, I also used the fog volumes a lot to achieve various effects, the most obvious one being the lake, where a fog volume (with the help of some particles) gives a pretty volumetric feel to the water and defines most of its color.

FX 

After the fog volumes, let’s continue with the particles and some of the other special effects.

First, I had never really worked on particles before, and I discovered that working on this type of effect is actually pretty enjoyable!
Aside from the water particles and the fire sprite texture (which are the ones included in the SDK), I made all of them, including the candle fire, whose sprite I animated by hand in Photoshop.

One of the most important things I learned by creating these particles is how to make them "physically accurate" in order to make them reusable whatever the lighting conditions.
Given that I didn't need to reuse the particles I created, you could argue this wasn't important to think about, but the fact is, it's definitely a good practice in any case!

So, for all the particles which don't emit light or glow (basically all the smoke and dust particles), I applied the "receive shadows" parameter. For instance, this allows having only one dust particle for both the exterior and the interior of an environment.

However, for performance reasons the shadows are applied per vertex on the particle sprites; this means that if you want to use this feature to achieve a more accurate volumetric effect, where the sunlight scatters through the particle effect, you need to make smaller sprites (the particle geometry, not the texture) or enable tessellation.

That's the solution I adopted, but to get an even better look (at the cost of a performance hit, of course), I decreased the tessellation triangle size with r_ParticlesTessellationTriSize = 11.

On some of these particles, I also applied the "global illumination" flag and increased the dynamic GI contribution a little with the r_ParticlesAmountGI cvar.

Another FX I added simulates the shadows cast by the clouds on the ground (locally). To do that, I added a number of meshes with a repeating texture, moved manually relative to the wind direction.
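
The drifting motion behind this effect amounts to scrolling a texture's UV offset along the wind direction. The sketch below is a hedged illustration of that idea (in the scene the movement was authored by hand, not scripted); the wind vector, speed, and frame time are invented values.

```python
# Illustrative sketch: advance a repeating shadow texture's UV offset
# along the wind direction each frame, wrapping at 1.0 so it tiles.

def scroll_uv(offset, wind_dir, speed, dt):
    u = (offset[0] + wind_dir[0] * speed * dt) % 1.0
    v = (offset[1] + wind_dir[1] * speed * dt) % 1.0
    return (u, v)

uv = (0.0, 0.0)
for _ in range(100):                 # 100 frames at ~33 ms each
    uv = scroll_uv(uv, (1.0, 0.5), 0.02, 0.033)
```

The modulo wrap is what lets the same small texture drift forever over the terrain.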

 

Post processing, Antialiasing and the Sky 

Often overlooked from an artistic standpoint, the post-processing effects play a key role in the look of the final image, and for me this is where I spent a lot of time making the image look as clean as possible!

But before talking about post-processing specifically, let's talk about the sky.
In an outdoor environment, the sky is extremely important; it can represent from 40 to 60% of the screen and has a huge impact on the indirect lighting of the whole environment.

In my case, I was lucky that one of the skyboxes included in the SDK was exactly the type of sky I was looking for, but I wanted to add some dynamics to it to simulate the wind on these high-altitude clouds.
In an ideal world, there would be a shader to handle this type of effect, but since there wasn't, I used a hemispheric mesh (similar to a sky dome) rotating slowly on the Z axis, with the glass shader on it, just below the skybox. There, a Perlin noise normal map with high tiling creates the deformation effect at a low opacity.

However, to avoid a lot of issues with the glass shader, especially with particle rendering, I needed to set the "render far" flag on the geometry, but this can only be done on brush objects, not on GeomEntities. GeomEntities would have been easier to use for setting up the rotation (with Flowgraph), but instead I used a brush object and set up the rotation with the oscillation parameters of the material. In order to achieve a rotation with this technique, though, the material needs a diffuse map, so I created one with the lowest alpha value possible (but not 0).

 

The sky I used has a lot of "empty" areas with only blue shades, so in order to break this feeling of seeing just flat colors with no details, I added a small amount of filmic grain via the Time of Day editor. For those interested, the actual value is 1.7, which is pretty low and quite hard to see in the screenshots, but makes a difference in-engine.

One of the most efficient ways to make the final image cleaner is of course antialiasing. For this task, I used SMAA 2TX, which gives great results for a quite low performance impact.

However, there is one post-process effect that creates a sort of aliasing, and that's the motion blur. The problem with the motion blur is that the algorithm which reconstructs the background is by default not temporally stable, and this creates a lot of jitter around the edges.

Thankfully, this can be improved by adjusting the r_MotionBlurShutterSpeed console variable, and what I found is that it is relative to the framerate: for a 60 FPS target, I set the value to 45, and the higher the framerate, the higher this value needs to be.
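
One way to read that relationship is as a simple proportional scaling, anchored at the value of 45 that worked for 60 FPS. This linear rule is my own extrapolation from the observation above, not a documented formula, so treat it only as a starting point for tuning.

```python
# Hedged rule of thumb (my extrapolation, not a documented formula):
# scale r_MotionBlurShutterSpeed linearly with the target framerate.

def shutter_speed(target_fps, anchor_fps=60.0, anchor_value=45.0):
    return anchor_value * (target_fps / anchor_fps)

shutter_speed(60)   # 45.0, the value used in this project
shutter_speed(120)  # 90.0, a higher framerate needs a higher value
```

In practice you would still check the result by eye, since the jitter also depends on scene motion.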

To finish with the antialiasing, I realized that SMAA 2TX tends to blur the image a little, so I added a small amount of sharpening filter (0.18 to be exact).

Now let's talk about some various unrelated things!

First, I was pretty happy to see that the bloom in CRYENGINE 3.6 was largely improved, especially in terms of precision. Before that (in 3.5), the color "leaked" everywhere on the screen, so beyond PBR this is something that really helped improve the visuals.

Another thing that pushed the visual fidelity is hardware tessellation. I used it on a lot of assets with smooth edges, but only as a mesh-refinement technique with the PN-triangles technology.
On the other hand, all the displacement I used was handled by POM for performance reasons.

Finally, let's have a look at the AO, the real-time one!
In CRYENGINE 3.5, the SSDO was way too contrasted, but it was also perfect for the "real" 3D grass.
In CRYENGINE 3.6, however, the SSDO behaves the opposite way: it's better in most cases, but my grass was affected in the wrong way.
This explains the use of fog volumes for faking the AO on the grass; beyond that, I used all the common techniques for faking AO, like ambient lights and some decals.

Conclusion 

Creating this environment was everything I like: it was challenging, it was an opportunity to work across a whole range of disciplines, and above all, I learned a lot!

This project was also amazing simply as a way to create a virtual world, with all the creativity this task implies, and it's a great feeling to have an idea in your head, be able to make it, and see it evolve day after day, even if you constantly question yourself about everything you do.

But besides that, there are still things that I believe can be improved, especially in terms of modeling. As I said, 3D modeling is not my specialty and this wasn't really the focus of this project, but still,
I would love to have some sculpting knowledge to make statues, armor, and things like that, and also to create more variety in the vegetation.

Oh, and after a dozen attempts and alterations, I'm definitely not a good pillow modeler…

But overall I'm pretty happy with the work I accomplished: the things I learned and all the experience gained make me feel pretty enthusiastic about my future, and I hope you found this document interesting to read :)

Guillaume Puyal