While people are often curious about the “craft” involved in CGI work, it’s almost impossible to give a brief explanation to anyone who is not already familiar with the field. There are a lot of basic principles and practices that you simply don’t commonly encounter anywhere else, and it’s not practical to detail them here in meaningful depth. I’ll try for a very simplified overview.
Each image on this site was generated entirely on my computer. There is no studio and no live person showing up to be photographed. Instead, I have the digital equivalent of a lifelike mannequin that I can place and pose within a virtual space in my computer. A virtual Barbie, so to speak.
To make things wonderfully confusing, 3D artists use the term “model” to refer to any 3D CGI object, so in this context a “human model” means a 3D object that has a human appearance, not an actual person.
I can alter the model’s appearance in any way I wish (including subtle and not-so-subtle shape adjustments, hair length/style, etc.), and I have complete control over such things as skin tone, eye colour, and so on. It’s like having an entire casting agency at my creative fingertips.
There are no lights, no camera, no lens… it is all done with their virtual digital equivalents. I create, pose, light, compose and “render” (generate the final image file) without anything ever existing outside of my computer. Nothing is “real.” It all exists purely as 0s and 1s within sophisticated computer programs.
Once the extreme number-crunching of the render is complete, I employ the same sort of digital post-processing methods that most photographers use for their digital camera output.
Ignoring the mechanics of it, though, the overall technique is remarkably similar to shooting a live model in a real studio, so my past experience doing exactly that has played a vital role in preparing me to replicate it within my virtual studio.
For more detailed specifics you could Google “CGI pipeline” or “3D digital art workflow” to get you started down the rabbit hole.
I’m hesitant to go into too much depth about the software and digital assets I use. There are dozens of very viable options, each with slight advantages or disadvantages for various applications, and no “one size fits all” solution.
I use whatever my experience tells me will likely produce the best result for whatever idea I have, and this can vary quite widely from project to project. In general, though, the majority of the images on this site were done in Blender (an open-source 3D computer graphics package) using its native Cycles render engine, because that’s what I’m most comfortable and familiar with.
The 3D assets are a mixture of my own creations and those licensed from other CGI artists which I’ll often then modify to suit my specific needs. Skin and hair are a constant “work in progress” as I try to mimic their real-world counterparts by taking advantage of some of the latest PBR shader innovations.
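For a taste of what those PBR shader innovations actually model: physically based shading rests on well-known optics, such as Schlick’s approximation of the Fresnel term, which describes how a surface like skin becomes dramatically more reflective at grazing angles. This is purely an illustration of the underlying math, not my actual Blender node setup; the F0 value of about 0.028 for skin is a commonly cited figure (derived from an index of refraction around 1.4), not something specific to my assets.

```python
def fresnel_schlick(cos_theta: float, f0: float) -> float:
    """Schlick's approximation of Fresnel reflectance,
    a staple term in PBR shading models."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Skin reflects only ~2.8% of light head-on (F0 ≈ 0.028)...
head_on = fresnel_schlick(1.0, 0.028)   # -> 0.028
# ...but approaches a mirror-like 100% at grazing angles.
grazing = fresnel_schlick(0.0, 0.028)   # -> 1.0
```

Cycles’ Principled BSDF evaluates this kind of term internally; the point is simply that “mimicking skin” means getting physics like this right, not just painting a texture.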
Lighting is typically an HDRI base with emissive mesh surfaces and/or spotlights added as needed (usually ones I’ve created). As an added bonus, Blender now supports IES profiles for even more creative control over beam shaping.
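To give a sense of the photometry IES profiles are built on: an IES file essentially tabulates a fixture’s luminous intensity (in candela) per direction, and the illuminance that intensity produces falls off with the square of distance. A minimal sketch of that inverse-square relationship, as an illustration only (this is not how Blender parses or evaluates a profile):

```python
def illuminance(candela: float, distance_m: float) -> float:
    """Illuminance (lux) from a point source via the
    inverse-square law: E = I / d^2."""
    return candela / distance_m ** 2

# An 800 cd beam direction gives 200 lux at 2 m,
# but only 50 lux at 4 m: doubling distance quarters the light.
near = illuminance(800.0, 2.0)  # -> 200.0
far = illuminance(800.0, 4.0)   # -> 50.0
```

In practice the IES profile supplies a different candela value for every direction, which is exactly what shapes the beam.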
I usually do my compositing and initial postwork in Blender (which allows me to work with 32-bit precision for as long as possible and export as .exr) and then shift to Adobe Photoshop for the finishing touches.
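One reason to hold onto 32-bit precision as long as possible is that 8-bit output quantizes subtle tonal differences away. A small standalone illustration in plain Python (not Blender’s compositor code), using the standard linear-to-sRGB transfer function: two dark linear values that 32-bit floats keep distinct collapse to the same 8-bit code value on export.

```python
def srgb_encode(linear: float) -> float:
    """Standard linear-to-sRGB transfer function (IEC 61966-2-1)."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def quantize_8bit(value: float) -> int:
    """Map a [0, 1] float to an 8-bit code value."""
    return round(value * 255)

# Two nearby shadow values that float precision keeps distinct...
a, b = 0.0020, 0.0021
# ...both land on code value 7 once quantized to 8 bits,
# so the difference is gone for any later grading step.
code_a = quantize_8bit(srgb_encode(a))  # -> 7
code_b = quantize_8bit(srgb_encode(b))  # -> 7
```

This is why heavy tonal adjustments happen in the 32-bit .exr stage; by the time an image reaches 8-bit territory, only finishing touches remain.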
My hardware is in a constant state of flux but boils down to a workstation with a very good CPU, lots of RAM, and the best graphics card(s) I can afford (Blender now supports GPU rendering).