While people are often curious about the “craft” involved in CGI work, it’s almost impossible to give a brief explanation to anyone not already familiar with the field. There are a lot of basic principles and practices that you rarely encounter anywhere else, and it’s not practical to detail them here in meaningful depth. I’ll try for a very simplified overview.
Each image on this site was generated entirely on my computer. There is no studio and no live person showing up to be photographed. Instead, I have the digital equivalent of a lifelike mannequin that I can place and pose within a virtual space in my computer. A virtual Barbie, so to speak.
To make things wonderfully confusing, 3D artists use the term “model” to refer to any 3D CGI object, so in this context a “human model” is a 3D object with a human appearance, not an actual person.
I can alter the model’s appearance in any way I wish (including subtle and not-so-subtle shape adjustments, hair length/style, etc.) as well as having complete control over such things as skin tone, eye colour, and so on. It’s like having an entire casting agency at my creative fingertips.
There are no lights, no camera, no lens…it is all done with their virtual digital equivalents. I create, pose, light, compose and “render” (generate the final image file) without anything ever existing outside of my computer. Nothing is “real.” It all exists purely as 0’s and 1’s within sophisticated computer programs.
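To make that “0’s and 1’s” point concrete, here’s a toy sketch in plain Python (not the actual software involved, and vastly simpler than a real renderer) of what “render to an image file” ultimately means: a program computes a value for every pixel and writes those numbers out as bytes.

```python
# Toy illustration only: a "render" is, at bottom, a program computing
# pixel values and writing them to an image file. Here we generate a
# tiny left-to-right grey gradient and save it as a binary PPM image,
# one of the simplest image formats.

WIDTH, HEIGHT = 8, 4

def render_gradient(width, height):
    """Return raw RGB bytes for a left-to-right grey gradient."""
    pixels = bytearray()
    for _y in range(height):
        for x in range(width):
            value = round(255 * x / (width - 1))  # 0 (black) .. 255 (white)
            pixels += bytes([value, value, value])  # R, G, B
    return bytes(pixels)

def write_ppm(path, width, height, pixels):
    """Write raw RGB bytes as a binary PPM (P6) file."""
    with open(path, "wb") as f:
        f.write(f"P6 {width} {height} 255\n".encode("ascii"))
        f.write(pixels)

pixels = render_gradient(WIDTH, HEIGHT)
write_ppm("gradient.ppm", WIDTH, HEIGHT, pixels)
```

A real render does the same thing in principle, except each pixel value is the result of simulating how virtual light bounces through the virtual scene, which is where the extreme number-crunching comes in.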
Once the extreme number-crunching of the render is complete, I employ the same sort of digital post-processing methods that most photographers use for their digital camera output.
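For the curious, the simplest of those adjustments are just arithmetic on pixel values. This sketch (plain Python, standing in for the real tools) applies an exposure multiplier and a gamma curve, two staples of digital post-processing:

```python
# Toy sketch of two basic post-processing adjustments applied to pixel
# values in the 0.0-1.0 range: an exposure multiplier (linear brighten/
# darken) and a gamma curve (reshaping the tonal response). Real tools
# do this, and far more, per channel across millions of pixels.

def adjust_pixel(value, exposure=1.0, gamma=1.0):
    """Apply exposure (linear multiply) then gamma, clamped to [0, 1]."""
    value = min(value * exposure, 1.0)  # exposure: scale brightness
    return value ** (1.0 / gamma)       # gamma: lift or crush midtones

def adjust_image(pixels, exposure=1.0, gamma=1.0):
    return [adjust_pixel(p, exposure, gamma) for p in pixels]

# A one-stop exposure boost with a mild midtone lift:
image = [0.1, 0.25, 0.5, 0.9]
result = adjust_image(image, exposure=2.0, gamma=1.2)
```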
Ignoring the mechanics of it, though, the overall technique is remarkably similar to shooting a live model in a real studio, so my past experience of doing exactly that has played a vital role in preparing me to replicate it within my virtual studio.
For more detailed specifics you could Google “CGI pipeline” or “3D digital art workflow” to get you started down the rabbit hole.
I’m hesitant to go into detail about the software and digital assets I use. There are dozens of very viable options, each with slight advantages or disadvantages for various applications, and no “one size fits all” solution.
I am comfortable with several different 3D platforms, each with its own strengths and weaknesses, and it’s not uncommon to do some work in one before switching to another.
Also, in many cases the artwork is a happy by-product of testing a lighting project I’m working on, so my “platform of choice” for the final render will frequently be whichever one I’m designing the lighting solution for, even if it isn’t necessarily the optimal approach.
Lighting is typically some combination of HDRi and emissive mesh surfaces, usually of my own creation (since that’s my “thing”), although I’ll occasionally supplement that with other standard types (a spotlight or two).
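If you’re wondering how a flat HDRi image can act as a light source: the renderer treats the image as the light arriving from every direction around the scene, and looks up a pixel for each direction. This toy sketch (plain Python, using one common axis convention rather than any particular renderer’s) shows that core lookup for an equirectangular, or “lat-long”, environment image:

```python
import math

# Toy sketch of the lookup at the heart of HDRi environment lighting:
# mapping a 3D direction to a pixel in an equirectangular ("lat-long")
# environment image. A renderer does this for every ray that leaves the
# scene, treating the image pixel as incoming light from that direction.
# Axis convention here (y up, -z forward) is an assumption; renderers vary.

def direction_to_latlong(dx, dy, dz, width, height):
    """Map a unit direction to (column, row) in a lat-long image."""
    u = 0.5 + math.atan2(dx, -dz) / (2.0 * math.pi)   # longitude -> horizontal
    v = math.acos(max(-1.0, min(1.0, dy))) / math.pi  # latitude  -> vertical
    col = min(int(u * width), width - 1)
    row = min(int(v * height), height - 1)
    return col, row

# Looking straight ahead lands in the middle of a 1024x512 image:
forward = direction_to_latlong(0.0, 0.0, -1.0, 1024, 512)
```

An emissive mesh works on the same principle in reverse: instead of an image wrapped around the scene, an ordinary 3D surface is told to emit light, which is why it can double as a softbox, a light panel, or any shape you care to build.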
I usually do my compositing and initial postwork (when necessary) within the 3D software’s pipeline and then shift to Adobe Photoshop (and an assortment of add-ons) for the finishing touches.
My hardware is in a constant state of flux but boils down to a workstation with a very good CPU, lots of RAM, and the best graphics card(s) I can afford.