Tag Archives: 3d art

State of my FOSS, 2015 – The digital sculptor and game maker

This is the stuff I use for 3D images and games. There are other items, but they are less in the spirit of FOSS than they are just free versions of more robust programs (Sculptris, for example). I threw the game stuff in here even though it doesn't strictly fit the 3D bill; I'm just starting out on trying Blender's Game Engine.

3D Art and Game Programming – Makes fun stuff

Blender -> Replaces Maya / 3DS Max / Unity (sorta) / Adobe After Effects & Premiere (sorta)

Blender is probably my personal favorite from this list. This plucky 3D graphics program packs a lot of bang into a single piece of software. If you aren't into 3D rendering, the video editing tools are still good enough to pull your attention. I've been able to do nearly all my learning about how to use this program (which has a tremendously steep learning curve for those not used to 3D rendering) online for free. The resources are out there, and the community is fantastic. Add to this the fact that you can write scripts in Python for the program, and you have me sold. The game engine is not half bad; it probably can't compete with Unity (also free) yet, but it is open source.

MakeHuman -> Procedurally generate human models

MakeHuman exports to a format usable by all 3D rendering programs, and dramatically reduces setup time for human models. It's incredibly valuable for 3D artists who aren't great modelers but want to make images of people.

PyGame -> Versatile game coding for Python

PyGame is a great framework for creating simple games in Python. Combined with other resources, it can be a really powerful base for building engines, and it plays very well with Blender.


Procedural Skin Material

Lots of little stuff in this one.

First off, if you haven't, check out this video on a super simple setup for the SSS node in Cycles. It's a little long-winded on the setup, so you can skip to about 7:30 for the actual nodework.

For my part, it's a lot less complicated to think about the node setup the way pauljs does in his video. The official stance is that the RGB colors in the 'Radius' setting are presented in that order from top to bottom. Presumably, '1' means full color and anything less means some percentage of that color coming through. In most cases, your main color is going to line up with your SSS color; even with skin, the SSS color isn't going to be terribly far from your base color for the model. Assuming that is the case (and even if not, you'll find some useful stuff in here), we're going to make Suzanne pink.


So here's the basic setup: a black body point light at 4800K, strength 100, size 0.1. A panel scaled 3x, rotated to 90 on the X axis and emitting at 5. Camera set at -4 Y (640×480), full front. Suzanne is smooth shaded and has a Subsurf modifier rendering 2 subdivisions. (For the eyes, I duplicated the eye mesh, shrank it along the individual origins to 0.95, and then set up two other materials, which I'll talk about in a minute.)

Doing this with a skin tone of (0.991, 0.440, 0.238) gives the following:


Diffuse Only



Diffuse and Gloss (settings to the right)


Diffuse, Gloss and Subsurf (super low)

A couple of things to note here. First, the color (mentioned above) is input into both the Diffuse and the Subsurf; it also goes somewhere else, which I'll explain next. The gloss is straight-up white. The order of the mix is important, because we want the information to fall together correctly: the Subsurf first, then whatever isn't covered by that takes the Gloss, and whatever is left takes the Diffuse. I've also tweaked the mix settings to look nice. I set the SSS up for Gaussian (it's faster), and notice that the Scale of the subsurf is super tiny. Here's what it looks like at a full 1:
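Since each Mix Shader is just a linear blend between its two inputs, the chain order above translates directly into per-shader weights. Here's a tiny Python sketch of that bookkeeping; the factor values 0.2 and 0.3 are placeholders for illustration, not the exact numbers from my file:

```python
def mix(a, b, fac):
    # Linear blend, standing in for Cycles' Mix Shader: fac=0 -> a, fac=1 -> b.
    return (1.0 - fac) * a + fac * b

def shader_weights(f_gloss, f_sss):
    # Effective contribution of each shader when the mixes are chained as:
    #   final = mix(mix(diffuse, gloss, f_gloss), sss, f_sss)
    # The SSS takes its share first; the gloss and diffuse split the rest.
    return {
        "sss": f_sss,
        "gloss": (1.0 - f_sss) * f_gloss,
        "diffuse": (1.0 - f_sss) * (1.0 - f_gloss),
    }

weights = shader_weights(f_gloss=0.2, f_sss=0.3)
```

The weights always sum to 1, which is why reordering the chain changes the look: the shader at the top of the chain gets first claim on the surface.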


So you decide how supple you want that skin. I don't have a good method for this other than just toying with it. [I decided on a different number (0.25) by the time I was done with this post than the one I started with.]



So where does the color go?

This is the really interesting part from the YouTube video above. Why toy with these settings independently if you don't need to? Take your input color, run it through a Separate RGB node, multiply each channel by a value, and then combine them back to make the Radius. Simple. The great thing about this is that you can pull the value and color out to input nodes for a group later, and if you want, you can separate the RGB value multipliers (so if you happen to want a blue-blooded pink thing, you can make that happen). Separating and recombining the values might seem like a chore, but beyond giving you control options, it is necessary for this method to come out right.
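In plain Python, the Separate RGB / Multiply / Combine RGB chain is just per-channel scaling. The multiplier values below are made up for illustration; they're the knobs you'd expose on the node group:

```python
def sss_radius(color, red_mul, green_mul, blue_mul):
    # Separate RGB -> multiply each channel -> Combine RGB, the same chain
    # described above, producing the per-channel Radius vector for the SSS node.
    r, g, b = color
    return (r * red_mul, g * green_mul, b * blue_mul)

# The skin tone used above, with illustrative multipliers (red scatters
# deepest in skin, so it usually gets the largest multiplier).
radius = sss_radius((0.991, 0.440, 0.238), red_mul=1.0, green_mul=0.5, blue_mul=0.25)
```

Because the radius is derived from the base color, changing the skin tone automatically keeps the scattering color in line with it, which is the whole point of the setup.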

The image is still a bit flat at this point, but that might be all you want from it. Notice that you can stick a texture, rather than just a color, into that input slot, which is very useful if you are creating skin with a texture to go with it. But let's say you want more realism without painting a whole bunch of textures. Here you go, procedural bumps:


Notice the Add node; this is important, as it alters how the textures interact. Also, the conversion to B&W is important: I tried it with the factor output and it didn't come out looking nearly as natural. This bump gets thrown into all three shaders.

Final product:

Right now, I'm toying with a way to make a Voronoi cell texture (slightly disrupted by a wave texture) put some veins in the SSS color. It would be really neat to see that kind of thing work out. Some other considerations: run a dirty vertex pass and use its output as a modifier for the bump maps to create some wrinkles (the dirty vertex hits the crevices where wrinkles would normally be), use a Voronoi input to the vector of a noise texture to create fingerprint-like structures, and group and save the node groups for use with other projects and as varied inputs.

Happy Blending.

Oh, I almost forgot the eyes. This is basically how I model all my eyes: grab the exterior portion already on Suzanne and duplicate it, then scale along individual origins to 0.95. Next, grab everything on the front of the eye and pull it back. Like this (interior on the left, exterior on the right):


For the materials, the interior gets this treatment:

Notice that you can replace the white with a texture, and I recommend bump mapping your veins; it'll really add to the realism.

Then the exterior gets this tear treatment:


Boom. That’s it.

Blend available here: Skinned Suzanne.

Share your results with me, I’d like to see this material put to use!



Glowing Electric Material Without Compositing

Yesterday I shared a tutorial on making some glowing, animated cylinders. I composited some ghost blur onto them, and it looks great, but it really bothered me that the cylinders were so 'solid' as emitters and that I had to composite the glow on afterward.

So I made a new material.
First, I made an emitter out of this sphere (generic settings):


Then I mixed it with transparency:


But that did nothing I wanted; it just dimmed the emission (and kinda made it see-through). So I added the 'Layer Weight' input and stuck that in the factor for the mix:


And now we are talking. Notice that slight blur on the edges; that's what I'm looking for. So I cranked up the Blend setting on the 'Layer Weight' input and came up with this:
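If you want a feel for why the Blend setting spreads the transparency, here's a rough numeric sketch. This is my own approximation of the Layer Weight node's Facing output, not Blender's exact internal curve; the bias formula is an assumption:

```python
import math

def facing_factor(angle_deg, blend=0.5):
    # angle_deg: angle between the view direction and the surface normal.
    # 0 deg = looking straight at the surface, 90 deg = the silhouette edge.
    facing = 1.0 - abs(math.cos(math.radians(angle_deg)))
    # Bias curve (a guess at the behavior, not Blender's source): at
    # blend = 0.5 the ramp is untouched; higher blend pulls transparency
    # in from the silhouette toward the center of the shape.
    b = min(max(1.0 - blend, 1e-3), 1.0 - 1e-3)
    return facing ** (math.log(0.5) / math.log(b))
```

With this factor driving the Mix Shader (emission first, transparent second), the center stays emissive while the rim fades out; cranking Blend up makes more of the surface transparent, which is why the emission then needs boosting.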


But now the sphere was really dim. Apparently the more transparent the surface, the less light it gives off. Fair enough; just crank up the emission:


and voilà, a glowy sphere of light without hitting the compositor.

Try it out on some other shapes and get some interesting results:

CylinderWisp, CubeWisp, SuzanneWisp

All of those were with untweaked settings as follows:


Happy blending.


EDIT: I forgot to add the inverted effect. If you flip the Emission shader and the Transparent shader and then crank the Layer Weight 'Blend' down to 0.1, you get this neat glowing bubble:


New Engine, Old Blend

Futuristic Glass Chess set displaying the Torre Attack

Created in Blender, rendered with the Yafaray engine.

I'm not sure if it's good that I feel like I'm cheating using Yafaray, but that's about the highest compliment I can give in this case. This image rendered in 5 passes at a tremendous resolution (2120×1192) in under an hour, and that's including 3 additional anti-aliasing passes. The material setup was negligible compared to what the same image would have cost to set up in Cycles, and the render time was dramatically less than anything close. Plus, flawless caustics, absorption, and, jeez, just look at it.

I have to say a bit about this image before I do a Yafaray review, though. I made a version of this image very early in my time working with Blender (so about a year and a half ago). It was sort of on a whim, because I had found out from BlenderDiplom that you could do absorption in Cycles and wanted to try it out. [result on the right] You can see what I came up with; it's a little grainy (ignore the white squares, they are supposed to be frosted glass) and took over 2 hours to render. It was a fun image at the time, and it taught me a lot about my capabilities as a modeller and the flexibility of the Cycles engine. It also taught me a lot about the frustrations of render times without a GPU, and the sheer volume of passes necessary to get a clean render. I've learned a lot since then, and looking at this image, I can see there is nowhere near enough light, the bounces aren't set high enough, and (inside the model) the geometry was fairly messy. It was all the more fun to revisit the image now and see how I had grown.

Anyway, one of the problems I have with Cycles is that it is so ‘material-centric.’ Everything is determined at the level of the materials, which can make sense to some artists, I suppose, but it is counter-intuitive to me. What I’m interested in, with a background in photography, is light.

An example of the materials list, and the Shiny Diffuse Material


I think this is the biggest advantage that Yafaray has over Cycles: it is 'light-centric' in its rendering process. No more fiddling with insane numbers of settings and plugging in all varieties of material nodes, input settings, etc. The light is going to do the work for you; you just need to make sure that everything is set up to receive that light properly and send it along its way. The most intimidating thing moving from Cycles to Yafaray is the list of materials. Experienced Cycles users are going to balk at the paltry options until they really dig into the interface. The amount of editable content on each material is equivalent to having 5 or 6 built-in nodes on a Cycles material, so very little is lost in the way of customization. Basically, the artist has to ask, 'how is the light going to interact with this object,' then pick the material accordingly and modify it to precision. Once you find the material's light settings (they are hidden in the object panel in this engine), you're golden.

The preferred render setting for most Yafaray users, 'Photon Mapping'


The rendering process is also quite a bit more intuitive, and as many who use Yafaray have found, the Photon Mapping setting does a great job of producing very detailed light effects. The documentation on Yafaray.org is very explicit about how to use every setting, and it doesn't take much to figure out once you have your head around it. When I rendered Glass Tower Gambit, I had all the settings cranked up to 16 (ray depth, bounces, etc.), and even rendering on a crappy laptop it only took 55 minutes. I made the mistake of not reading up on anti-aliasing tests until after I had started the render, so I probably could have shaved a little time off even that.

There are some downsides to Yafaray.

  • Full support for some items isn't integrated yet.
  • There is no node editor.
  • There is no option to render on the GPU (which I don't miss, but some will).
  • The engine inhales RAM, but the issue has been cornered in the code and will soon be worked out.

In all, I'd say people cutting their teeth in Blender should stick with Internal, then move to Yafaray when they get interested in making much more lifelike renders. The reality of Cycles is that it is overly technical and requires much more time to set up and render, even for experienced users with decent machines. This engine gets my hearty thumbs up.

Happy Blending.

Let me know how your experience with Yafaray goes.

The Venceslas Sword


Here’s the final Cycles render which, at 500 samples with 3 passes, took around 20 minutes on my laptop.


The original image that I was working from.

I decided that this next year I'm going to get serious about my 3D art. (I start my new year's resolutions with Advent, the start of the liturgical year for Catholics.) So I pulled Blender 3D back out of the closet on my computer and started dusting it off. I grabbed the first image out of my workout bin that caught my eye and decided to get to work. I wanted a simple model that I could practice some really advanced materials and compositing on, and the Venceslas Sword that I downloaded a while back was an excellent candidate.

First thing I realized in modelling this blade is that it isn't straight. It looks awfully straight, but it isn't. I already knew that imperfections make for good art, but I didn't realize that even tiny variations like this would produce such a magnificent result.

The modelling itself was fairly straightforward; I could probably retopo some spots for future use, and likely will. Probably the most interesting thing here is the projection of my logo onto the scabbard: I have my logo saved as a plane mesh that I shrinkwrapped to the scabbard, then applied the Solidify and Subsurf modifiers.

The materials were some that I borrowed from El Brujo de la Tribu; if you are not familiar with his stuff, you should get over there. Not only does he have some amazing material nodes to use, but he does some great render tests and explanations for the nerdier artist using Blender. (That'd be me.)

For the blade, I used the Gold Material node he has set up with the following settings:
and then set up the colors for some blue-ish steel.

I used a bronze node setup like this:


Both of these are group nodes, and in the case of the gold material, the only other thing affecting it is a bump map from an image texture to give it some nicks (those will probably show up better when it isn't fully reflecting the lamp pointed at it; I couldn't for the life of me figure out why the blade was turning black sometimes until the render, when I realized there was nothing for it to reflect... >facepalm<). The bronze material also has an anisotropic shader mixed with this second group node, but I'm not sure how much effect I'm getting out of it in the final render.

The blue section has a leather look to it that is done entirely with texture editing, a color ramp, and some creative mixing of the gloss:

The nodes look a lot more complicated than they are. Long story short, for realism you can't beat a well-structured node tree, so play around with them often.

The final trick to any render is the compositor window. If you are not judiciously running separate passes, acquiring the Z pass, and separating out things like the glossy direct pass for your renders, you are missing out on about 3/4 of the power of Blender. Here's my composite tree, which is *very* simple, despite what it looks like:

Fully 1/3 of that is just trying to get the glossy direct nodes to pull out alphas so that I could mix them in as a glow without dulling the rest of the image.

So there you have it, the Venceslas Sword (minus the awesome cross cutout) in a pretty decent render. Let me know if you have any questions in the comments, or connect with me on Google+.