Denoising in Blender 2.81 – Intel’s ML Compositing Node vs Cycles vs NVIDIA D-Noise


– [Instructor] Blender 2.81 makes denoising your renders dead simple, but there are a couple of dos and don'ts I want to go over here that are going to be really important. I'm Jonathan Lampel from cgcookie.com, and today I'll take you through all of the questions you may or may not have had about this new feature. 2.81 is not out yet (I'm using the alpha version), so if it's not out by the time you watch this, you can just go to blender.org/experimental. Here I have a very noisy image that I've rendered out with 50 samples, and it's pretty complex: it's got some transparent materials, some caustics, a lot going on. So let's just press Shift + A, go to Filter, and add a Denoise node. You just need to throw it on there, wait a second, and you're good to go. It's basically magic. Obviously this still looks a little bit splotchy, so to get a better result you'd still want to increase your sample count, but now instead of having to render out 5,000 samples, maybe you'd only have to render out 700. That time saving is huge.
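By the way, if you'd rather set this up from a script instead of the Add menu, here's roughly what it looks like with Blender's Python API. This is just a minimal sketch that assumes the default "Render Layers" and "Composite" node names in the compositor.

import bpy

scene = bpy.context.scene
scene.use_nodes = True  # make sure the compositing node tree exists
tree = scene.node_tree

# Grab the default Render Layers and Composite nodes (or create them)
render_layers = tree.nodes.get("Render Layers") or tree.nodes.new("CompositorNodeRLayers")
composite = tree.nodes.get("Composite") or tree.nodes.new("CompositorNodeComposite")

# Add the new Denoise node (Add > Filter > Denoise)
denoise = tree.nodes.new("CompositorNodeDenoise")
denoise.use_hdr = True  # the HDR toggle covered next

# Wire it in: Render Layers -> Denoise -> Composite
tree.links.new(render_layers.outputs["Image"], denoise.inputs["Image"])
tree.links.new(denoise.outputs["Image"], composite.inputs["Image"])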
The way to get an even better result with the same number of samples is to plug more information into the Denoise node. First of all, the HDR option just allows the node to process the image in high dynamic range; if we turn it off, you can see that it kind of crushes the bright areas a little bit and doesn't look as accurate. So I recommend leaving that on basically all the time.
For the normal, we're going to need a Normal output from this render layer. So let's go to the View Layer properties in the Properties editor, which I've docked to the left side over here. There's a Normal checkbox under Passes > Data, and to get the albedo we could look under Light > Diffuse > Color. However, there's also another checkbox called Denoising Data, and if we turn that on, you can see that we get a lot more information, including Denoising Normal and Denoising Albedo. So let's try both and see how they differ.
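If you prefer flipping these passes on from Python, these are the properties involved. I'm going from the 2.81 API here, so treat the exact names as a sketch and double-check the tooltips in your build.

import bpy

view_layer = bpy.context.view_layer

# Passes > Data > Normal
view_layer.use_pass_normal = True

# Passes > Light > Diffuse > Color (the noise-free diffuse albedo)
view_layer.use_pass_diffuse_color = True

# The Denoising Data checkbox lives on the view layer's Cycles settings and adds
# the Denoising Normal / Denoising Albedo outputs to the Render Layers node
view_layer.cycles.denoising_store_passes = True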
First, I'm going to take this Denoise node, hit Shift + D, and duplicate it down. For this one I'll use the image input just as usual, but I'll also plug the Normal pass into the Normal input and the Diffuse Color pass into the Albedo input. Let's take a look at what the normal looks like. You can see it's completely smooth, so there's no noise here; it's just giving us the direction the faces are pointing, which is going to be really helpful for crisping up the edges. If we look at the diffuse color, you can see that it gives us just the raw color of the diffuse materials, and everything else (the glossy shaders, the emission shaders, basically everything that's not diffuse) is rendered as black. For whatever is diffuse, though, there's no light bouncing around here, just the flat surface color, so there's no noise whatsoever; it's completely smooth. And that helps give the denoiser the texture information.
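Hooking those passes up from a script looks like this. It's a sketch that reuses the tree, render_layers, and denoise variables from the snippet earlier, and the socket names are simply the labels you see on the Render Layers node, so they'll only exist once the passes above are enabled.

# Regular passes into the Denoise node
tree.links.new(render_layers.outputs["Normal"], denoise.inputs["Normal"])
tree.links.new(render_layers.outputs["DiffCol"], denoise.inputs["Albedo"])

# The Denoising Data variant we'll try in a moment uses these sockets instead:
# tree.links.new(render_layers.outputs["Denoising Normal"], denoise.inputs["Normal"])
# tree.links.new(render_layers.outputs["Denoising Albedo"], denoise.inputs["Albedo"])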
So if we look at the Denoise node now, you can see that it already looks a lot better. If we flip between the two, you may start to notice some differences, though maybe not in the video until you really zoom in. First, let's look at the right-hand corner where we have the wood of the table and the cover of the sketchbook. If we flip between the two, the first thing you'll notice is that the texture looks much better in the second one, thanks to that diffuse color information. Before, it was kind of blurred out, but now it looks great. Another thing that's really helpful here is the normal. If we look at the denoise without the normal, you can see that it has blurred some areas, including the edging on this computer and part of the lamp on the side. But as soon as we plug in that normal, telling the denoising algorithm which way the faces are pointing and where their edges are, we get a way better result. So this already looks quite good, and for a really noisy image rendered at only 50 samples, it's very impressive.
But we can actually get an even better result by using the denoising data. So let's hit Shift + D and do this one more time, but now using the Denoising Normal and the Denoising Albedo. Let's plug in the image here and look at the difference between the regular normal and the denoising normal. In the regular normal pass everything looks completely solid, but if we go down to the denoising normal, the transparent materials are actually transparent, and it's even taking the refraction into account on this glass. You can even see the normals of the pencils and pens inside of this cup, which is really, really cool. You can also look at the reflective surfaces and notice that there's a lot more information here, but because it actually needs to use those traced paths to calculate the refraction and such, it's going to be a little bit more noisy. Similarly, the denoising albedo doesn't just include the diffuse passes; it also includes the color for all the other passes, including the glossy passes, the emission passes, all of that stuff. But again, because it has to account for transparency and even refraction, it's going to be a little bit noisy, and you can see here that we have some artifacts around this glass. So that's a little bit less than ideal.
Let's look at the difference between the two. Here's the first denoise, which uses the regular normal, and it's kind of blurring and smearing what's inside of this glass. If we look at the second version, everything is now much sharper, but we do get some artifacts on the glass, and those would go away if we used more samples. So depending on what your needs are, you might want to use one over the other. What's also cool about the denoising normal is that it finds the normals inside of reflections. If we zoom in on the bottom of the lamp here, you can see that it's reflecting part of the arm. In the first version that reflection is blurred out, but because we're actually getting the reflected normal here, the second version looks much more accurate. Again, that could end up being a little bit more noisy, but in complex scenes it's going to look a lot more crisp. So it all depends on what you want more: accuracy or smoothness? It's up to you.
Now, those are all the settings that this Denoise node has, and for just that, it works extremely well. I'm going to compare it with the old Cycles denoiser and also the NVIDIA denoiser, but first I'd like to take a look at a couple of other examples. The first thing I wondered when I was using this was whether it works with photos, since it works so well for rendered images. What I have here is a very noisy photo; I'll just drag and drop it in. If we look at it, you can see it's a picture I took that turned out to be kind of a cool image, but it was way too dark and I didn't have my settings right, so it just came out very noisy and isn't really something I could use. Let's zoom in on some of this noise and try plugging in the Denoise filter. It's going to take a second because the image is pretty big. Once it's done doing its thing, you probably won't notice any difference, and that's because it hasn't removed any of the noise whatsoever.
The reason this Denoise node does not work on photos is that it's a neural network trained specifically to target ray-traced noise patterns. Ray tracing produces a pretty predictable kind of noise, so the network can focus in on it very specifically, which is why it can usually get really good results even with fairly low sample counts. But when the noise is a different type of pattern, for example the pattern that comes out of a camera sensor, it simply doesn't work at all. So unfortunately, this is only going to work with rendered images.
That raises the question, though: what about other render engines? In this case I'm thinking about Eevee, so I have a scene set up here with an image that's been rendered out from Eevee. If we rotate around the scene, you can see that although we don't really think of Eevee as being noisy, in some situations it can be: for reflections if we don't have the sample count turned up high enough, or for hashed shadows and hashed transparency. You can see that this is actually quite noisy. Now, this could be fixed by turning the sample count up really, really high, but even at a render sample count of 32, which is double the viewport samples, it's still very noisy, and we'd have to turn it up extremely high to get a completely smooth result.
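For reference, those Eevee sample counts can also be set from Python; this is just a sketch of the two properties involved, using the numbers from this example.

import bpy

eevee = bpy.context.scene.eevee
eevee.taa_samples = 16          # viewport samples
eevee.taa_render_samples = 32   # render samples: double the viewport, and still noisy here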
So why not try denoising it? Let's hit Shift + A, Filter, Denoise, and plug that right in. Once it does its thing, you can see that the result actually looks very successfully denoised, but only for the hashed transparency. The hashed shadow below it actually comes out kind of jagged, because the noise is a different pattern once it's been stretched out across the top of the cube, so it's not going to work for that. It has worked for the transparent area of the monkey, but it doesn't work for the reflections, because again, that's a different type of pattern. So the answer to whether it works with Eevee or not is: kind of, and only in certain situations.
Now, you might think we could get a better result by plugging in the normal, but if we do that, you can see that the image goes roughly back to how it was. There are some areas that are smoothed out a little bit, but overall it looks about as if we hadn't denoised it at all. That's because, if we look at the normal pass, it's calculated through that hashed transparency, so even the normal itself is very noisy and isn't really any help. We also don't have an albedo pass to work with in Eevee, so we're going to get very blurred textures as well.
Another thing you might be wondering: since this node was implemented by Intel, does it only work on Intel systems? I can tell you that no, it actually works on anything that can run Blender. I don't have anything Intel in my computer; I have an AMD Threadripper processor, and everything else is non-Intel too. So while Intel were the people who implemented it, they made sure it works for everybody, and that's a little bit different from the NVIDIA denoising that we're going to look at in a bit.
Now let's check out denoising animations. When I play this back really slowly, you can see that each individual frame looks great by itself. But when I speed it up, you can definitely tell that there's a lot of jittering going on, especially in complex areas like the glass and the table. That's because the node only has one single frame to work with: it can't see the noise patterns in the frames before or after it, so it has no way of blending between them. Increasing the sample count definitely helps a lot, but even at 500 samples there's still some jitter going on. Throw a thousand samples at it and it's almost gone, but not quite. Even so, it's still way better than not denoising at all. So it works; you're just going to need a lot more samples than you would with a still image. One thing I tried was turning on the animated seed property under the sampling settings, and that definitely made it worse, so leave that off. I also tried changing the sampling pattern from Sobol to Correlated Multi-Jitter, and interestingly enough, that had no effect at all.
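If you want to poke at those sampling settings from a script, this is roughly where they live. I'm assuming the toggle I mentioned is Cycles' animated seed option, and the sampling pattern identifier is spelled the way the 2.8x API exposes it, so take this as a sketch.

import bpy

cycles = bpy.context.scene.cycles

cycles.samples = 1000             # animations need far more samples than stills
cycles.use_animated_seed = False  # re-seeding every frame made the jitter worse, so leave it off
# Switching the sampling pattern made no visible difference either way
cycles.sampling_pattern = 'CORRELATED_MUTI_JITTER'  # note: "MUTI" is how the API spells it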
I wouldn't go too crazy testing all of this yet, though, as much better animation denoising is on the to-do list of Attila Afra, the Intel dev who implemented the node in the first place. He even used an exclamation mark on Twitter, so you know he means business. I tried to stump it by introducing some motion blur, but that actually works great too, thanks to the fact that the albedo and normal passes are blurred as well. Depth of field? No problem. What about removing fireflies while keeping small, bright specular highlights? I'm not sure how that works, but it looks good. It's a lifesaver for indoor scenes with a lot of bounce lighting, and it even works with hair. I'm honestly having a hard time finding an example of where it doesn't work well. Just because something uses machine learning doesn't always mean it's better.
But in this case, it absolutely crushes the old Cycles denoiser. Not only does it look better in every situation, it's also significantly faster. NVIDIA also has a machine-learning-based denoiser, and you can use it via the D-NOISE add-on from Remington Graphics. Intel's implementation in Blender beats that too, both in terms of quality and speed. You can see this most clearly at low sample counts, where Intel just does a better job of preserving the textures and normals. It also takes into account the fact that more samples introduce more light, while NVIDIA's result remains incorrectly dark. Plus, it works on every computer, not just NVIDIA graphics cards.
I don't want to give NVIDIA too much of a hard time, though, because they're working with Blender to implement OptiX into Cycles itself, which is a lot more than just denoising. It's actually a whole back-end ray tracing architecture which, once implemented in Cycles, will give anybody with an RTX card the ability to use those special cores to really speed things up. It may also pave the way for real-time viewport denoising, which I'm super excited about. I have a little section about NVIDIA RTX and real-time ray tracing in my blog post about Cycles and Eevee, so if you're curious about that, you can check the link below. I'd also recommend checking out Andrew Price's recent video on denoising, because it shows Neat Video in action, a plugin for After Effects and Premiere Pro that does a really good job of denoising video; I use it all the time for both animation and film. If you've already given denoising a shot, let me know what you think in the comments. If you learned something helpful or just found this video interesting, click like and subscribe, and I'll see you in the next one.
