My favorite shadow fact is that outdoor shadows are blue.
It's not an optical illusion or artistic vibe or anything. The sky is blue, shadows on a clear day are illuminated by bounced light from the sky, therefore shadows are blue.
If you look underneath cars you can see it: a sharp blue shadow where the sky is visible, fading to true black where the car's body occludes light from the sky.
If you combine this sharp blue sun shadow with a soft and black "AO" sky shadow you can get very pretty shadows for cheap.
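Not from the comment above, just a minimal sketch of that idea in Python (colors and values are made up for illustration): the final color is albedo times a warm sun term gated by a hard shadow, plus a blue sky term gated by a soft AO factor.

    SUN_COLOR = (1.00, 0.95, 0.85)   # warm, slightly yellow sunlight
    SKY_COLOR = (0.25, 0.35, 0.60)   # cool blue skylight

    def shade(albedo, sun_visibility, sky_visibility):
        """sun_visibility is a hard 0/1 shadow term; sky_visibility is a soft 0..1 AO term."""
        out = []
        for a, sun, sky in zip(albedo, SUN_COLOR, SKY_COLOR):
            direct = sun * sun_visibility    # sharp sun shadow
            ambient = sky * sky_visibility   # soft, blue "sky shadow" (AO)
            out.append(a * (direct + ambient))
        return tuple(out)

    print(shade((0.5, 0.5, 0.5), 1.0, 1.0))  # open ground: warm plus blue
    print(shade((0.5, 0.5, 0.5), 0.0, 1.0))  # in the sun shadow, open sky: blue tint
    print(shade((0.5, 0.5, 0.5), 0.0, 0.1))  # under the car: nearly black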
It's true, and so subtle that a lot of people don't even notice it.
But a good graphics rendering engine will do it. Shadows should carry a slight tint from the color of the sky.
Which is why some old screenshots of No Man's Sky bothered me. Pretty sure I saw scenes where shadows were purple despite a green sky.
This is how most graphics engines work by default. Shadows are ambient light only and outdoors ambient light will get a blue tint from cubemaps, SH, GI or whatever other technique is used.
> But a good graphics rendering engine will do it.
Correlation/Causation: lots of things in graphics rendering work because of observed phenomena. A “good graphics engine” is only as good as the eyes that implemented it. Today's engines still fall short of what is in front of us, and not because of a technical limitation.
> Pretty sure I saw scenes where shadows were purple despite a green sky.
If it’s a stylistic effect then it’s a stylistic effect. But otherwise, purple shadows are literally everywhere. Shadows can have an immense amount of chroma and vibrancy to them, or they can be incredibly cool and muted. It all depends on the context.
> Correlation/Causation: lots of things in graphics rendering work because of observed phenomena.
As we are moving more and more towards physics-based rendering, engines are shifting from imitating what's observed to properly defining the world and its interactions and getting realism as a byproduct.
The traditional (cheap) way of lighting stuff is to model ambient lighting as some constant, regardless of where the light actually comes from, and render shadows as dark patches. It is not at all how physics works, so you need an artist with a good sense of observation to make it look right. The artist may suggest some blue tint for the shadows because it looks right.
The physical approach is to start with just the sun, no ambient light, and simulate Rayleigh scattering, which will naturally give you a blue sky and blueish shadows. An artist can still be involved, but their job will be more about stylistic choices and evaluating corner-cutting techniques.
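For the Rayleigh part, a tiny back-of-the-envelope sketch (wavelengths are approximate, and this is nothing like a full sky model): scattering strength goes roughly as 1/wavelength^4, which is why the scattered sky light, and hence skylit shadows, skew blue.

    # Approximate wavelengths; illustrative only, not a sky model.
    wavelengths_nm = {"red": 650.0, "green": 550.0, "blue": 450.0}

    reference = wavelengths_nm["red"] ** 4
    for name, lam in wavelengths_nm.items():
        relative = reference / lam ** 4   # Rayleigh scattering relative to red
        print(f"{name}: {relative:.2f}x the scattering of red light")
    # blue scatters ~4.4x more than red, so scattered sky light skews blue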
> As we are moving more and more towards physics-based rendering, engines are shifting from imitating what's observed to properly defining the world and its interactions and getting realism as a byproduct.
Sigh. Anyone who’s done physically based rendering will tell you it’s all bullshit. Even Maxwell’s equations are a special case of a much broader set of interactions. But a few decades ago you’d think that’s all there is. You don’t get realism as a byproduct, you get “looks real, ship it” as a byproduct.
It’s approximations. And that’s perfectly fine.
Artists are often taught to paint outdoor shadows a bit purple, for maximum contrast with the yellow tint of sunlight.
Just saw this on Monet’s winter paintings: https://news.artnet.com/art-world/monet-the-magpie-three-fac...
I like the variant used in modern Nintendo platformers - they use shadow maps like basically everything else nowadays, but the player character's shadow is rigged to always be cast straight down regardless of where the actual light sources are. That helps the player gauge where they're going to land after a jump, like a classic blob shadow, but with the visual fidelity of a proper shadow.
IIRC in dark environments they also rig the shadow to be brighter than the ground to make sure it remains visible.
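A rough sketch of how such a rig might work (my own guess, not Nintendo's actual implementation): cast a ray straight down from the character to the ground and place the shadow there, ignoring the real light direction.

    def straight_down_shadow(character_pos, ground_height_at):
        """Return where to draw the projected shadow for a character."""
        x, _height, z = character_pos
        ground_y = ground_height_at(x, z)   # vertical "raycast" onto the level
        return (x, ground_y + 0.01, z)      # tiny offset to avoid z-fighting

    def flat_ground(x, z):
        return 0.0

    # Character mid-jump at height 3.2: the shadow still sits directly below.
    print(straight_down_shadow((4.0, 3.2, 7.0), flat_ground))  # (4.0, 0.01, 7.0)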
I've always thought Valorant looked like crap because the players don't cast shadows. Just found out the other day that it's because other player model positions aren't sent to the client until the server determines they are visible to reduce cheating. This would cause shadows to appear and disappear as the player model was loaded and unloaded so you would never see a shadow unless you could also see the body of the player.
I think you read my post about that. I don't know if they specifically disabled player shadows because of the anti-cheat system, but it certainly makes the anti-cheat simpler. Valorant intentionally has extremely bare-bones rendering to allow for very high framerates on modest hardware: aside from no dynamic shadows, it has no dynamic lighting whatsoever, and no alpha transparency anywhere. Things like smoke that would usually be transparent are stylized in ways that let them be drawn fully opaque because it's faster. Regardless of the cheating situation they probably would have skipped dynamic shadows anyway, for performance.
Spot on: "We have cast shadows only on objects in first person, like your own hands and weapon. This is to avoid cast shadows being a competitive factor in both effect (knowing where your cast shadow is) and performance (low-spec machines can’t reasonably run well with cast shadows)."
https://technology.riotgames.com/news/valorant-shaders-and-g...
interesting decision - makes sense for performance but shadows giving away a player's position in a competitive fps adds depth (and fun!) imo. satisfying to read a bad approach and also to play around if you notice your shadow's about to give you away.
I agree it lessens the depth though I think it generally aligns with Valorant flattening out some of the complexity of CS to free up overhead for hero abilities and ultimates. Each round has as many as 40 abilities to think about across 10 players and many maps add unique mechanics on top of that (e.g. teleporters). Also in CS because of a fixed sun position the parts of the map where shadows are relevant never change and become mostly rote memorization which is another thing I think Riot tried to minimize somewhat (e.g. Brimstone and Omen smokes not requiring memorizing dozens of lineups across the map pool).
for sure, it's a totally valid design decision either way. also, good point about the fixed sun in cs. i've had a lot of fun with the shadows in r6 siege - since a lot of the combat takes place indoors where you can really reshape the level on the fly, it doesn't feel as static
it's so people using cheats do not even receive the data
But it's not, as we see in the posts above. CS has shadows and a fog-of-war anticheat implementation; Riot could have managed it if they really wanted shadows.
Thanks for the clarification, that was the post I read. I think it would look better if they did a cheap shadow like N64 Zelda's, which starts from the feet, instead of the centered circle that falls off to transparent before it reaches the feet. But I guess you run into problems when the shadow extends past the player model sometimes.
Link to your post for the curious?
https://news.ycombinator.com/item?id=41937205
Surely fixable by the server checking for visibility of the shadow as well? I don’t know Valorant but I assume that could actually become a strategic element to not hide with a light source at your back.
Positioning is already a huge part of the game, and every wall, corner and box to duck behind has a strategic purpose that one must be aware of, just not based on lighting but geometry and game strategy. So adding a whole other mechanic to it wouldn't be interesting.
Great post. There are lots of nostalgic game references here. I still remember being blown away by the shadows in the N64 Zelda many years ago.
I expect area lights and soft shadows to become the norm as ray-traced techniques are adopted. If you have the hardware, it's worth checking out Quake 2 RTX to see what the future might look like.
Lastly, I've added your blog to my growing list of graphics resources: https://github.com/aaron9000/c-game-resources
Honestly if they can get good enough shadows with smoke and mirror trickery i'd prefer they stick to smoke and mirrors for performance reasons.
Agreed. Even with a top-end GPU I almost always turn off RT features. I expect we will continue to see hybrid RT approaches for the foreseeable future.
>Possibly the earliest shipping game with stencil shadows is Severance: Blade of Darkness from 2001 whose shadows look great.
Revolte, for the PowerVR PCX1, had stencil shadows in 1996.
https://www.youtube.com/watch?v=7BvtML5dIuI
The PowerVR PCX1 had hardware support for shadow volumes, which were implemented more efficiently than standard stencil shadows. Rather than drawing the scene multiple times, it basically did a depth-only pre-pass (in hardware, to an on-chip depth/stencil buffer) to determine visible pixels and test the shadow volumes to determine which pixels are in shadow, then it performed texture sampling and shading afterwards, with lighting brightness adjusted by the shadow volume results. It would only shade visible pixels, so overdraw would not waste bandwidth on unnecessary texture fetches.
The Dreamcast, based on the successor to the PCX1, also had many games with shadow volumes. The Dreamcast's implementation was more flexible, and its volumes could adjust more than lighting, such as what texture is used, UV mapping, or even what blending equation is used for transparent polygons.
I've managed to get soft shadows on the DC (https://imgur.com/a/DyaqzZD at the end), although it's pretty fill rate heavy, since it falls back to a more standard stencil method and redraws the shadow multiple times.
For an example of the state of the art in videogame lighting, check out Epic’s recent UE 5.5 MegaLights demo: https://youtu.be/p9XgF3ijVRQ?si=GcU0kP33iKQh_5Ge
Shadows do get "darker" when they overlap and there's more than a single light source.
3 people illuminated by 2 lamps will project 6 shadows. Where the 6 shadows all overlap, that will be "black" (or only picking up ambient light). In other places where fewer shadows overlap, you will get a gradient of illumination.
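A toy calculation of that claim (numbers made up): add up the contribution of every lamp whose direct path is unblocked, plus whatever ambient light is around; only the region blocked from both lamps drops all the way down to ambient.

    AMBIENT = 0.05  # residual bounce light, whatever is left in the scene

    def illumination(blocked_lights):
        """blocked_lights: set of lamp indices whose direct path is occluded."""
        lamps = [0.5, 0.5]  # two lamps of equal intensity
        direct = sum(intensity for i, intensity in enumerate(lamps)
                     if i not in blocked_lights)
        return AMBIENT + direct

    print(illumination(set()))    # lit by both lamps        -> 1.05
    print(illumination({0}))      # in one lamp's shadow     -> 0.55
    print(illumination({0, 1}))   # blocked from both lamps  -> 0.05 (ambient only)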
that's not true, it will be black (or ambient light) when any two shadows, one from each light, intersect. that's because if it is in shadow from light A and in shadow from light B, where would it get any illumination from? only from the ambient light, or if there is none, it would be black.
No. Consider two objects, two light sources, and an ant.
An ant outside any shadow can see both light sources.
An ant in a non-overlapping shadow cast from one object will have one light source blocked out but the other light source will be visible to the ant.
An ant in overlapping shadows from two objects will have both light sources blocked out. (Geometrically in this case it is necessary the overlapping shadows be cast from two separated points each from distinct light sources)
When one light source is visible to the ant that area must be lighter than when no sources are visible. This is the scenario presented by the OP.
edit: Since people seem to not believe this you can find a representation in part E in this diagram
https://i.imgur.com/r6x6QPQ.jpeg
and a photograph of this nicely done with colored lights here
https://i.imgur.com/NUlywpb.jpeg
We aren’t questioning your logic, just your reading comprehension in this case :) OP was correctly describing the case where both lights are blocked from view.
Just to summarize, because someone is missing something:
otikik pointed out a case of multiple light sources where overlapping shadows will be darker (otikik is correct)
fregus says "that's not correct" and argues a true case (shadows from one light source will not be darker) which is a bad argument because it tries to overgeneralize. The case from fregus is for the same light source and they cannot use this to argue otikik is incorrect because otikik's argument explicitly requires multiple light sources.
I point out, responding to fregus, how otikik is correct and how you need to consider multiple sources as well as including examples and physical evidence.
You question my reading comprehension for some reason
> ... when any two shadows, one from each light, intersect. that's because if it is in shadow from light A and in shadow from light B ...
They actually reference two light sources TWICE in their comment: ("any two shadows, one from each light", "from light A and in shadow from light B"). Hence the question of reading comprehension.
> when any two shadows, one from each light
@fregus explicitly described a scene with two light sources. Thus the question about your reading comprehension.
Fregus's point is that otikik seems to suggest that the darkest shadow will be the combination of all 6 shadows; that's obviously wrong, any shadow blocked from both lights will be as dark as any other shadow blocked from both lights, no matter the amount of "overlapping" shadows. You then respond to Fregus by unnecessarily just explaining shadows again, hence the reading comprehension comment.
Let’s all write long explanations of one another’s long explanations, then the people who couldn’t comprehend the simple point in the first place will definitely understand
I'm not talking to "people", I'm writing a specific response to a specific person. Do try to keep up
That would be true if lights were simple point sources with simple straight ray behavior and no diffusion and no reflection but they’re not.
You are right. fregus leaves it at "ambient light" and calls it a day, but ambient light is just an empirical construct, not grounded in physics. In reality, light bounces around, and areas will appear darker the more the light paths otherwise converging on them are blocked by occluders.
I think "overlapping shadows get darker" is just not a very intuitive way to think about it because it disregards the stuff that actually matters, which is the light sources and how the scene may block their light paths.
For early games that could not afford advanced methods of global illumination, making overlapping shadows get darker seems like a reasonable, though not necessarily correct, way of faking it.
And the other thing ignored in these comments is perception. For the case of the two businessmen looking at their shadows under a bridge, it is easy to show with a diagram that some areas of overlapping shadows are, in fact, darker. But in such a poorly lit scene, people are likely to conclude that there is no difference, which isn't to say that there isn't one but, rather, that they don't perceive one.
Yes, definitely a big difference between reality and a pretty, fast facsimile of it!
Good observation, but the article is not wrong (and quite interesting)
"There’s just one light source, and relatively far away, so the shadow is simply an absence of light."
>> Some early flight simulators draw a top-down flat shadow when on a runway. During my research I expected to see examples where the shadow is also seen when in flight but couldn’t find any.
F-29 RETAL aka F29 Retaliator aka F29
That shadow was another small tidbit that gave this game its enormous feel of speed.
https://imgur.com/a/hOgxr7a
https://www.mobygames.com/game/6233/f29-retaliator/
For me the most impressive "shadow moment" happened while playing GTA IV. I had a quite beefy gaming PC, and the real-time shadows cast from the beams of the player vehicle were generally great.
The moment that's still stuck with me happened while stealing a car in a back alley at night. Right as my player character entered the car, a policeman came around the corner. He "saw" me stealing the car and pulled his gun right when the headlights of the car turned on and cast a huge shadow of the policeman in motion onto a nearby wall.
Great article, fun to read.
The shadow overlap in MGS is not completely incorrect as there's ambient light, scattering and other similar global illumination phenomena.
>Mirror’s Edge (2008, PC) is basically Lightmaps: The Game.
Lol, true. Impressive game at the time, and even nowadays.
There's nothing like baked global illumination if you want your game to look good forever. It was the first time I saw actual blue shadows in broad daylight... but from level geometry.
It was criminally underrated at the time; perhaps because it was shorter than some of its competitors.
One of my favourites is the shadows in the PS1 game Power Shovel (aka Power Diggerz), which were interesting as they had to be projected over uneven terrain. I guess planar shadows is the closest technique in the article. https://www.youtube.com/watch?v=j_c4ZgcLTuE
Interesting. It looks like each vertex in the "shadow model" is projected onto the ground individually, which means the shadows can "peel away" from the ground.
That's right. Compromises like that were essential on PS1. I don't think it detracts from the game in any real way, as you're concentrating on other things.
Clicked this because it sounded interesting and was surprised to see one of my favorite movies in the introduction!
edit: really nice and nostalgic read, I played almost all of the games mentioned.
Mirror's Edge is more of an example of lighting technique than shadow technique – key to the aesthetic of the game was lots and lots of baked radiosity; [1] like Quake/Source, but with a LOT more resolution (including normal mapping), many more bounces, area lights, etc. etc. etc. It was released in 2008 and runs on a potato, but real-time GI has only recently been able to match its look.
[1] https://www.slideshare.net/slideshow/henrikgdc09-compat/3128...
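A minimal sketch of the lightmap idea (a toy example of my own, not Mirror's Edge's actual data layout): baked lighting lives in a texture, is looked up per surface point at runtime, and multiplies the material color.

    def sample_lightmap(lightmap, u, v):
        """Nearest-neighbour lookup; real engines filter bilinearly."""
        h, w = len(lightmap), len(lightmap[0])
        x = min(int(u * w), w - 1)
        y = min(int(v * h), h - 1)
        return lightmap[y][x]

    def shade(albedo, lightmap, u, v):
        light = sample_lightmap(lightmap, u, v)   # precomputed bounce lighting
        return tuple(a * l for a, l in zip(albedo, light))

    baked = [
        [(1.0, 1.0, 1.0), (0.3, 0.35, 0.5)],   # a lit texel and a blue-ish shadowed texel
        [(0.9, 0.9, 0.9), (0.2, 0.25, 0.4)],
    ]
    print(shade((0.8, 0.8, 0.8), baked, 0.75, 0.25))  # lands in the shadowed texel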
Thanks, it was an interesting read. Could have been more technical, though.
I am toying with lighting a little voxel grid scene these days, targeting the RP2040 and a measly 160x120 px screen, and it's crazy how computationally and memory expensive this stuff is.
> Shadows do become darker when they overlap in Metal Gear Solid.
They should indeed get darker when there are multiple significant light sources, as in the Metal Gear Solid screenshot. This is because the addition of another obstruction (i.e. Solid Snake) causes more sources of light to be blocked.
I recently realized this when there was a heat-wave and I walked through a small patch of trees in the middle of the city.
The shadows of buildings were a pretty light color and walking through them wasn't changing the temperature noticeably. But between the trees almost all of the sky was blocked - so the diffused light wasn't getting there - and the shadow there was much darker, and it was significantly colder than every other part of the city.
So - shadows can get darker or lighter, even if there's just one light source and it's very far :).
So many shadowing techniques! Interesting how using ray tracing inherently makes rendering shadows a non issue.
I mean ray tracing is (probably? I'm no expert) the most physically accurate rendering method available, mapping closely to how light works in real life ('rays' of light bouncing around until they hit your retina, but then in reverse so you only simulate the rays that actually hit the camera instead of wasting every one that doesn't). But it's also the most expensive one.
We're at the point where we need new systems to represent material surfaces better, like leaves glowing from beneath when sun hits them from above, since they're thin enough for light to get through. Or imagine putting a flashlight on your skin and seeing your skin glow from the inside around it. Unfortunately that's a much more complicated scenario than large open rooms and solid flat walls.
Games do simulate subsurface scattering, it's been a staple since the PS4 generation. They currently fake it rather than brute forcing the light paths traveling inside the surface like offline renderers do, but it still works fairly well.
https://old.reddit.com/r/gaming/comments/4jc38z/til_in_uncha...
In offline rendering the sky is the limit when it comes to SSS quality, if you have enough compute to throw at it. It's essential for getting skin to look right.
https://x.com/HadiKarimi_Art/status/1730627284141216181
BRDF, BSSRDF.. all can simulate that and are part of the rendering equation (Kajiya) if you plug it in.
Why are the existing systems inadequate?
They don't understand or are not aware of the existing systems. UE for example has had a "two-sided" foliage shader model with normal mapping and subsurface scattering for ages.
Ray tracing algorithms are quite simple - it’s essentially just checking line intersections with geometry and doing bounce calculations - but a VAST number of rays are needed and a GPU that can do ray tracing needs to keep much more information about the scene geometry in memory. In older generations you wouldn’t store detailed collision information in GPU memory.
A GPU that can handle ray tracing, however, can do a lot of the techniques mentioned in the article (and others) more efficiently without doing what you’d consider full scene ray tracing, because the fundamental path tracing algorithms are very versatile.
Sorta, in classical single-bounce ray tracing you still need to cast explicit shadow rays from diffuse surfaces. Path tracing gives shadows for free, because it is simulating global illumination writ large: ambient occlusion, shadows, and bounce light are all special cases.
"Interesting how using ray tracing inherently makes rendering shadows a non issue."
Raytracing simulates real lighting. Shadows are, where no light is.
In traditional Whitted-style ray tracing, the expensive part is indeed figuring out where there is no light. You know where you are, you know where the light is, but is there something in between?
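A minimal sketch of that shadow-ray test (a toy sphere-only scene of my own): from the shaded point, trace a ray toward the light and check whether any geometry intersects it before the light does.

    import math

    def ray_hits_sphere(origin, direction, center, radius, max_t):
        """True if the (normalized) ray hits the sphere before reaching max_t."""
        oc = [o - c for o, c in zip(origin, center)]
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c           # a == 1 because direction is normalized
        if disc < 0.0:
            return False
        t = (-b - math.sqrt(disc)) / 2.0
        return 1e-4 < t < max_t          # small epsilon avoids self-shadowing acne

    def in_shadow(point, light_pos, spheres):
        to_light = [l - p for l, p in zip(light_pos, point)]
        dist = math.sqrt(sum(v * v for v in to_light))
        direction = [v / dist for v in to_light]
        return any(ray_hits_sphere(point, direction, c, r, dist) for c, r in spheres)

    occluders = [([0.0, 1.0, 0.0], 0.5)]   # one sphere sitting between point and light
    print(in_shadow([0.0, 0.0, 0.0], [0.0, 3.0, 0.0], occluders))  # True: sphere blocks it
    print(in_shadow([2.0, 0.0, 0.0], [0.0, 3.0, 0.0], occluders))  # False: clear path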
SketchUp uses stencil shadows! Although we have more modern options, it is part of the look.
A great article, but I can't believe they left Thief: The Dark Project off their list.
very fun read! familiar with a lot of these old school low tech approaches but somehow never came across the mdk/winter gold style of just painting the shadow first and character second with a fixed pov, haha
It is some piece of snow falling at the opposite angle from the shadow. Nothing has changed in the pictures of our times.