If you didn’t know, Minecraft has a built-in lightmapper which it uses to decide light levels. It’s very simple and just uses flood fill, which is why light sources form rhombus shapes, but that’s fitting for a pixelated block game.
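For context, the flood fill works roughly like this: light spreads outward and drops by 1 level per block, so equal brightness sits at equal Manhattan distance, which is exactly where the rhombus shape comes from. A minimal sketch (all names made up, not the actual game code):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal flood-fill light propagation sketch (hypothetical names).
// Light attenuates by 1 per block, producing the rhombus falloff.
public class FloodFillLight {
    static final int SIZE = 16;
    final int[][][] light = new int[SIZE][SIZE][SIZE];
    final boolean[][][] opaque = new boolean[SIZE][SIZE][SIZE];

    void propagate(int sx, int sy, int sz, int emission) {
        Queue<int[]> queue = new ArrayDeque<>();
        light[sx][sy][sz] = emission;
        queue.add(new int[]{sx, sy, sz});
        int[][] dirs = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
        while (!queue.isEmpty()) {
            int[] p = queue.poll();
            int next = light[p[0]][p[1]][p[2]] - 1; // lose 1 level per block
            if (next <= 0) continue;
            for (int[] d : dirs) {
                int x = p[0] + d[0], y = p[1] + d[1], z = p[2] + d[2];
                if (x < 0 || y < 0 || z < 0 || x >= SIZE || y >= SIZE || z >= SIZE) continue;
                if (opaque[x][y][z] || light[x][y][z] >= next) continue;
                light[x][y][z] = next; // only overwrite if this path is brighter
                queue.add(new int[]{x, y, z});
            }
        }
    }
}
```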
There’s Minecraft RTX, which bypasses the lightmapper entirely and uses very expensive real-time ray tracing, although at least it renders at a lower resolution and upscales the result (DLSS), and it requires a specific type of graphics card.
But why not improve the built-in lightmapper? It should be possible to:
- Increase the lightmap resolution to 1/4 block instead of 1 block
- Use RGB instead of grayscale
- Give each block a radiance color based on the average color of its texture (first sketch after this list)
- For lighting, do the following (second sketch after this list):
  - Calculate direct sun exposure with a simple raycast and apply the sun color
  - Calculate sky exposure and apply the sky color
  - Calculate direct light sources and apply each light source’s color
  - Do light bounces using the blocks’ radiance colors and apply the resulting color
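Getting the radiance color is just a pixel average over the block’s texture. A minimal sketch, assuming the texture is loaded as a `BufferedImage` (the class name `BlockRadiance` is made up):

```java
import java.awt.image.BufferedImage;

// Hypothetical sketch: a block's bounce/radiance color is the
// average of its texture's pixels, normalized to 0..1 RGB.
public class BlockRadiance {
    static float[] averageColor(BufferedImage texture) {
        long r = 0, g = 0, b = 0;
        int w = texture.getWidth(), h = texture.getHeight();
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int argb = texture.getRGB(x, y);
                r += (argb >> 16) & 0xFF;
                g += (argb >> 8) & 0xFF;
                b += argb & 0xFF;
            }
        }
        float n = (float) (w * h) * 255f;
        return new float[]{ r / n, g / n, b / n };
    }
}
```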
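And the lighting passes themselves could be structured something like this. Everything here (`World`, `Vec3`, `Light`, the falloff curve) is a hypothetical stand-in, not a real Minecraft API; it’s only meant to show the shape of the four passes:

```java
import java.util.List;

// Sketch of the four proposed passes for one lightmap cell.
// All types and methods here are made up for illustration.
public class BakedLightmapper {

    record Vec3(double x, double y, double z) {}
    record Light(Vec3 pos, float[] color) {}

    interface World {
        boolean blocked(Vec3 from, Vec3 dir);  // does a ray hit geometry?
        float skyVisibility(Vec3 cell);        // 0..1 fraction of rays reaching sky
        float[] gatherBounce(Vec3 cell);       // bounce light, tinted by block radiance
        List<Light> nearbyLights(Vec3 cell);
    }

    float[] bake(World world, Vec3 cell, Vec3 sunDir,
                 float[] sunColor, float[] skyColor) {
        float[] rgb = new float[3];
        // 1. Direct sun exposure: one simple raycast toward the sun
        if (!world.blocked(cell, sunDir)) add(rgb, sunColor, 1f);
        // 2. Sky exposure, scaled by how much sky the cell can see
        add(rgb, skyColor, world.skyVisibility(cell));
        // 3. Direct light sources: a ray per nearby emitter, with distance falloff
        for (Light l : world.nearbyLights(cell)) {
            Vec3 dir = new Vec3(l.pos().x() - cell.x(),
                                l.pos().y() - cell.y(),
                                l.pos().z() - cell.z());
            if (!world.blocked(cell, dir)) {
                double d2 = dir.x() * dir.x() + dir.y() * dir.y() + dir.z() * dir.z();
                add(rgb, l.color(), (float) (1.0 / (1.0 + d2))); // illustrative falloff
            }
        }
        // 4. Light bounces, already colored by the blocks' radiance
        add(rgb, world.gatherBounce(cell), 1f);
        return rgb;
    }

    static void add(float[] rgb, float[] c, float w) {
        for (int i = 0; i < 3; i++) rgb[i] += c[i] * w;
    }
}
```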
There are also a lot of ways to optimize:
- Asynchronously calculate the closest chunks first (sketched after the async note below)
- Only use the current chunk and its straight and diagonal neighbor chunks
- Recycle sky exposure, useful when only the sun is moving
- Only update sky lighting asynchronously every 2 seconds and blend in between (sketched at the end of the post)
- After the lighting is baked, rendering is MUCH faster since it’s just coloring geometry with the stored data
Async - doing the calculation over a period of time instead of all at once, reducing lag spikes
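Put together, the async part could look like this: a queue sorted by distance to the player, drained a little each frame under a time budget so no single frame takes the full hit. All names are hypothetical:

```java
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.function.Consumer;

// Sketch of async relighting: closest chunks first, with a
// per-frame time budget instead of one big lag spike.
public class AsyncRelightScheduler {

    record ChunkPos(int x, int z) {
        long distSq(ChunkPos o) {
            long dx = x - o.x, dz = z - o.z;
            return dx * dx + dz * dz;
        }
    }

    private final PriorityQueue<ChunkPos> queue;

    AsyncRelightScheduler(ChunkPos player) {
        // closest chunks get relit first
        queue = new PriorityQueue<>(
            Comparator.comparingLong((ChunkPos c) -> c.distSq(player)));
    }

    void enqueue(ChunkPos chunk) { queue.add(chunk); }

    // Called once per frame; stops when the frame's budget is spent.
    void tick(long budgetNanos, Consumer<ChunkPos> relight) {
        long start = System.nanoTime();
        while (!queue.isEmpty() && System.nanoTime() - start < budgetNanos) {
            relight.accept(queue.poll());
        }
    }
}
```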
It would be much faster, would still technically be ray tracing, and would achieve similar lighting, making it a better option than real-time ray tracing. Thoughts?
You don’t really need real-time ray tracing. A big advantage is that Minecraft is cube-based and the blocks cannot move, which means you can generate the lighting once and it would have almost zero runtime performance cost, while still utilizing a bit of ray tracing, just differently. The only time you need to regenerate it is when the sun moves, in which case you can still recycle the sky exposure and possibly the light bounces, and only update every 2 seconds instead of every frame.
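A minimal sketch of that recycle-and-blend idea, with made-up names: the sky pass is rebaked on a 2-second timer by the async worker, and the renderer just lerps between the old and new bake in between, so the moving sun never triggers per-frame relighting:

```java
// Sketch: blend between the previous and latest sky-light bake.
// Assumes lightmaps are flattened float RGB arrays of equal length.
public class SkyLightBlender {
    static final float UPDATE_INTERVAL = 2.0f; // seconds between sky bakes

    float[] previous;       // last baked sky lightmap
    float[] next;           // freshly baked sky lightmap
    float timeSinceBake = 0f;

    // Called per frame with the frame's delta time; returns blended values.
    float[] sample(float dt) {
        timeSinceBake += dt;
        float t = Math.min(timeSinceBake / UPDATE_INTERVAL, 1f);
        float[] out = new float[previous.length];
        for (int i = 0; i < out.length; i++)
            out[i] = previous[i] + (next[i] - previous[i]) * t; // lerp
        return out;
    }

    // Called by the async baker whenever a new sky pass finishes.
    void onBakeFinished(float[] fresh) {
        previous = next != null ? next : fresh;
        next = fresh;
        timeSinceBake = 0f;
    }
}
```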