/r/raytracing
Ray tracing articles, competition entries etc.
A place for sharing news, personal projects, academic papers, videos and anything else about raytracing.
Feel free to ask questions, but please avoid tech-support style questions. This isn't the place to ask how to get RTX working in Minecraft. Try /r/pcgamingtechsupport or /r/minecraftrtx instead.
Also check out /r/vintagecgi for CGI from the '70s-'90s.
Ray tracing works fine in Star Wars Jedi: Survivor. Runs on RDNA3 + Zen4 with AMD Advancing AI technology.
There was a set of ray-tracing articles from someone (perhaps a university student at the time) who later moved to China and launched their own gaming company there.
The articles mentioned creation and processing of queues of rays.
There were at least two types of queues, each holding the rays of different kinds/levels-of-processing.
The background colour of the articles was brown(ish). There was an image representing one or both of the queues as a grid/table, and there was also a description of a step showing how a ray could be promoted from one queue to the next.
This wasn't hardware-based ray tracing, but software-only.
There was also an image of an object similar to sphereflake (though not as extensive or deeply recursive - just a large sphere surrounded by 4-5 smaller spheres).
Thank you.
I completed the first book (Ray Tracing in One Weekend) and am currently implementing Ray Tracing: The Next Week in Rust.
Somehow the Perlin texture is bugged and repeats in a weird way.
I searched for bugs in the renderer, the noise texture, and perlin.h from the book, but couldn't find the problem.
Rendered image:
Source code: Raytracing_In_One_Weekend
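Without seeing the Rust source this is only a guess, but a classic cause of Perlin noise repeating or mirroring in ports is computing the lattice indices with a truncating cast instead of floor(). A minimal sketch of the index math (shown in C++ to mirror the book; the remarks about the Rust side are assumptions about the port):

    #include <cmath>

    // Sketch: lattice index + fractional part for Perlin noise. The floor()
    // matters for negative coordinates: (int)(-0.3) == 0, but floor(-0.3) == -1,
    // and truncating mirrors/repeats the pattern around the origin.
    inline void lattice(double x, int& i, double& u) {
        i = static_cast<int>(std::floor(x));
        u = x - std::floor(x); // fractional part in [0,1)
    }

    // The gradient lookup then masks the (possibly negative) index:
    //   ranvec[perm_x[(i+di) & 255] ^ perm_y[(j+dj) & 255] ^ perm_z[(k+dk) & 255]]
    // In Rust, `p.x() as i32` truncates toward zero, and `% 256` keeps the
    // sign of a negative operand, while `& 255` behaves like the mask above.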
Enjoy the rest of the day with the red car 🥰 DXR inline ray tracing is used here in Cyberpunk 2077, and the engine is running fine.
RDNA3 optimized Path Tracing
Does anybody have a reference to an algorithm that can efficiently render glints? Specifically, let's say a point light illuminating eyeglasses. I know V-Ray can handle these cases well, but I couldn't reproduce this with PBRT, for example.
In my renderer, I have already implemented a Cook-Torrance dielectric and an Oren-Nayar diffuse, and used them as my top and bottom layers respectively (to try to make a glossy diffuse, with glass on top).
// Structure courtesy of 14.3.2, pbrt
BSDFSample sample(const ray& r_in, HitInfo& rec, ray& scattered) const override {
    HitInfo rec_manip = rec;
    BSDFSample absorbed; absorbed.scatter = false;
    // Sample BSDF at entrance interface to get initial direction w
    bool on_top = rec_manip.front_face;
    vec3 outward_normal = rec_manip.front_face ? rec_manip.normal : -rec_manip.normal;
    BSDFSample bs = on_top ? top->sample(r_in, rec_manip, scattered)
                           : bottom->sample(r_in, rec_manip, scattered);
    if (!bs.scatter) { return absorbed; }
    // Reflection off the entrance interface exits the layers immediately
    if (dot(rec_manip.normal, bs.scatter_direction) > 0) { return bs; }
    vec3 w = bs.scatter_direction;
    color f = bs.bsdf_value * fabs(dot(rec_manip.normal, bs.scatter_direction));
    float pdf = bs.pdf_value;
    for (int depth = 0; depth < termination; depth++) {
        // Follow random walk through layers to sample layered BSDF.
        // Possibly terminate layered BSDF sampling with Russian roulette.
        // NOTE: rrBeta must use the *running* pdf; dividing by the entrance
        // sample's pdf makes rrBeta shrink every bounce, so q -> 1 and
        // nearly every path gets absorbed (a likely cause of the darkening).
        float rrBeta = fmax(fmax(f.x(), f.y()), f.z()) / pdf;
        if (depth > 3 && rrBeta < 0.25) {
            float q = fmax(0.0f, 1 - rrBeta);
            if (random_double() < q) { return absorbed; } // absorb light
            // otherwise, account pdf for possibility of termination
            pdf *= 1 - q;
        }
        // Initialize new surface
        std::shared_ptr<material> layer = on_top ? bottom : top;
        // Sample layer BSDF to determine new path direction.
        // Assign into the existing bs; redeclaring "BSDFSample bs" here
        // would shadow the outer variable inside the loop body.
        ray r_new = ray(r_in.origin() - w, w, 0.0);
        bs = layer->sample(r_new, rec_manip, scattered);
        if (!bs.scatter) { return absorbed; }
        f = f * bs.bsdf_value;
        pdf = pdf * bs.pdf_value;
        w = bs.scatter_direction;
        // Return sample if path has left the layers
        if (bs.type == BSDF_TYPE::TRANSMISSION) {
            // dot() returns a float, so compare against 0 explicitly;
            // relying on the implicit bool conversion marks almost every
            // exit direction as SPECULAR.
            BSDF_TYPE flag = dot(outward_normal, w) > 0 ? BSDF_TYPE::SPECULAR
                                                        : BSDF_TYPE::TRANSMISSION;
            BSDFSample out_sample;
            out_sample.scatter = true;
            out_sample.scatter_direction = w;
            out_sample.bsdf_value = f;
            out_sample.pdf_value = pdf;
            out_sample.type = flag;
            return out_sample;
        }
        f = f * fabs(dot(rec_manip.normal, bs.scatter_direction));
        // Flip to the other side of the layer stack
        on_top = !on_top;
        rec_manip.front_face = !rec_manip.front_face;
        rec_manip.normal = -rec_manip.normal;
    }
    return absorbed;
}
This render is at 25 samples, but when it's set to 100 samples it just gets darker...
The result is an absurd amount of light absorption. I'm aware that the way layered BSDFs are usually simulated typically darkens the result through energy loss... but probably not to this extent?
For context, the setting of the `scatter` flag to false just makes the current trace return, effectively returning a blank (or black) sample.
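For reference, the unbiasedness argument behind the `pdf *= 1 - q` reweighting (and why rrBeta must use the running pdf, as noted in the listing above): if a path survives Russian roulette with probability $1-q$ and its estimate is reweighted accordingly, the expectation is unchanged,

$$\mathbb{E}\left[X_{\mathrm{RR}}\right] = (1-q)\cdot\frac{X}{1-q} + q\cdot 0 = X, \qquad X = \frac{f}{\mathrm{pdf}}.$$

The darkening appears when $q$ is computed from a throughput estimate that keeps shrinking (e.g. $f$ over a stale first-bounce pdf): $q$ then tends toward 1 and nearly every path returns the absorbed sample.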
Is this better than getting an Xbox Series S? A computer that can handle raytracing would cost much more than this, no?
Hi! I am a PhD student with hands-on experience in real-time path tracing in VR, and I am also familiar with ray tracing more broadly. On the API side, I have a good understanding of NVIDIA OptiX and Microsoft DirectX. I am currently looking for an internship; please let me know if you know of any real-time ray tracing / path tracing / global illumination related positions.
I am running through the city and some cops want trouble. Path Tracing optimization for RDNA3 is activated, Ultra Quality Denoising is on. It looks beautiful and runs at 160-180fps using FSR 3.1 Frame Generation and AFMF2. Super Resolution Automatic is activated for High Performance Gaming level 2 (HPG lvl. 2, ~120-180fps). I overclocked the shaders to 3.0GHz; the advanced RDNA3 architecture on the 7900 XTX is performing greatly. Power consumption is about 464W for the full GPU.
https://youtu.be/lnJtmzAbj4Q?si=FHsB7pX8wEiDe4dY
RDNA3 Path Tracing optimization
RDNA3 Premium Denoising
RDNA3 FSR3 Frame Generation
RDNA3 Performance Rasterizing
RDNA3 Fluid Motion Frames 2
RDNA3 AI Super Resolution
RDNA3 Overclocking @ 3269MHz
AMD AM5 PC USED for CPU testing:
CPU: AMD Ryzen 9 7950X 16C/32T @ 170W
GPU: AMD Radeon RX 7900 XTX @ 464W
CPU Cooler: Arctic AIO 360mm H²O
MB: Asus X670E Creator WiFi
RAM: 2 x 32GB - G.SKILL 6000MHz CL30
SSD (Nvme): 2TB + 4TB
PSU: InterTech SamaForza 1200W+ Platinum
CASE: Cougar Blade
RDNA3 Path Tracing AfterLife - Green Room
I dabbled with POV-Ray over a decade ago, because it was free and I found it easy to use. Do people still use programs like that? Or are there any other free graphical raytracing programs?
Hey everybody, this video shows CP2077 with Radeon Path Tracing and a red car in sunshine. Looks really nice :-)
BONUS - Path Tracing - Black Car
For those who are interested in AMD Radeon Path Tracing:
Here performed with the RDNA3 architecture RX 7900 XTX:
unoptimised: ~130fps
optimised: ~220fps
Be careful watching, because these videos include world top-notch technology, nr. 1 in HPG, by VN_VIVIDS.
I've always glossed over the fact that RT is as taxing on the CPU as it is on the GPU. It wasn't until recently that I realized that in Cyberpunk, in a scenario where the GPU isn't the limitation, the highest achievable frame rate with RT enabled can drop to about half of what it is with RT turned off altogether. The same doesn't necessarily apply to other RT implementations, but the point stands that the CPU cost of enabling RT is considerable.
The question here is whether RT-related CPU workloads rely more heavily on integer or floating-point capability. Moving from one CPU microarchitecture generation to the next, we've seen large discrepancies between integer and floating-point improvements, with the latter being much more of a low-hanging fruit. If these workloads are float-heavy, there may be an incentive to put SIMD extensions like AVX-512 to use, opening up quite a bit of headroom for RT.
TL;DR: Title.
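For what it's worth, an illustrative sketch of why the float side matters: the innermost operation of any software BVH traversal is the ray/AABB slab test, which is pure float subtract/multiply/min/max and vectorizes cleanly under SSE/AVX/AVX-512. (This is an assumption-laden example, not the actual workload the game runs on the CPU; acceleration-structure builds and updates also mix in integer-heavy sorting and partitioning, so the realistic answer is probably "both".)

    #include <algorithm>

    // Ray/AABB "slab" test: nothing but float arithmetic and min/max,
    // which is exactly the shape of work wide SIMD units are built for.
    struct AABB { float lo[3], hi[3]; };

    bool hit_aabb(const AABB& box, const float origin[3],
                  const float inv_dir[3], float t_max) {
        float t0 = 0.0f, t1 = t_max;
        for (int axis = 0; axis < 3; ++axis) {
            float ta = (box.lo[axis] - origin[axis]) * inv_dir[axis];
            float tb = (box.hi[axis] - origin[axis]) * inv_dir[axis];
            t0 = std::max(t0, std::min(ta, tb));
            t1 = std::min(t1, std::max(ta, tb));
        }
        return t0 <= t1; // all three slab intervals overlap
    }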
Hey, I encountered a function that samples the directions of a point light. I initially suspected that the function samples directions uniformly (based on the rest of the file). However, comparing it to the popular methods of uniformly sampling a sphere, I cannot quite figure out whether it is indeed uniform, and why. Can anybody help me with this?
Function:
void genp(Ray* pr, Vec* f, int i) {
    *f = Vec(2500, 2500, 2500) * (PI * 4.0); // flux
    double p = 2. * PI * rand01();             // azimuth, uniform in [0, 2*pi)
    double t = 2. * acos(sqrt(1. - rand01())); // polar angle
    double st = sin(t);
    pr->d = Vec(cos(p) * st, cos(t), sin(p) * st); // note: cos(t) is the y component
    pr->o = Vec(50, 60, 85);
}
It is from the following file: https://cs.uwaterloo.ca/%7Ethachisu/smallppm_exp.cpp
Thank you!
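For anyone checking the math above: the sampling is indeed uniform over the sphere. With $\xi_1 = $ rand01() driving $t$,

$$\cos t = 2\cos^2(t/2) - 1 = 2\left(\sqrt{1-\xi_1}\,\right)^2 - 1 = 1 - 2\xi_1 \sim \mathcal{U}[-1,1],$$

and $p = 2\pi\xi_2$ is a uniform azimuth. Uniform $\cos\theta$ plus uniform azimuth is exactly the standard uniform-sphere parameterization; the only twist here is that the polar axis is $y$, since $\cos t$ lands in the $y$ component of the direction.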
I just started ray tracing for an internship. I am working through the book 'Ray Tracing in One Weekend', but it seems like it will take me a lot longer than a weekend. I'm coding it in C++; I get the same outputs as in the book, but I don't understand it entirely. If someone with some experience can explain the basics to me, I can continue myself later. I am on chapter 6 currently.
I've known the basic ideas of raytracing for a while now, but what I don't understand is the math. Where should a beginner like me start learning the math in a simplified form?
In the mid 90’s I was in high school and bought myself a book - one of those Sam’s Publishing style 400+ page monster books - about either VR or Graphics.
It had Polyray on a CD and tons of walkthroughs, code, and examples: including how “blob geometry” made cool internal objects (think lots of intersecting normals making complicated structures).
I remember being able to render individual images at 320x240 and stitch them into FLIs or some other old animation format.
Does anyone remember this? I’d love to find the book.
Ray tracing is always modelled with straight lines projected out of the camera and then bouncing around a bunch.
That's accurate. But what if we modelled each ray as a curve instead? We could even gradually change the parameters of neighbouring curves. What if we made the ray a sine wave? A spiral/helix?
What would it look like? Trippy? An incomprehensible mess, even with the slightest curving?
I guess the answer is to build it. But I'm curious to hear your expectations :]
tl;dr Curve the bullet
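Not an answer, but a cheap way to prototype it: nonlinear rays (mirages, gravitational lensing, etc.) are usually traced as many short straight segments, bending the direction a little each step and running the ordinary straight-line intersection test per segment. A hypothetical C++ sketch (all names invented):

    #include <cmath>

    struct Vec3 {
        float x, y, z;
        Vec3 operator+(Vec3 b) const { return {x + b.x, y + b.y, z + b.z}; }
        Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    };

    static Vec3 normalize(Vec3 v) {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return v * (1.0f / len);
    }

    // March a "curved ray" as short straight segments. Each step advances
    // the position, then bends the direction; here the bend is a toy
    // rotation about the y axis, which traces out a helix-like path.
    Vec3 march(Vec3 origin, Vec3 dir, float step, int n_steps) {
        Vec3 p = origin;
        Vec3 d = normalize(dir);
        const float a = 0.02f; // bend angle per step; 0 gives a straight ray
        for (int i = 0; i < n_steps; ++i) {
            p = p + d * step;
            // Here you would intersect the scene along the segment of
            // length `step` before continuing the walk.
            d = normalize({ d.x * std::cos(a) + d.z * std::sin(a),
                            d.y,
                           -d.x * std::sin(a) + d.z * std::cos(a) });
        }
        return p;
    }

With a = 0 this reduces to an ordinary straight ray, which makes it easy to A/B the effect.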
I was curious about what percentage of users have ray-tracing-capable cards, so I went to the newest Steam Hardware Survey and added up the percentages of ray-tracing-capable GPUs.
I found that 55% of users have a GPU with RT support, but that includes the slowest of slow cards, so I added a column giving the percentage of users with that GPU or better (in terms of RT), letting you draw the line of RT performance yourself.
GPU (ordered by RT performance) | % of Steam users with this GPU | % of users with this GPU or better |
---|---|---|
NVIDIA GeForce RTX 4090 | 0.92% | 0.92% |
NVIDIA GeForce RTX 4080 SUPER | 0.33% | 1.25% |
NVIDIA GeForce RTX 4080 | 0.75% | 2.00% |
NVIDIA GeForce RTX 4080 Laptop | 0.19% | 2.19% |
NVIDIA GeForce RTX 4070 Ti SUPER | 0.32% | 2.51% |
NVIDIA GeForce RTX 4070 Ti | 1.14% | 3.65% |
NVIDIA GeForce RTX 4070 SUPER | 0.74% | 4.39% |
NVIDIA GeForce RTX 3090 | 0.51% | 4.90% |
NVIDIA GeForce RTX 3080 Ti | 0.73% | 5.63% |
AMD Radeon RX 7900 XTX | 0.38% | 6.01% |
NVIDIA GeForce RTX 4070 | 2.31% | 8.32% |
NVIDIA GeForce RTX 4070 Laptop | 0.56% | 8.88% |
NVIDIA GeForce RTX 3080 | 2.06% | 10.94% |
NVIDIA GeForce RTX 3080 Laptop | 0.17% | 11.11% |
NVIDIA GeForce RTX 3070 Ti | 1.24% | 12.35% |
NVIDIA GeForce RTX 3070 Ti Laptop | 0.36% | 12.71% |
NVIDIA GeForce RTX 3070 | 3.26% | 15.97% |
NVIDIA GeForce RTX 3070 Laptop | 0.70% | 16.67% |
AMD Radeon RX 6900 XT | 0.21% | 16.88% |
NVIDIA GeForce RTX 4060 Ti | 2.38% | 19.26% |
NVIDIA GeForce RTX 2080 Ti | 0.34% | 19.60% |
NVIDIA GeForce RTX T10-8 | 0.15% | 19.75% |
AMD Radeon RX 6800 XT | 0.29% | 20.04% |
NVIDIA GeForce RTX 3060 Ti | 3.46% | 23.50% |
AMD Radeon RX 6800 | 0.21% | 23.71% |
NVIDIA GeForce RTX 2080 SUPER | 0.45% | 24.16% |
NVIDIA GeForce RTX 4060 | 2.92% | 27.08% |
NVIDIA GeForce RTX 4060 Laptop | 3.46% | 30.54% |
NVIDIA GeForce RTX 2080 | 0.41% | 30.95% |
NVIDIA GeForce RTX 4050 Laptop | 0.86% | 31.81% |
NVIDIA GeForce RTX 3060 | 5.50% | 37.31% |
NVIDIA GeForce RTX 3060 Laptop | 3.25% | 40.56% |
NVIDIA GeForce RTX 2070 SUPER | 1.09% | 41.65% |
AMD Radeon RX 6750 XT | 0.32% | 41.97% |
AMD Radeon RX 6750 GRE 12GB | 0.19% | 42.16% |
AMD Radeon RX 6700 XT | 0.66% | 42.82% |
NVIDIA GeForce RTX 2070 | 0.87% | 43.69% |
NVIDIA GeForce RTX 2060 SUPER | 1.21% | 44.90% |
NVIDIA GeForce RTX 2060 | 3.31% | 48.21% |
NVIDIA GeForce RTX 3050 Ti | 0.23% | 48.44% |
AMD Radeon RX 6650 XT | 0.31% | 48.75% |
NVIDIA GeForce RTX 3050 | 2.68% | 51.43% |
AMD Radeon RX 6600 XT | 0.38% | 51.81% |
AMD Radeon RX 6600 | 0.73% | 52.54% |
AMD Custom GPU 0405 (Steam Deck) | 0.62% | 53.16% |
NVIDIA GeForce RTX 3050 Ti Laptop | 0.96% | 54.12% |
NVIDIA GeForce RTX 3050 Laptop | 0.63% | 54.75% |
NVIDIA GeForce RTX 2050 | 0.24% | 54.99% |
AMD Radeon RX 6500 XT | 0.19% | 55.18% |
Has anyone ever managed to run the NVIDIA ReSTIR PT demo? It always just freezes for me :(
https://github.com/DQLin/ReSTIR_PT
After some struggle I managed to compile it; here is a binary, start it with "RunReSTIRPTDemo.bat":
https://drive.google.com/file/d/1vxCMwsLDbvIJiYZiURZO3gqdyErZ438b
Basically I am trying to figure out whether their "Reconnection" method gives the same performance as the "Hybrid" method. In their paper they show similar timings, but I think that's bogus. If I understand "Hybrid" correctly, for 5 reused samples they have to retrace 10 additional sub-paths on top of the 10 other reconnection ray tests, so it should be massively slower, as opposed to what they claim.
Does anyone know which one is actually faster?