Rtrace overhead

Hi,

I’d like to measure the overhead of rtrace, that is, how long it takes before actual rendering (tracing the first ray) begins.

Does anyone have experience with this kind of measurement?

Thanks

I did a fair amount of this sort of benchmarking in the past. I think it will require you to edit and recompile the source code, adding a few lines that print the current or elapsed time. You will find the main() function in rtmain.c and the first call to trace a ray in rtrace.c.

Why not just trace a single ray into a black surface? Or trace no rays at all – I think it still goes through most of the start-up procedures.
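
For example (a rough, untested sketch; "scene.oct" is a placeholder octree name and rtrace is assumed to be on your PATH), timing a run with no rays at all gives a good estimate of the time spent before the first ray could begin:

```python
import subprocess
import time

# Run rtrace with empty input: it loads the octree, reads no rays, and exits,
# so the elapsed time is essentially the start-up overhead.
t0 = time.perf_counter()
subprocess.run(["rtrace", "scene.oct"], input=b"", stdout=subprocess.DEVNULL)
print(f"start-up overhead (no rays traced): {time.perf_counter() - t0:.2f} s")
```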

BTW, you can overcome the overhead for all but the first call if you use the same octree and options with the -P option, at least on Unix.
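
Something like the following (an untested sketch; "persist.tmp" and "scene.oct" are placeholder names, and this is Unix-only) should show the cost dropping after the first call, since later calls attach to the already-loaded process:

```python
import subprocess
import time

# -h suppresses the information header; options and octree must stay the same
# across calls so they can share the persistent process.
cmd = ["rtrace", "-h", "-P", "persist.tmp", "scene.oct"]
ray = b"0 0 1 0 0 -1\n"   # one ray: origin x y z, then direction x y z

for i in range(3):
    t0 = time.perf_counter()
    subprocess.run(cmd, input=ray, stdout=subprocess.DEVNULL)
    print(f"call {i}: {time.perf_counter() - t0:.2f} s")

# Note: the persistent rtrace process stays alive in the background
# between calls; kill it when you are done.
```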

Thanks Greg, I’ll try your suggestions. Regarding -P, doesn’t that exclude any kind of multiprocessing?
At least that’s what I understand from the documentation.

Thanks

Yes, but the -PP option allows you to start as many processes as you like, all identical to the first one but with different stdin and stdout streams. With rtrace you use either -n or -P/-PP, not both.
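
Something along these lines (an untested sketch with placeholder file names) attaches several callers to the same loaded octree, each with its own pipes:

```python
import subprocess

cmd = ["rtrace", "-h", "-PP", "persist.tmp", "scene.oct"]

# The first caller pays the start-up cost and leaves a persistent
# process behind once it has finished with its own input.
first = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, _ = first.communicate(b"0 0 1 0 0 -1\n")   # origin xyz, direction xyz
print(out.decode().strip())

# Additional callers reuse the already-loaded octree, each with its
# own stdin and stdout streams.
others = [subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
          for _ in range(3)]
for i, p in enumerate(others, start=1):
    out, _ = p.communicate(f"{i} 0 1 0 0 -1\n".encode())
    print(out.decode().strip())

# The persistent process is still running in the background afterwards;
# kill it when you no longer need it.
```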

On the off chance you are using Python to control the process, I wrote some code using pybind that exposes an rtrace instance as a class in Python. You can trace additional rays with no start-up cost, and even swap sources or change settings without reloading the scene. I previously used the -PP option Greg mentioned, but Python has some particularities around pipes and the code was rather fragile.

Documentation is here: raytraverse.renderer — raytraverse 1.2.3 documentation

If you want to give it a try, send me an email: stephen dot wasilewski at epfl dot ch

That’s an interesting idea, thanks for your suggestion!