There are too many test points in my project, which results in a very large daylight coefficient matrix (145×264060). When the matrix computation was performed with dctimestep, the error “system - out of memory in cm_alloc(): Cannot allocate memory” occurred. So I wondered whether there is a way to let dctimestep do out-of-core computation.
There are no out-of-core matrix tools included in Radiance. Your matrix should take about 438 MBytes of RAM, so even two of them would be under a gigabyte, which is well within even a 32-bit address space. I’m a bit puzzled by the failure. What system are you using?
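That 438 MByte figure follows directly from the matrix dimensions: rows × columns × 3 color components × 4 bytes per float. You can check it with plain shell arithmetic (nothing Radiance-specific):

```shell
# 264060 points x 145 sky patches x 3 RGB components x 4-byte floats
echo "264060 145" | awk '{printf "%.0f MB\n", $1*$2*3*4/2^20}'
```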
I ran it on both Windows and WSL (Windows 10 Subsystem for Linux); the error occurred on both systems.
Below are my commands:
oconv skyVmtx.sky material_GlareImage.rad GlareImage.rad > GlareImage.oct
mkpmap -apC contribnew.pm 1m GlareImage.oct
rcontrib -I+ -ap contribnew.pm 80 -y 264060 -n 8 -dp 64 -ds 0.5 -ar 300 -aa 0.1 -ad 1000 -lw 1.0e-3 -dc 0.25 -faf -e MF:1 -f reinhart.cal -b rbin -bn Nrbins -m sky_mat GlareImage.oct < GlareStudy_0.pts > cds.mtx
dctimestep cds.mtx hangzhou.smx | rmtxop -fa -t -c 47.4 119.9 11.6 - > R.ill
Maybe you could post your matrix files and I could give it a try to see if I run into similar problems under Unix. I suspect you have some memory allocation limit you are running up against. Have you looked into the “ulimit” command and similar?
There may also be issues piping the output of dctimestep to rmtxop, which is not very memory-efficient if the matrix is large.
The link for matrix files:
I also tried the command “ulimit -m unlimited” on the WSL system, but it didn’t work.
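(Side note: on modern Linux kernels the resident-set-size limit set by `ulimit -m` is silently ignored, so that command is effectively a no-op. The limit that actually makes malloc() fail is the virtual address space one, `-v`:)

```shell
ulimit -a            # list all current per-process limits
ulimit -v            # virtual memory limit, in KBytes (or "unlimited")
ulimit -v unlimited  # lift the soft limit for this shell; may fail if the hard limit is finite
```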
Yes, the problem is that you are trying to multiply a 264060x146 matrix by a 146x8760 matrix, which results in a 264060x8760 matrix, which would require 25 GBytes of RAM in dctimestep then 50 GBytes to load into rmtxop (which converts everything to double precision).
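Those sizes check out from the dimensions alone: 3 color components per element, 4 bytes per float in dctimestep versus 8 bytes per double in rmtxop:

```shell
# result matrix: 264060 rows x 8760 columns x 3 components,
# held as 4-byte floats in dctimestep, converted to 8-byte doubles by rmtxop
echo "264060 8760" | awk '{printf "%.1f GB float, %.1f GB double\n", $1*$2*3*4/2^30, $1*$2*3*8/2^30}'
```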
I think there may also be a problem with overflow on 32-bit integers even specifying the sizes to malloc(), so the sign bit wraps and it doesn’t even make the request correctly. I will look into that, but it won’t solve your memory problem. Do you really need to create such a large matrix on output, or could you possibly do it a day at a time?
I didn’t realize the output matrix was so large before. Now I am considering doing it a month at a time, and only calculating daytime illuminance. In that case, the output matrix would be about 264060×360.
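If hangzhou.smx was produced from a .wea file with gendaymtx (an assumption; the thread doesn’t say how it was built), one way to go month by month is to filter the .wea records before building each month’s sky matrix. A sketch for January, assuming the usual 6-line .wea header and records whose first field is the month:

```shell
# keep the 6-line .wea header, then only the January records (month field == 1)
head -6 hangzhou.wea > jan.wea
awk 'NR > 6 && $1 == 1' hangzhou.wea >> jan.wea
# then rebuild the sky matrix and multiply for that month only:
#   gendaymtx -m 1 jan.wea > hangzhou_jan.smx
#   dctimestep -of cds.mtx hangzhou_jan.smx > t_jan.mtx
```

The `-m 1` matches the MF:1 Reinhart subdivision used in the rcontrib run above.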
OK, there was a bug in dctimestep (also in rmtxop) that was causing an illegal call to malloc() for very large matrices – thanks for helping me identify this! The new code should be posted this weekend.
I have started the following command:
dctimestep -of cds.mtx hangzhou.smx > t.mtx
Which seems to be running so far. We’ll see if it finishes successfully. If it does, then I recommend the following command to transpose the results rather than relying on rmtxop:
rcollate -t t.mtx | getinfo -c rcalc -if3 -e '$1=47.4*$1 + 119.9*$2 + 11.6*$3' > R.ill
Using rcollate rather than rmtxop should save quite a bit of time. You may wish to edit the output file header to change the NCOMP=3 line to NCOMP=1 and the FORMAT=float line to FORMAT=ascii if you wish to do further processing on it. If you don’t care about the header at all, you can take it out with:
rcollate -ho -t t.mtx | rcalc -if3 -e '$1=47.4*$1 + 119.9*$2 + 11.6*$3' > R.ill
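If you do keep the header, the two edits mentioned above can also be scripted. A sketch with sed, assuming the standard Radiance header layout (header ends at the first blank line) and that the data section is ascii, as it is after the rcalc step:

```shell
# rewrite header lines only (everything up to the first blank line)
sed '1,/^$/{s/^NCOMP=3$/NCOMP=1/; s/^FORMAT=float$/FORMAT=ascii/;}' R.ill > R_fixed.ill
```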
Also, the above commands will work much better under Unix, where rcollate should be able to map the file into memory rather than loading the whole thing. If you are under Windows, then you’ll need to use double-quotes rather than single-quotes in the rcalc -e argument.