sharing indirect values for parallel processing?

I have installed NFS over Cygwin (Windows XP) on a notebook and a workstation,
and I've tried a 4000 x 4000 resolution render with rpiece and a sync file
without any problem. These are the commands and the results on each computer:
echo 3 3 > syncfile        # sync file starts with the X and Y divisions (a 3x3 grid of pieces)
echo -F syncfile -x 4000 -y 4000 -t 10 -vf myview -o simple.pic simple.oct > args

rpiece -v @args            # run on each machine sharing the NFS mount

Workstation (3 GHz):

chroma@chr-xeon /usr/local/radiance/ray/obj/rprueba
$ rpiece -v @args
FRAME 1: -vtv -vp 2.25 0.375 1 -vd -0.816497 0.408248 -0.408248 -vu 0 0 1 -vh 15.7224 -vv 15.7224 -vo 0 -va 0 -vs 1 -vl 1
rpict: 0 rays, 0.00% after 0.000u 0.000s 0.000r hours on chr-xeon
2 2 begun
2 2 done
rpict: 629893 rays, 100.00% after 0.002u 0.000s 0.002r hours on chr-xeon
FRAME 2: -vtv -vp 2.25 0.375 1 -vd -0.816497 0.408248 -0.408248 -vu 0 0 1 -vh 15.7224 -vv 15.7224 -vo 0 -va 0 -vs 1 -vl -1
rpict: 629893 rays, 0.00% after 0.002u 0.000s 0.002r hours on chr-xeon
2 0 begun
2 0 done
rpict: 1344740 rays, 100.00% after 0.004u 0.000s 0.004r hours on chr-xeon
FRAME 3: -vtv -vp 2.25 0.375 1 -vd -0.816497 0.408248 -0.408248 -vu 0 0 1 -vh 15.7224 -vv 15.7224 -vo 0 -va 0 -vs 0 -vl 0
rpict: 1344740 rays, 0.00% after 0.004u 0.000s 0.004r hours on chr-xeon
1 1 begun
1 1 done
rpict: 2142578 rays, 100.00% after 0.006u 0.000s 0.007r hours on chr-xeon
FRAME 4: -vtv -vp 2.25 0.375 1 -vd -0.816497 0.408248 -0.408248 -vu 0 0 1 -vh 15.7224 -vv 15.7224 -vo 0 -va 0 -vs 0 -vl -1
rpict: 2142578 rays, 0.00% after 0.006u 0.000s 0.007r hours on chr-xeon
1 0 begun
1 0 done
rpict: 2880210 rays, 100.00% after 0.008u 0.000s 0.009r hours on chr-xeon
FRAME 5: -vtv -vp 2.25 0.375 1 -vd -0.816497 0.408248 -0.408248 -vu 0 0 1 -vh 15.7224 -vv 15.7224 -vo 0 -va 0 -vs -1 -vl 1
rpict: 2880210 rays, 0.00% after 0.008u 0.000s 0.009r hours on chr-xeon
0 2 begun
0 2 done
rpict: 3331412 rays, 100.00% after 0.009u 0.000s 0.010r hours on chr-xeon
FRAME 6: -vtv -vp 2.25 0.375 1 -vd -0.816497 0.408248 -0.408248 -vu 0 0 1 -vh 15.7224 -vv 15.7224 -vo 0 -va 0 -vs -1 -vl -1
rpict: 3331412 rays, 0.00% after 0.009u 0.000s 0.010r hours on chr-xeon
0 0 begun
0 0 done
rpict: 4049724 rays, 100.00% after 0.011u 0.000s 0.012r hours on chr-xeon

Notebook (1.7 GHz):

ignacio@pnac /mnt/xeon
$ rpiece -v @args
FRAME 1: -vtv -vp 2.25 0.375 1 -vd -0.816497 0.408248 -0.408248 -vu 0 0 1 -vh 15.7224 -vv 15.7224 -vo 0 -va 0 -vs 1 -vl 0
rpict: 5296 rays, 0.00% after 0.000u 0.000s 0.000r hours on pnac
2 1 begun
rpict: 486848 rays, 81.02% after 0.002u 0.000s 0.003r hours on pnac
rpict: 601680 rays, 100.00% after 0.003u 0.000s 0.003r hours on pnac
FRAME 2: -vtv -vp 2.25 0.375 1 -vd -0.816497 0.408248 -0.408248 -vu 0 0 1 -vh 15.7224 -vv 15.7224 -vo 0 -va 0 -vs 0 -vl 1
rpict: 601680 rays, 0.00% after 0.003u 0.000s 0.003r hours on pnac
1 2 begun
2 1 done
rpict: 1438741 rays, 67.29% after 0.005u 0.000s 0.006r hours on pnac
rpict: 2099130 rays, 100.00% after 0.008u 0.000s 0.008r hours on pnac
FRAME 3: -vtv -vp 2.25 0.375 1 -vd -0.816497 0.408248 -0.408248 -vu 0 0 1 -vh 15.7224 -vv 15.7224 -vo 0 -va 0 -vs -1 -vl 0
rpict: 2099130 rays, 0.00% after 0.008u 0.000s 0.008r hours on pnac
0 1 begun
1 2 done
rpict: 2554716 rays, 95.65% after 0.010u 0.000s 0.011r hours on pnac
rpict: 2581854 rays, 100.00% after 0.010u 0.000s 0.011r hours on pnac
0 1 done

···

----- Original Message -----
From: "Greg Ward" <[email protected]>
To: "code development" <[email protected]>
Sent: Friday, February 04, 2005 6:54 PM
Subject: [Radiance-dev] Re: [Radiance-general] sharing indirect values for parallel processing?

The locking requirements for Radiance are really quite minimal:

1) One process says in effect, "I need to have exclusive use of this file."
2) Said process waits until other processes are done reading (or writing) the file.
3) Process gets lock; all other processes unable to read or write the file.
4) Process does its thing, releases lock.
5) Life goes on...
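
For concreteness, here is a minimal sketch (not the actual Radiance source) of how steps 1 through 4 map onto POSIX advisory locking via fcntl(). The file name "scene.amb" is just a placeholder for the shared ambient file:

/*
 * Sketch only: exclusive whole-file lock via POSIX fcntl().
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct flock fl;
    int fd = open("scene.amb", O_RDWR);   /* placeholder ambient file */

    if (fd < 0) {
        perror("open");
        return 1;
    }
    fl.l_type = F_WRLCK;                  /* step 1: request exclusive use */
    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = 0;                         /* length 0 locks the whole file */
    if (fcntl(fd, F_SETLKW, &fl) < 0) {   /* step 2: wait for other readers/writers */
        perror("fcntl");
        return 1;
    }
    /* steps 3-4: lock held; read others' new values, append our own */
    fl.l_type = F_UNLCK;                  /* release the lock */
    fcntl(fd, F_SETLK, &fl);
    close(fd);
    return 0;                             /* step 5: life goes on... */
}

Over NFS this fcntl() request has to be forwarded to the server's lock daemon, and that is precisely the piece that some NFS implementations omit or get wrong.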

Why is this difficult? Well, it seems that this sort of behavior gets
in the way of efficient (i.e., cached) network filesystems, so some
versions of NFS don't bother with it, or do it badly. Even
implementing our own lock mechanism via creating a separate file that
says our ambient file is in use suffers from race conditions. These
race conditions have been worked out by the sendmail developers, but
the solution is quite complicated. The good news is that it works and
has been tested on many different Unix implementations. So, this looks
like the easiest path to our goal.
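
To give a flavor of the trick involved, here is a hedged sketch of the NFS-safe lock-file technique in the spirit of the sendmail solution (all names are illustrative, not taken from the Radiance source). O_EXCL cannot be trusted over older NFS, so atomicity comes from link(), and a lost RPC reply is detected by checking the link count afterwards:

/*
 * Sketch only: NFS-safe lock file in the style of sendmail.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

int acquire_lockfile(const char *lockname)
{
    char tmp[1024];
    struct stat st;
    int fd;

    /* unique temporary name on the same filesystem as the lock */
    snprintf(tmp, sizeof(tmp), "%s.%ld.%ld",
            lockname, (long)getpid(), (long)time(NULL));
    if ((fd = open(tmp, O_WRONLY|O_CREAT, 0444)) < 0)
        return -1;
    close(fd);
    /* link() either succeeds atomically on the server or fails */
    (void)link(tmp, lockname);
    /* don't trust link()'s return value (its reply may be lost);
       trust the link count on the temporary file instead */
    if (stat(tmp, &st) < 0 || st.st_nlink != 2) {
        unlink(tmp);
        return -1;            /* someone else holds the lock */
    }
    unlink(tmp);              /* we won: lockname now exists */
    return 0;                 /* caller unlinks lockname to unlock */
}

Most of the complication alluded to above is in handling stale locks left by crashed processes and in the retry/back-off policy, not in the happy path shown here.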

My motivation for doing this work is unfortunately low, because I'm
using FreeBSD and OS X systems with perfectly adequate NFS lock
managers. I don't even have a good way of testing any proposed fix,
because nothing is currently broken for me. That's not to say I don't
care if other people are having problems -- it simply means that I'm
not in a very good position to help. What we need is a good programmer
who uses Radiance on a Linux cluster and can do some testing for us (at
least). I also need to free up some time, which has proved challenging
lately. (Hence my short and mistake-ridden replies.)

-Greg
