Missing pixels in HDRE image...?

Hello all,

So, I have been playing a bit with low-level Radiance stuff. This includes writing my own HDRE files, which Radiance tools have been able to open, falsecolor, and so on, which made me confident in my understanding of the format. However, today I found something weird when processing an image produced by RPICT. The header is as follows:

#?RADIANCE
oconv cornell.rad
rpict -ab 2 -ad 180 -aa 0.0 -vp 3 -5 2.25 -vd 0 1 0 -vh 50 -vv 37 -x 512 -y 367
SOFTWARE= RADIANCE 5.3a lastmod Tue 21 Dec 2021 09:27:53 NZDT by root on germans-mbp.lan
VIEW= -vtv -vp 3 -5 2.25 -vd 0 1 0 -vu 0 0 1 -vh 50 -vv 37 -vo 0 -va 0 -vs 0 -vl 0
CAPDATE= 2022:04:25 21:47:35
GMT= 2022:04:25 09:47:35
FORMAT=32-bit_rle_rgbe

-Y 367 +X 511
...

Now, I thought that, after the header, there would be 367*511*4 bytes (i.e., one byte each of R, G, B, and E per pixel). However, this does not seem to be the case:

# Running this returns 373 bytes
head -n 10 ./images/rpict_output.hdr | wc -c
# Running this returns 661,373 bytes
cat ./images/rpict_output.hdr | wc -c

Now, this suggests that the CONTENT of the image is 661,000 bytes; i.e., 165,250 pixels. This means I am missing 22,287 pixels (because 367*511=187,537), which strikes me as a significant portion.
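In other words, spelling out the arithmetic with the byte counts from the commands above:

```python
width, height = 511, 367     # from the "-Y 367 +X 511" resolution string
header_bytes = 373           # head -n 10 ... | wc -c
file_bytes = 661_373         # wc -c on the whole file

expected_flat = width * height * 4       # bytes if every pixel were stored flat
payload = file_bytes - header_bytes      # bytes actually present after the header
pixels_present = payload // 4
pixels_missing = width * height - pixels_present

print(expected_flat, payload, pixels_present, pixels_missing)
# 750148 661000 165250 22287
```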

What is it that I am not getting?

You’re looking at a run-length encoded image, so the byte count won’t match the pixel count indicated in the -Y +X resolution string. See filefmts.pdf at https://www.radiance-online.org/learning/documentation/references.html

“New run-length encoded:
…
The record begins with an unnormalized pixel having two bytes equal to 2, followed by the upper byte and the lower byte of the scanline length (which must be less than 32768). A run is indicated by a byte with its high-order bit set, corresponding to a count with excess 128. A non-run is indicated with a byte less than 128.”
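That description translates fairly directly into code. Here is a sketch of a decoder for one such scanline, based on the paragraph quoted above (the function name and structure are mine, not Radiance’s; note that the four components are stored one after another, each run-length encoded separately):

```python
def decode_new_rle_scanline(data, pos):
    """Decode one new-style RLE scanline starting at data[pos].
    Returns (pixels, new_pos), where pixels is a list of (r, g, b, e) tuples."""
    # Scanline header: two bytes equal to 2, then the length as big-endian 16-bit.
    if data[pos] != 2 or data[pos + 1] != 2:
        raise ValueError("not a new-style RLE scanline")
    length = (data[pos + 2] << 8) | data[pos + 3]
    if length >= 32768:
        raise ValueError("scanline length must be less than 32768")
    pos += 4

    # The four components (all R bytes, then G, then B, then E) follow in turn.
    channels = []
    for _ in range(4):
        chan = []
        while len(chan) < length:
            code = data[pos]; pos += 1
            if code > 128:                      # run: repeat next byte (code - 128) times
                chan.extend([data[pos]] * (code - 128))
                pos += 1
            else:                               # non-run: the next `code` bytes are literal
                chan.extend(data[pos:pos + code])
                pos += code
        channels.append(chan)

    return list(zip(*channels)), pos
```

In the image above, each scanline should therefore begin with the bytes 2, 2, 1, 255 (2, 2, then 511 as a big-endian 16-bit value), which makes an easy sanity check when parsing.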


Side-note: you can get to/from the run-length encoded version with ra_rgbe, which is also the fastest way to do integer EV adjustments as it increments or decrements the exponent of each pixel. Generally speaking, run-length encoding saves 15-40% of the image space, depending on the image of course.
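To illustrate why the exponent trick is cheap: with the usual RGBE decoding convention (sketched below; this is my paraphrase in Python, not ra_rgbe’s actual code), adding 1 to the shared exponent byte doubles every component, i.e. a +1 EV adjustment, without touching the mantissas:

```python
import math

def rgbe_to_floats(r, g, b, e):
    """Decode one RGBE pixel: component = (mantissa + 0.5) * 2**(e - 136)."""
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e - (128 + 8))
    return ((r + 0.5) * f, (g + 0.5) * f, (b + 0.5) * f)

lo = rgbe_to_floats(100, 150, 200, 128)
hi = rgbe_to_floats(100, 150, 200, 129)
print([h / l for h, l in zip(hi, lo)])  # [2.0, 2.0, 2.0]
```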

Also, I decided early on to use the “32-bit_rle_rgbe” format designation whether the pixel data was run-length encoded or not. If the pixel data is CIE XYZ, the FORMAT string should be “32-bit_rle_xyze” instead, and there is a factor of 179 between rgbe and xyze pixel normalizations.
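For what it’s worth, that 179 is Radiance’s luminous-efficacy constant (WHTEFFICACY, in lm/W): with xyze pixels, luminance in cd/m² is simply 179 * Y, while with rgbe pixels you first form Y from the RGB radiances. A small sketch (the 0.265/0.670/0.065 weights are, I believe, Radiance’s usual luminance coefficients, rounded):

```python
WHTEFFICACY = 179.0  # lumens per watt of radiant white light

def luminance(r, g, b):
    """Luminance (cd/m^2) from rgbe-normalized radiances (W/sr/m^2)."""
    return WHTEFFICACY * (0.265 * r + 0.670 * g + 0.065 * b)

print(luminance(1.0, 1.0, 1.0))  # ~179.0 for an equal-energy white
```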

Thanks, @Greg_Ward and @Axel_Jacobs2. I had no idea HDRE compression was a thing! I’ll leave the handling of this as a to-do for now. I might come back with further questions soon!

Best!

Hi @German_Molina,

If you want to implement RLE compression for HDR images quickly, I recommend referring to Bruce Walter’s site. It’s much easier to read than the Radiance source code. (No offence, Greg!)

Whether you choose to use RLE compression should depend on whether your environment is memory limited or time limited. I’ve been timing various implementations of RLE using Python, and I’ve come to the conclusion that if your primary concern is speed, you should not bother with compressed HDR files.

Thanks, Nathaniel! That is a very useful source, and the code is, indeed, readable… although I need to digest it for a while. I am not entirely sure what it is doing yet.

I am not sure what, exactly, the requirements are… but I want to be able to open Radiance files anyway. At present, I am writing them flat just because it is easy.
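For reference, writing flat scanlines only needs the float-to-RGBE packing step. A minimal Python sketch of the usual conversion (the same idea as Radiance’s setcolr(), but this is my code, not the library’s):

```python
import math

def float_to_rgbe(r, g, b):
    """Pack one floating-point RGB pixel into four RGBE bytes."""
    v = max(r, g, b)
    if v < 1e-38:                        # effectively zero: the all-zero pixel
        return (0, 0, 0, 0)
    mantissa, exponent = math.frexp(v)   # v == mantissa * 2**exponent, 0.5 <= mantissa < 1
    scale = mantissa * 256.0 / v         # maps the largest component into [128, 256)
    return (int(r * scale), int(g * scale), int(b * scale), exponent + 128)

print(float_to_rgbe(1.0, 1.0, 1.0))  # (128, 128, 128, 129)
```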

It’s much less important to write RLE scanlines than it is to support reading them, which is also simpler. If you have no need to read RGBE files, then you’re fine.

I wasn’t aware of Bruce Walter’s site – he’s always been one of my favorite researchers, though.

-Greg