There are two issues here. First, the Cook-Torrance model is not implemented in Radiance. Second, you want to apply a spatially varying model. As Greg mentioned, this is not really addressed by Radiance. However, depending on how much effort you can dedicate, there may be some approximations:
- You can implement the Cook-Torrance model in a cal file and compile it into a data-driven model with bsdf2ttree.
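For the first option, a cal file might look roughly like the sketch below: an isotropic Cook-Torrance BRDF with a Beckmann distribution, Schlick's Fresnel approximation, and the Torrance-Sparrow geometry term. The function name, the six-argument calling convention (incident direction, then exiting direction, normal along +Z), and all parameter values are my assumptions; check the bsdf2ttree man page before relying on any of it.

```
{ ct.cal - hedged sketch of an isotropic Cook-Torrance BRDF.
  Argument convention and parameter values are assumptions. }

PI : 3.14159265358979;
m  : 0.2;     { RMS facet slope (roughness), assumed value }
f0 : 0.9;     { normal-incidence reflectance, assumed value }
rd : 0.05;    { Lambertian floor, assumed value }

sq(x) = x*x;
min2(a,b) = if(a-b, b, a);
min3(a,b,c) = min2(a, min2(b,c));

{ half-vector terms; surface normal is assumed to be +Z }
hlen(ix,iy,iz,ox,oy,oz) = sqrt(sq(ix+ox)+sq(iy+oy)+sq(iz+oz));
ndh(ix,iy,iz,ox,oy,oz) = (iz+oz) / hlen(ix,iy,iz,ox,oy,oz);
vdh(ix,iy,iz,ox,oy,oz) = (ox*(ix+ox)+oy*(iy+oy)+oz*(iz+oz))
	/ hlen(ix,iy,iz,ox,oy,oz);

{ Beckmann microfacet distribution }
Dbeck(c) = exp((sq(c)-1)/(sq(m)*sq(c))) / (PI*sq(m)*sq(sq(c)));

{ Schlick approximation to Fresnel: f0 + (1-f0)*(1-c)^5 }
Fschlick(c) = f0 + (1-f0)*sq(sq(1-c))*(1-c);

{ Torrance-Sparrow geometric attenuation }
Gatt(nh,nv,nl,vh) = min3(1, 2*nh*nv/vh, 2*nh*nl/vh);

ct_brdf(ix,iy,iz,ox,oy,oz) = rd/PI
	+ Fschlick(vdh(ix,iy,iz,ox,oy,oz))
	* Dbeck(ndh(ix,iy,iz,ox,oy,oz))
	* Gatt(ndh(ix,iy,iz,ox,oy,oz), oz, iz, vdh(ix,iy,iz,ox,oy,oz))
	/ (PI * iz * oz);
```

The compile step would then be something along the lines of `bsdf2ttree -t3 -g 5 -f ct.cal ct_brdf > ct.xml` (again, verify the exact options and function convention against the man page).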
- You can use mixpict to switch between surface properties based on pixel information. If you just blend between two (or a few) parametrisations of the Cook-Torrance model, this allows you, e.g., to interpolate linearly between two data-driven representations of the model as an approximation.
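A minimal sketch of the mixpict approach is shown below. The file names, the plastic parameters (standing in for the two data-driven Cook-Torrance representations, which would really be BSDF primitives referencing the bsdf2ttree output), and the mapping of surface x/y onto picture coordinates are all assumptions for illustration.

```
{ blend.cal - reduce the mask picture's RGB to one mixing
  coefficient and map surface x/y onto picture coordinates
  (the planar mapping is an assumption about the geometry) }
grey(r,g,b) = 0.265*r + 0.670*g + 0.065*b;
u = Px;
v = Py;
```

```
# scene fragment: per-pixel blend between two parametrisations;
# where grey() evaluates to 1 the first material is used,
# where it is 0 the second, with linear mixing in between
void plastic ct_smooth
0
0
5 .5 .5 .5 .02 .05

void plastic ct_rough
0
0
5 .5 .5 .5 .02 .25

void mixpict blend_mat
7 ct_smooth ct_rough grey mask.hdr blend.cal u v
0
0
```

Surfaces modified by blend_mat then pick up a roughness that varies across the picture mask.hdr.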
I sometimes use the second approach to simulate an alpha channel, e.g. blending between reflective surfaces and void. One example is the mapped image of a flame here: Views on ancient lighting: Modelling lighting devices and their effects in architecture | Zenodo. Axel Jacobs's brilliant "Radiance Cookbook" has a similar example about transparent textures on pages 27-34.
Best regards, Lars.