@cx20 I want to make sure that we are all starting with the same ground truth about our definitions of skybox, environment, and env file. Those are:
skybox: a low-dynamic-range (LDR) 8-bit image (or a group of images, in our skybox implementation) that shows the environment from each of the six projection directions, either as a cross-format cube map or as individual files per face.
environment: a high-dynamic-range (HDR) 32-bit image whose R, G, and B values can fall outside the standard 0-1 range of 8-bit images and can reach into the thousands. This file is saved as a pre-computed DDS with mip maps that we assign to different levels of roughness for our PBR metallic/roughness lighting model. The important part is that the file is pre-computed with a package like Lys or IBLBaker: you start with an HDR image, load it into one of these packages, and the DDS is computed with mip maps and saved for you.
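To make the mip-map-per-roughness idea concrete, here is a minimal sketch. This is my own illustration with a simple linear mapping, not the exact curve Babylon.js or the baking tools use internally; the point is just that each roughness value indexes into one of the prefiltered mip levels.

```javascript
// Illustrative only: map a PBR roughness value to a mip level of the
// prefiltered environment. Real engines use a tuned (often non-linear)
// curve; linear is enough to show the lookup idea.
function roughnessToMip(roughness, mipCount) {
  // roughness 0 -> sharpest mip (0), roughness 1 -> blurriest mip
  const clamped = Math.min(Math.max(roughness, 0), 1);
  return clamped * (mipCount - 1);
}
```

So a perfectly smooth surface samples mip 0 (the sharp reflection), and a fully rough surface samples the smallest, blurriest mip, with everything in between interpolated.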
env: an LDR conversion of the environment DDS that saves on the file size of a 32-bit image. The HDR values are converted into an RGBD format, with a divisor stored in the alpha channel that lets us determine where each pixel value falls between the minimum and maximum of the range. The env file still needs a precomputed DDS with every mip map necessary to represent our full range of roughness values. This playground illustrates each of the mip maps calculated in the DDS, so that each sphere has the correct reflection of the environment. This method speeds up our rendering since we don’t have to calculate the specular lobe of the reflection from the surface, but rather just look up the corresponding mip texture.
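If it helps, the RGBD idea can be sketched in a few lines. This is a simplified illustration of the encoding concept, not Babylon.js's exact shader code: the largest channel determines a divisor, the divisor goes in alpha, and decoding divides it back out to recover the HDR value.

```javascript
// Simplified RGBD encode/decode sketch (illustrative, not Babylon.js's
// exact implementation). All stored channels end up in the 0..1 range
// an 8-bit image can hold.
function encodeRGBD([r, g, b]) {
  const maxComponent = Math.max(r, g, b, 1.0); // keep the divisor <= 1
  const d = 1.0 / maxComponent;                // divisor stored in alpha
  return [r * d, g * d, b * d, d];
}

function decodeRGBD([r, g, b, d]) {
  return [r / d, g / d, b / d];                // divide back out to recover HDR
}
```

For example, the HDR white (3.0, 3.0, 3.0) mentioned below encodes to RGB (1, 1, 1) with 1/3 in alpha, and decoding recovers the original value (subject to 8-bit quantization in the real file).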
From what I am reading, you have six jpg files that you want to convert into an env for your scene. Please let me know if this is correct or if I am misreading the thread. I noticed you are using 6 jpg images as the source, but jpg cannot hold 32-bit values, so calculating your DDS from them will produce poor results. You really need to start from an exr/hdr format so that you can push 32-bit values into the source image before calculating your DDS. And if you are starting from the six projection directions, you will need to assemble them into a cube map like this:
For reference, you can always grab the source debug environment files from my GitHub for testing. The main thing to note here is the order of the faces in your assembled cube map, which is important for how the environment will be calculated. Lys will read a cube map and understand how each projection works, as shown below with Lys converting the original cube map into an equirectangular map.
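To spell out the face order, here is the common horizontal-cross convention written as code. Treat this as a sketch to check against the debug files rather than gospel, since tools can differ in face order and orientation: +Y on top, -Y on bottom, and the middle row running -X, +Z, +X, -Z left to right.

```javascript
// Common horizontal-cross cube map layout (4 columns x 3 rows of faces).
// Verify against the debug environment files; conventions vary by tool.
const CROSS_LAYOUT = {
  "+Y": { col: 1, row: 0 }, // top
  "-X": { col: 0, row: 1 },
  "+Z": { col: 1, row: 1 }, // front, center of the cross
  "+X": { col: 2, row: 1 },
  "-Z": { col: 3, row: 1 }, // back
  "-Y": { col: 1, row: 2 }, // bottom
};

// Pixel offset of a face within the assembled cross, given the face size.
function faceOffset(face, faceSize) {
  const { col, row } = CROSS_LAYOUT[face];
  return { x: col * faceSize, y: row * faceSize };
}
```

With 512-pixel faces, for instance, the +Z face would be pasted at (512, 512) and the whole cross canvas would be 2048 x 1536.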
The white in the default environment has a value of (3.0, 3.0, 3.0), so the environment is truly HDR. The source file was created in Photoshop, so you don’t need any specialized software to create an HDR file, just an image editor that can set or convert values in the image. And I would work on your image already assembled as a cross cube texture so that, if you are changing tones, you make alterations to all faces at once.
The issue @sebavan was referencing earlier is the concern about LDR source images being converted for use as an environment. There isn’t enough range above 1.0 in any channel to act as a light source, so it won’t look good. The best thing to do is to start with an equirectangular exr file, which can be read by Lys or IBLBaker, compute your DDS from there, and then convert to env for size. If you don’t have an equirectangular available, assemble your images into a cross-format cube map, then use Lys to compute and then convert. As far as I can tell, IBLBaker has no way to read a cross cube texture… it only wants equirectangular formatted images.
From your last message it sounded like you might be thinking you needed to pass 6 DDS files instead of 6 jpgs, which isn’t what you want to do, but forgive me if I am reading that wrong. The path above is the best way to create your environment file. Please let me know if you have more questions.