Convert JPEG or HDR to ENV

I need a way to create “.env” files on the fly. We allow users to upload custom skyboxes, most often as a JPEG or HDR. These files are sent to the backend, where we would like to convert them into Babylon's “.env” format (as it is too CPU intensive to do this on the client). The problem is I cannot figure out how to make this happen.

Does anyone know how to make this happen?

When loading an HDR file you use an HDRCubeTexture, and there is a prefilterOnLoad parameter, the 7th one IIRC :slight_smile:
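
A minimal sketch of that call (the file path and scene variable are placeholders; prefilterOnLoad is the 7th constructor argument):

const hdrTexture = new BABYLON.HDRCubeTexture(
    'environment.hdr', // placeholder URL
    scene,
    512,   // cube map resolution
    false, // noMipmap
    true,  // generateHarmonics
    false, // gammaSpace (HDR data is already linear)
    true   // prefilterOnLoad: bakes the roughness mip chain at load time
);
scene.environmentTexture = hdrTexture;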

Yes, I’m aware, however we don’t want to download the HDR to the client, since it is significantly larger than the ENV. I want to build a pipeline that creates the ENV so that clients download the much smaller ENV and not the HDR, which has performance implications.

ohhhh I see so you could use the same code we have in the inspector: Babylon.js/toolsTabComponent.tsx at master · BabylonJS/Babylon.js · GitHub

Good idea! That helps. Any pointers on how we could implement this in a headless environment (i.e., no browser)?

This won’t be possible, as creating the prefiltered data requires rendering :frowning: You could probably use Puppeteer for it, and @bghgary, it would be cool to see if we could do it in Babylon Native :slight_smile:
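
To make the Puppeteer idea concrete, here is a hypothetical sketch (the page URL and the window.convertToEnv function are assumptions: you would host a small page that loads Babylon.js and exposes the conversion code):

import puppeteer from 'puppeteer';

async function convertHdrToEnvHeadless(hdrUrl: string): Promise<Buffer> {
    const browser = await puppeteer.launch();
    try {
        const page = await browser.newPage();
        // A page you host that loads Babylon.js and exposes the conversion
        // code as window.convertToEnv(url) - hypothetical, not a Babylon API.
        await page.goto('https://example.com/env-converter.html');
        // Return the .env bytes as base64 so they survive evaluate's
        // JSON serialization boundary.
        const base64: string = await page.evaluate(
            (url) => (window as any).convertToEnv(url),
            hdrUrl
        );
        return Buffer.from(base64, 'base64');
    } finally {
        await browser.close();
    }
}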

I think Babylon Native should work as long as this code path doesn’t use any HTML things, which I don’t think it does. That said, there might be some gaps in Babylon Native that prevent this from working. It would be good to find out.

Here is an article I wrote that shows how you can render a single frame. I hope it’s not too difficult to adapt for generating an env instead.

Babylon Native in a Headless Environment | by Babylon.js | Medium

I’m sorry, but didn’t you mention that the HDR/JPEG is uploaded from the client in the first place?

Yes but this is a networked experience. So when one client updates the skybox all clients will get sent the new skybox.

Gotcha. Although you mentioned it’s CPU intensive on clients, would it be acceptable for only one client to incur that cost, generate the .env from the HDR, and upload that to the server? To avoid blocking the UI, you could consider offloading the env generation to a worker process.

A potentially major benefit: you trade client CPU time for not having to transfer the HDR file to the server at all - assuming bandwidth is more “expensive” than CPU cycles, of course :slightly_smiling_face:
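
A rough sketch of the worker idea (the worker file name, message shape, hdrUrl variable, and uploadToServer helper are all hypothetical; note that Babylon needs an OffscreenCanvas to render inside a worker):

// Main thread: hand the conversion off so the UI stays responsive.
const worker = new Worker(new URL('./env.worker.ts', import.meta.url));
worker.postMessage({ hdrUrl }); // hypothetical message shape
worker.onmessage = (e: MessageEvent<ArrayBuffer>) => {
    const envBlob = new Blob([e.data], { type: 'application/octet-stream' });
    uploadToServer(envBlob); // hypothetical upload helper
};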

So you could convert to env on the first client and share it across as an env, instead of generating the env on the server, which would incur more CPU cost?

Yes, this seems to be the way, and it should get the job done. I know how to do this with HDRCubeTexture if the user upload is an HDR, but I’m not sure of the API for JPEG?

Assuming the JPEG is in the proper color space ofc, I think the API is similar… but I need to look up the specifics and update this post :smile:

I will share an example ASAP

With this PR (Fix env texture creation from gamma space by sebavan · Pull Request #13642 · BabylonJS/Babylon.js · GitHub), the following playground will generate an env texture from a JPEG file: https://playground.babylonjs.com/#9Y5589#1
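
For anyone who can’t open the playground, the approach is roughly this (a hedged reconstruction, not the exact playground code; the file name is a placeholder):

const cube = new BABYLON.EquiRectangularCubeTexture(
    'sky.jpg',  // placeholder equirectangular JPEG
    scene,
    512,
    false,      // noMipmap: keep mipmaps so filtering has data to work with
    true,       // gammaSpace: true, since an 8-bit JPEG is gamma encoded
    async () => {
        const buffer = await BABYLON.EnvironmentTextureTools.CreateEnvTextureAsync(cube);
        const blob = new Blob([buffer], { type: 'application/octet-stream' });
        // Download or upload the resulting .env blob here.
    }
);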

@sebavan Okay so I have been working on this with @Evan_Frohlich. Here is where I am right now.

We have a pipeline working that converts HDRs into .env files locally on the client’s machine before the result is uploaded and distributed to all of our users. The function looks like this:

    public async createEnvTextureFromHDR(filepath: string, callback?: () => void) {
        console.log('createEnvTextureFromHDR', filepath);

        const hdrCubeTexture = await Materials.createHDRCubeTextureAsync(
            this._engine,
            filepath,
            1024,  // resolution
            false, // gamma
            true,  // prefilterOnLoad
            true   // generateHarmonics
        ).catch((e) => console.error('createHDRCubeTextureError', e));
        if (!hdrCubeTexture) return;

        const envTextureArgs = {
            imageType: envExportImageTypes[this._envOptions.imageTypeIndex].imageType,
            imageQuality: this._envOptions.imageQuality,
        };
        return EnvironmentTextureTools.CreateEnvTextureAsync(hdrCubeTexture, envTextureArgs).then(
            (buffer: ArrayBuffer) => {
                const blob = new Blob([buffer], { type: 'application/octet-stream' });
                console.log('CreateEnvTextureAsync: ', buffer, blob);
                // TODO: DEBUGGING REMOVE THIS
                // Tools.Download(blob, `envTextBlob ${Date.now()}`);
                return blob;
            }
        );
    }

You can see that we are using createHDRCubeTextureAsync, which is defined here.

const createHDRCubeTextureAsync = async (
    scene: Scene | ThinEngine,
    filepath: string,
    resolution = 512,
    gamma: boolean,
    prefilterOnLoad: boolean,
    generateHarmonics: boolean
) => {
    return await new Promise<HDRCubeTexture>((resolve, reject) => {
        const texture = new HDRCubeTexture(
            filepath,
            scene,
            resolution,
            true, // noMipmap
            generateHarmonics,
            gamma,
            prefilterOnLoad,
            () => resolve(texture),
            () => reject(new Error(`Failed to load HDR cube texture: ${filepath}`))
        );
    });
};

We are actually using a resolution of 1024 instead of 512 for the cube map so that we get better-looking results, and we are using WebP for the images instead of PNG. I discovered a much smaller file size for the env created with WebP (~0.5 MB) vs PNG (~8 MB) at 1024 resolution.
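
Concretely, the envTextureArgs above end up resolving to something like this (the quality value here is illustrative):

const envTextureArgs = {
    imageType: 'image/webp', // default is 'image/png'
    imageQuality: 0.8,       // illustrative lossy quality setting
};
const buffer = await EnvironmentTextureTools.CreateEnvTextureAsync(cubeTexture, envTextureArgs);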

If there are any downsides to using WebP, I would love to know.

The results I get from this seem to be on par with or better than what I get from the IBL texture tool, with a much smaller file size.

Another part of the pipeline we are trying to solve is the straight jpg → env conversion. To do this, I am currently using these two functions.

    public async createEnvTextureFromJpg(filepath: string, callback?: () => void) {
        console.log('createEnvTextureFromJpg', filepath);

        const jpgCubeTexture = await Materials.createEquiRectangularTexture(
            this._scene,
            filepath,
            1024,  // resolution
            false  // gamma
        ).catch((e) => console.error(e));
        if (!jpgCubeTexture) return;

        const envTextureArgs = {
            imageType: envExportImageTypes[this._envOptions.imageTypeIndex].imageType,
            imageQuality: this._envOptions.imageQuality,
        };
        return EnvironmentTextureTools.CreateEnvTextureAsync(jpgCubeTexture, envTextureArgs).then(
            (buffer: ArrayBuffer) => {
                const blob = new Blob([buffer], { type: 'application/octet-stream' });
                console.log('CreateEnvTextureAsync: ', buffer, blob);
                // TODO: DEBUGGING REMOVE THIS
                // Tools.Download(blob, `envTextBlob ${Date.now()}`);
                return blob;
            }
        );
    }

With createEquiRectangularTexture defined here:

const createEquiRectangularTexture = async (scene: Scene, filepath: string, resolution = 512, gamma: boolean) => {
    return await new Promise<EquiRectangularCubeTexture>((resolve, reject) => {
        const texture = new EquiRectangularCubeTexture(
            filepath,
            scene,
            resolution,
            true, // noMipmap (see follow-up below: setting this to false improved the reflections)
            gamma,
            () => resolve(texture),
            () => reject(new Error(`Failed to load equirectangular texture: ${filepath}`))
        );
    });
};

The resulting env file from this conversion seems to be missing some information, and is similar to the results I was getting from the HDR conversion before I set prefilterOnLoad to true. Reflections come through and the skybox changes, but the roughness and metallic values on my materials seem to be binary rather than a spectrum (very shiny or very rough, with no in-between). It also takes much longer for the jpg → env to load than the hdr → env. The other thing I noticed is that specular anti-aliasing gets screwed up with the non-prefiltered (jpg → env) version.

One obvious solution is to just convert the jpg to an hdr before the conversion to the env file begins, but if there is a way to avoid that and prefilter the EquiRectangularCubeTexture directly, I would love to know. I would also love to know what exactly is happening behind the scenes, if anyone has any insight.
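
One avenue I have been meaning to try, though I have not verified it works for EquiRectangularCubeTexture: Babylon exposes an HDRFiltering class, which is what prefilterOnLoad uses under the hood, so it may be callable directly on an already-loaded cube texture:

// Sketch only: prefilter an existing cube texture manually.
const filtering = new BABYLON.HDRFiltering(engine);
await filtering.prefilter(jpgCubeTexture); // bakes the roughness mip chain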

I saw the PR at the end of this thread and if that is the solution then great. I just wanted to post this so others can see where I ended up.

Photos for reference: The first two are hdr-env and the next two are jpg-env. Thanks!

Are the JPEGs you’re using representing 32-bit per pixel color values in the image? It sounds like you’re getting stuck in gamma space.

I do not think the JPEGs are in a 32-bit color space; the bit depth is 24, coming from Blockade Labs.

Looking at your playground I noticed I had noMipmap on EquiRectangularCubeTexture set to true and you had it set to false. I changed that on my end and now I am getting better reflections.
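
In other words, the noMipmap argument in createEquiRectangularTexture above changes from true to false (shown here with placeholder callbacks):

const texture = new EquiRectangularCubeTexture(
    filepath,
    scene,
    resolution,
    false, // noMipmap: false, so mipmaps are generated for filtering
    gamma,
    () => resolve(texture),
    () => reject(new Error('load failed'))
);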

As far as gamma vs. linear goes, I think you are right and I am getting stuck in gamma space. There is some confusion here for me, though.

In your playground example, you have gammaSpace set to true and the material is PBR. This displays the correct result, and if I change gammaSpace to false it looks similar to the results I am getting on my end.

I thought this boolean should be set to false, since the documentation says this:

*@param* `gammaSpace`
Specifies if the texture will be used in gamma or linear space (the PBR material requires those textures in linear space, but the standard material would require them in Gamma space)

Is the boolean flipped backward or am I misunderstanding something?

I tried setting the gammaSpace boolean to true on my side, but I am not seeing a difference in the output. Would this be because the source jpgs are 24-bit? (Edit: this was not working because the fix to EnvironmentTextureTools from the PR above had not been applied yet.)

Thanks for all of the help!

Edit: Modified playground showing the env as a skybox for easier visualization: https://playground.babylonjs.com/#9Y5589#2
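
For reference, loading a generated .env back in is just the standard prefiltered-texture path (the file name is a placeholder):

const envTexture = BABYLON.CubeTexture.CreateFromPrefilteredData('skybox.env', scene);
scene.environmentTexture = envTexture;
// Creates a skybox mesh that uses the environment texture (PBR mode).
scene.createDefaultSkybox(envTexture, true);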