Lighting and shadowing (Sketchfab imitation)

Hello people.

I am trying to create something like the Sketchfab viewer, but I have some issues with shadows.

Here is the exact model that I am using in the Sketchfab viewer (go to the 3D tab to see it):

ASP-134488-01 footprint & symbol by Samtec Inc. | SnapEDA.

Now you can see that the model has shadows on it, and if you use Alt + left mouse button you can move the light around and the shadows change.

I have several issues here. First, I thought they were using some kind of HDRI lighting, but the model doesn’t have a PBRMaterial on it, so I guess that’s not the case. My guess is that they are using a point light in that scene, and a point light plus a hemi light works just fine for the lighting part in my viewer. But after I tried implementing shadows (self-shadowing), I just cannot get them to look right (my results are not even bad, they are just NOT xD).

I never completely understood how to use shadows properly. Bias confuses me, but when needed I always managed to get some result after a bunch of tweaking. In this case, though, I cannot even get a good starting point.
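For reference, this is roughly the kind of setup I keep tweaking (a simplified sketch; the values are just my current guesses, and a plain object stands in for the real `BABYLON.ShadowGenerator` so the snippet runs on its own):

```javascript
// Sketch of a typical Babylon.js shadow configuration (assumption: these
// are common starting values, not a known-good recipe). In a real scene,
// `shadowGenerator` would come from `new BABYLON.ShadowGenerator(1024, light)`.
function configureShadows(shadowGenerator) {
  // A small bias pushes the depth comparison away from the surface,
  // trading shadow acne for a little peter-panning.
  shadowGenerator.bias = 0.0005;
  // Normal bias offsets along the surface normal instead, which often
  // behaves better for self-shadowing on curved geometry.
  shadowGenerator.normalBias = 0.02;
  // Soft filtered shadows (PCF needs WebGL2).
  shadowGenerator.usePercentageCloserFiltering = true;
  return shadowGenerator;
}

const generator = configureShadows({});
console.log(generator.bias); // → 0.0005
```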

So any suggestions will do. Also, if you have ideas on how they implemented the lighting and what they used, that would be helpful.

Here is a playground with the model and the basic setup I use in my scene.

Thank you

You can have a look at our viewer: The Babylon.js Viewer - Babylon.js Documentation

It provides similar features. To make shadows work, it normalizes the scene (e.g. making sure the scene fits in a 1x1x1 box, which helps self-shadowing).

To do so, here is the code used:
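In essence (a simplified sketch of the idea, not the viewer’s actual source), it measures the whole model and scales it so the largest extent is 1:

```javascript
// Sketch of scene normalization (assumption: this is the idea, not the
// viewer's real code). Each entry in `bounds` is a mesh's world-space
// bounding box as plain {min, max} objects, so the snippet runs anywhere.
function computeNormalizationScale(bounds) {
  const min = { x: Infinity, y: Infinity, z: Infinity };
  const max = { x: -Infinity, y: -Infinity, z: -Infinity };
  for (const b of bounds) {
    for (const axis of ["x", "y", "z"]) {
      min[axis] = Math.min(min[axis], b.min[axis]);
      max[axis] = Math.max(max[axis], b.max[axis]);
    }
  }
  const extent = Math.max(max.x - min.x, max.y - min.y, max.z - min.z);
  // Scale so the largest dimension of the whole model becomes 1.
  return extent > 0 ? 1 / extent : 1;
}

// In Babylon, this scale would then go on a common root node, e.g.
//   root.scaling.setAll(scale);
const scale = computeNormalizationScale([
  { min: { x: -2, y: 0, z: -1 }, max: { x: 2, y: 1, z: 1 } },
]);
console.log(scale); // → 0.25
```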

Thank you for the answer.

When I install babylonjs-viewer via npm and import it with
`import 'babylonjs-viewer'`
I get a bunch of errors.

There are 3x more errors than shown in the image.

Any idea what I am doing wrong?

You also need to import Babylon.js, I guess.
Pinging @RaananW

Also, your scene is made of countless meshes; I guess you should set the receiveShadows flag / call addShadowCaster on all of them… but you may kill your fps by doing so.
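Something along these lines (a sketch with plain stand-ins for `scene.meshes` and the `ShadowGenerator`; `receiveShadows` and `addShadowCaster` are the actual Babylon members):

```javascript
// Sketch (assumption: simplified stand-ins instead of real Babylon objects
// so the snippet runs standalone).
function enableShadowsOnAll(meshes, shadowGenerator) {
  for (const mesh of meshes) {
    mesh.receiveShadows = true;            // shadows can be drawn on this mesh
    shadowGenerator.addShadowCaster(mesh); // mesh is rendered into the shadow map
  }
}

const casters = [];
const stubGenerator = { addShadowCaster: (m) => casters.push(m) };
const meshes = [{ name: "a" }, { name: "b" }];
enableShadowsOnAll(meshes, stubGenerator);
console.log(meshes[0].receiveShadows, casters.length); // → true 2
```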

I did import babylonjs.

Yea, that’s kinda tricky. Basically I need it only on that main part, but the problem is that I will not be the one uploading these models. It should be universal: whichever mesh is imported, it should work as intended. With that said, it is impossible (at least I believe so) to detect which part is the main one among 800+ parts.

So my idea is to render shadows once on load, so I don’t kill fps as you say. And if I add a feature to rotate the light around (not sure if it will be a feature at all), I thought I could keep updating the shadows while the light is rotating, and once it stops, stop updating them, so you can rotate the camera around the mesh as usual.

Not sure if that’s possible. I first want to get a good shadow; I will think about the other things later. :slight_smile:
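Something like this is what I have in mind (a sketch, untested; in a real scene the calls would go on `shadowGenerator.getShadowMap()`, which is a `RenderTargetTexture`, and here a stub object stands in so it runs anywhere):

```javascript
// Sketch of "render shadows once, refresh only while the light moves"
// (assumption: simplified, not verified against a real scene).
function makeShadowController(shadowMap) {
  // 0 is BABYLON.RenderTargetTexture.REFRESHRATE_RENDER_ONCE:
  // the shadow map renders a single time and is then frozen.
  shadowMap.refreshRate = 0;
  return {
    // Call this while the light is being rotated: in Babylon,
    // resetRefreshCounter() forces one more render of the texture.
    onLightMoved() { shadowMap.resetRefreshCounter(); },
  };
}

let renders = 0;
const stubShadowMap = { refreshRate: -1, resetRefreshCounter: () => renders++ };
const controller = makeShadowController(stubShadowMap);
controller.onLightMoved();
controller.onLightMoved();
console.log(stubShadowMap.refreshRate, renders); // → 0 2
```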

I also wanted to merge every mesh that shares the same material (because on that specific model, among the 800+ parts there are only 3 materials; most of those parts are the white ones). I found a way to do that, but it was brutal on the processor to merge 800 parts the way I did it.

Basically, I don’t know how many parts or meshes will be in the uploaded model (as it should be universal and work for every model uploaded).

So I got all materials of the scene with a for loop


and then I called this thing.
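Roughly, the idea was: group the meshes by material, then merge each group (a sketch with plain objects; the real merge call per group would be `BABYLON.Mesh.MergeMeshes(group, true)`):

```javascript
// Sketch (assumption: the grouping is plain JS with stand-in objects;
// only the merge itself would need the real Babylon API).
function groupByMaterial(meshes) {
  const groups = new Map();
  for (const mesh of meshes) {
    const key = mesh.material ? mesh.material.name : "none";
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(mesh);
  }
  return groups;
}

const white = { name: "white" };
const metal = { name: "metal" };
const groups = groupByMaterial([
  { material: white }, { material: metal }, { material: white },
]);
// Then, per group: BABYLON.Mesh.MergeMeshes(group, true);
console.log(groups.get("white").length); // → 2
```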


If there is a better approach that you know of, I could consider it. But I believe there is no lightweight solution to this (I would like the loading time not to be infinite), so I probably will not try to implement this at all.

Let’s wait to see if @RaananW has an idea

Also, did you make sure to use the same version for the viewer and Babylon?

Sure. Thank you.

Yes, I did use the same versions.


Also, I tried to implement normalizing in my example, because I had some issues when importing meshes with huge differences in size (some small, some huge). I had to find ways to adjust camera.radius, wheelDelta, and panningSensibility, and that kinda works for me, but my second idea was to always scale meshes to fit the same size (say, a 1x1x1 cube), so the viewer behaves consistently without me adjusting several values and hoping they will work for every possible mesh.

So, as I understand it, normalizing does just that?

But after implementing that part of the code, I got this:

So different parts of the mesh scaled differently.

Oh, I get what’s going on: every part is scaled to 1x1x1 independently. So basically I would have to group everything somehow to get it all scaled together? Am I right? How could I achieve that?

Ok, I solved this by creating a box, parenting everything to that box, and scaling that box. That brings some other issues and I will need to change some things in my code, but it was helpful. Thank you DK.
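For completeness, the idea in miniature (plain objects stand in for Babylon meshes; in a real scene the root could be a `BABYLON.TransformNode` and the bounds could come from `root.getHierarchyBoundingVectors()`):

```javascript
// Sketch of "parent everything to one node, then scale only that node"
// (assumption: simplified stand-ins so the snippet runs standalone).
function normalizeUnderRoot(root, meshes, worldMin, worldMax) {
  for (const mesh of meshes) mesh.parent = root; // group first...
  const extent = Math.max(
    worldMax.x - worldMin.x,
    worldMax.y - worldMin.y,
    worldMax.z - worldMin.z
  );
  // ...then scale only the root, so all parts keep their relative sizes.
  root.scale = extent > 0 ? 1 / extent : 1;
}

const root = { scale: 1 };
const meshes = [{}, {}];
normalizeUnderRoot(root, meshes, { x: 0, y: 0, z: 0 }, { x: 8, y: 2, z: 4 });
console.log(root.scale, meshes[0].parent === root); // → 0.125 true
```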

haha, you are faster than me :slight_smile: I was about to tell you to do that

good job!