Using forceSharedVertices() to enable ray picking breaks my morph targets

Hey all, hope you’re enjoying a good holiday season.

I’m currently having trouble making ray picking and a mesh morph compatible with each other. For context, this is a rendering of a brain based on a real patient, available for public access. I processed the data into text files for the faces, vertices, and the morph targets (in this case an inflation of the hemispheres), and import them using textFileTasks.

At the moment these meshes are very large, over 100k vertices, so basic ray picking fails, giving me an ‘invalid array length’ error. This can be solved by calling forceSharedVertices() on the meshes, but doing that makes the morph fail (with an error), as the vertices of the mesh and its morph target no longer correspond 1:1.

pg: You can uncomment forceSharedVertices() at lines 95 & 119 to try out the picking (any errors I get are output to my browser’s console)

I’d like to be able to use both in my scene. Any suggestions on what I can do? Thanks in advance!

Your text files end with a final carriage return, so the arrays you create when doing task.text.split(/\r\n|\n/).map(Number); have an extra element: the trailing empty string is mapped to NaN.

As you will see, picking does not work when morph is applied. That’s because the deformations are calculated on the GPU and the modified vertices are not available on the CPU.

Your best bet would probably be to use GPU picking, see my answer here:


Thanks! Given the way I process each line into the array, the carriage return is something I easily overlooked. There may also be a fix on the text-file side that would avoid having to remove the last element of the array.

GPU picking is a great way to make the morph functional with picking, so it is something I am checking out. I was more immediately worried that I couldn’t do both within my scene at the same time (i.e. pick when the mesh isn’t morphed, then switch to morphing seamlessly).

Another follow-up related to GPU picking, and how it might relate to morphs: how well does the equation for how morphs are displayed on the screen in the doc (Morph Targets | Babylon.js Documentation) work in the reverse direction? For example, if I used GPU picking to find a particular face/vertex in its morphed form, is there a way to determine where that location is in the original mesh? It’s unclear to me whether I can do something simple such as comparing the indices of a particular face/vertex between the original mesh and the morph target.
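For what it’s worth, the per-vertex blend in the docs is finalPos = basePos + influence * (targetPos - basePos), and the base mesh and its morph targets share the same vertex ordering. So once you know a vertex index on the morphed mesh, you can index the base positions directly, with no inversion needed. A small sketch with hypothetical flat position arrays:

```javascript
// Hypothetical flat position arrays [x0, y0, z0, x1, y1, z1, ...].
// Index i refers to the same vertex in both arrays (1:1 correspondence).
const basePositions   = [0, 0, 0,  1, 0, 0];
const targetPositions = [0, 1, 0,  1, 1, 0];
const influence = 0.5;

// Per-vertex morph blend as described in the Babylon.js docs:
// finalPos = basePos + influence * (targetPos - basePos)
function morphedPosition(i) {
  const out = [];
  for (let c = 0; c < 3; c++) {
    const b = basePositions[3 * i + c];
    const t = targetPositions[3 * i + c];
    out.push(b + influence * (t - b));
  }
  return out;
}

// Vertex 1 in its morphed form...
const morphed = morphedPosition(1); // [1, 0.5, 0]
// ...and its original position is simply basePositions at the same index:
const original = basePositions.slice(3 * 1, 3 * 1 + 3); // [1, 0, 0]
```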

With GPU picking, you render your objects using a different color for each object, or if you want finer results, for each face. If you assign a different color to each face (for example, the face index), retrieving the color pixel under the mouse pointer will give you the face index. With this method, you can even retrieve the 3D coordinates of the point if you get the depth from the depth buffer.
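One possible encoding for the face-index-as-color step (hypothetical helpers; in practice Babylon would render these colors into a render target and you would read back the pixel under the pointer):

```javascript
// Pack a face index into an 8-bit-per-channel RGB color and back.
// Three channels give 2^24 distinct indices, plenty for 100k+ faces.
function faceIndexToRGB(index) {
  return [
    (index >> 16) & 0xff, // red: high byte
    (index >> 8) & 0xff,  // green: middle byte
    index & 0xff,         // blue: low byte
  ];
}

function rgbToFaceIndex([r, g, b]) {
  return (r << 16) | (g << 8) | b;
}

// Round-trip: the pixel color read under the mouse decodes
// back to the face index it was rendered with.
const rgb = faceIndexToRGB(123456);
const faceIndex = rgbToFaceIndex(rgb); // 123456
```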