ChatGPT & 3D talking models

Hi everyone :wave: I’ve been playing around with a little experiment, trying to see how a 3D talking model would look when paired with ChatGPT-generated messages and text-to-speech. Here’s what I got so far:

The 3D model has a couple of animations that loop while the character is talking, and the talking facial animation is just the mouth opening and closing at random intervals. There's still a lot that could be improved, but I decided to share anyway.

Hope you find this project interesting, and I'd love to hear any feedback or suggestions you may have :pray:


COOOOOOOOLLLL :blush:

Please make a version as an amorphous neural-net Eldritch point cloud, and run the speech synthesizer through reverb and pitch shifters in Tone.js
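A minimal sketch of that effects chain, assuming the TTS output lands in an audio clip (the file name here is hypothetical, and Tone.start() has to be called from a user gesture first):

import * as Tone from "tone";

// Route the speech clip through a pitch shifter and a long reverb tail.
const pitchShift = new Tone.PitchShift({ pitch: -4 }); // down 4 semitones
const reverb = new Tone.Reverb({ decay: 6, wet: 0.5 }); // suitably eldritch

// "tts-output.mp3" is a placeholder for wherever the synthesized speech ends up.
const player = new Tone.Player("tts-output.mp3", () => {
  player.start(); // play once the clip has loaded
});

// player -> pitch shifter -> reverb -> speakers
player.chain(pitchShift, reverb, Tone.getDestination());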


Cool. You could use one mouth blendshape per vowel and match the facial animation to the vowels being spoken. Most VRM avatars are already rigged for that.
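A minimal sketch of the idea, assuming the vowel blendshapes are exposed as Babylon.js MorphTargets and follow the VRoid naming that shows up later in this thread:

import type { MorphTarget } from "@babylonjs/core";

// The five VRM vowel visemes; the name pattern is an assumption based on
// the "Fcl_MTH_A" target mentioned further down.
const VOWELS = ["A", "I", "U", "E", "O"] as const;

function buildVowelMap(
  getMorphTargetByName: (name: string) => MorphTarget
): Map<string, MorphTarget> {
  const map = new Map<string, MorphTarget>();
  for (const vowel of VOWELS) {
    map.set(vowel, getMorphTargetByName(`Face.M_F00_000_00_Fcl_MTH_${vowel}`));
  }
  return map;
}

// Snap the mouth to one vowel shape: zero every viseme, then raise the match.
function showVowel(vowelMap: Map<string, MorphTarget>, vowel: string): void {
  for (const target of vowelMap.values()) {
    target.influence = 0;
  }
  const match = vowelMap.get(vowel.toUpperCase());
  if (match) {
    match.influence = 1;
  }
}

The harder part is driving this from real viseme timing, e.g. per-phoneme timestamps from the TTS engine.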


This is amazing!!
Maybe you could use animation blending with a coroutine to smooth the transitions between character animations.
Here is a piece of code that may help you:

function* animationBlending(fromAnim, fromAnimSpeedRatio, toAnim, toAnimSpeedRatio, repeat)
{
    // Cross-fade between two Babylon.js AnimationGroups: ramp the new
    // animation's weight from 0 to 1 while the old one fades out.
    let currentWeight = 1;
    let newWeight = 0;
    toAnim.play(repeat);

    fromAnim.speedRatio = fromAnimSpeedRatio;
    toAnim.speedRatio = toAnimSpeedRatio;

    while(newWeight < 1)
    {
        // Clamp so floating-point drift never pushes the weights out of [0, 1].
        newWeight = Math.min(newWeight + 0.01, 1);
        currentWeight = Math.max(currentWeight - 0.01, 0);
        toAnim.setWeightForAllAnimatables(newWeight);
        fromAnim.setWeightForAllAnimatables(currentWeight);
        yield; // resume on the next frame
    }
}

// Run the coroutine: the observable advances the generator once per frame.
//scene.onBeforeRenderObservable.runCoroutineAsync(animationBlending(fromAnim, 1.0, toAnim, 1.0, true));

This is very COOL! :star_struck: The model is so adorable, did you make it?

I do have one piece of feedback: for me, the colors in the chatbox have almost zero contrast, so I can't read it:
[screenshot of the chatbox showing barely legible text]

When looking for color combos, one website I really appreciate is Randoma11y, since it suggests pairs of colors with accessibility in mind :smiley:
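For context, "almost zero contrast" can be made precise with the WCAG contrast ratio; here's a quick sketch of the standard formula (not code from this project):

// WCAG 2.x relative luminance of an 8-bit sRGB channel.
function channelToLinear(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
}

function relativeLuminance(r: number, g: number, b: number): number {
  return (
    0.2126 * channelToLinear(r) +
    0.7152 * channelToLinear(g) +
    0.0722 * channelToLinear(b)
  );
}

// Ranges from 1 (identical colors) to 21 (black on white).
// WCAG AA asks for at least 4.5:1 for normal-size text.
function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number]
): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
}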

Thank you! The model is a VRoid model - this one, with a different uniform. I did some work on it, though - I added the animations in Blender, converted the shaders to a single Principled BSDF, and made some other minor tweaks here and there.

Apologies for the chat colors being messed up :grimacing: there’s definitely a bug somewhere. The text color should be black, and the input box background color should be white. I’ll look into it.


Is this robot generated from code?

Hey wudao :wave:

I used VRoid Studio to export the model, and then imported it into Blender with a VRM Blender extension. After that, I had some animations lying around which I had to retarget to the VRM rig; I did this with the Rokoko Blender plugin. Once I'm done working on the model in Blender, I export it with the Babylon.js Blender plugin - this generates a .babylon file, which I then import with SceneLoader.ImportMeshAsync.
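That last step looks roughly like this (a sketch; the folder and file names are placeholders):

import { Scene, SceneLoader } from "@babylonjs/core";

async function loadAvatar(scene: Scene) {
  const result = await SceneLoader.ImportMeshAsync(
    "",               // empty string = import every mesh in the file
    "assets/",        // placeholder folder
    "avatar.babylon", // the file exported from Blender
    scene
  );

  // The result exposes the meshes, the retargeted skeleton, and the
  // animation groups that were authored in Blender.
  return result;
}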

The mouth movement is done with the VRM model's morph targets. To be more specific, I'm just using the "A" mouth morph target the whole time. I have a talk() function that is called whenever sound is available. Here's what it looks like:


  talk() {
    // Don't start a second mouth loop if one is already running.
    if (this.isTalking) {
      return;
    }

    // Each open/close segment of the mouth takes a random duration in this range.
    const maxTime = 0.5;
    const minTime = 0.2;

    const aTarget = this.getMorphTargetByName("Face.M_F00_000_00_Fcl_MTH_A");

    // When one open/close cycle finishes, queue another with fresh random timing.
    const vowelAnimEnd = () => {
      playMorphTargetAnim(
        "aSoundAnim",
        [0, random(minTime, maxTime), random(minTime, maxTime)],
        [0, 1, 0], // influence: closed -> fully open -> closed
        aTarget,
        vowelAnimEnd,
        this.scene
      );
    };

    this.isTalking = true;
    playMorphTargetAnim(
      "aSoundAnim",
      [0, random(minTime, maxTime), random(minTime, maxTime)],
      [0, 1, 0],
      aTarget,
      vowelAnimEnd,
      this.scene
    );
  }

The playMorphTargetAnim helper looks like this:

import { Animation, IAnimationKey, MorphTarget, Scene } from "@babylonjs/core";

export const playMorphTargetAnim = (
  name: string,
  durationsSeconds: number[],
  values: number[],
  morphTarget: MorphTarget,
  endCallback: () => void,
  scene: Scene
) => {
  const keyFrames: IAnimationKey[] = [];
  const framesPerSecond = 60;

  // Convert the per-segment durations into absolute keyframe positions.
  let previousFrame = 0;
  for (let i = 0; i < values.length; i++) {
    const currentFrame = previousFrame + durationsSeconds[i] * framesPerSecond;

    keyFrames.push({
      frame: currentFrame,
      value: values[i],
    });

    previousFrame = currentFrame;
  }

  const lastFrame = keyFrames[keyFrames.length - 1];

  // Animate the morph target's "influence" property through the keyframes.
  const morphAnimation = new Animation(
    name,
    "influence",
    framesPerSecond,
    Animation.ANIMATIONTYPE_FLOAT
  );
  morphAnimation.setKeys(keyFrames);
  morphTarget.animations = [morphAnimation]; // replace any previous animation

  scene.beginAnimation(morphTarget, 0, lastFrame.frame, false, 1, endCallback);
};

The eye opening/closing is also done using morph targets. Everything else, however, uses "regular" bone animations.
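A blink, for instance, is essentially a reuse of the same helper - something like this (sketch only; the exact eye morph target name may differ):

  blink() {
    // "Fcl_EYE_Close" is a guess following the VRoid naming convention above.
    const closeTarget = this.getMorphTargetByName("Face.M_F00_000_00_Fcl_EYE_Close");

    playMorphTargetAnim(
      "blinkAnim",
      [0, 0.1, 0.1], // shut in ~0.1 s, reopen in ~0.1 s
      [0, 1, 0],     // influence: open -> closed -> open
      closeTarget,
      () => {},      // no-op end callback: a single blink, no loop
      this.scene
    );
  }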

Does this answer your question? If not, let me know.


Very cool to see how it works! :smiley: