Video and repo of an educational Flight Simulator (OPENFLIGHT) with Babylon on the front end and Julia on the back end

Dear Colleagues,

I’ve been working for the last few months on an educational Flight Simulator to be used during the Flight Mechanics lectures at the Universidad Europea de Madrid (Spain). This is part of the Aerospace Engineering degree and the students will use it to perform virtual flight testing.

The back end is written in Julia, and client-server communication takes place over WebSockets.
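As a rough sketch of what the browser side of such a link might look like: the client sends control inputs as JSON and receives aircraft state back. The message shapes and field names below are purely illustrative assumptions, not taken from the OPENFLIGHT repo.

```javascript
// Hypothetical message shapes -- field names are illustrative only.
// The client sends control inputs and receives aircraft state as JSON.

function encodeControls(controls) {
  // e.g. { elevator: -0.2, aileron: 0.1, rudder: 0, throttle: 0.8 }
  return JSON.stringify({ type: "controls", ...controls });
}

function decodeState(raw) {
  // e.g. { type: "state", position: [...], attitude: [...] }
  return JSON.parse(raw);
}

// Browser-side hookup (not executed here); the server URL is an assumption.
function connect(url, onState) {
  const ws = new WebSocket(url);
  ws.onopen = () => ws.send(encodeControls({ throttle: 0 }));
  ws.onmessage = (ev) => {
    const msg = decodeState(ev.data);
    if (msg.type === "state") onState(msg); // update the Babylon scene
  };
  return ws;
}
```

On the Julia side, a package such as HTTP.jl can serve the matching WebSocket endpoint.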

Most of the code has been developed using generative AI tools, and the beauty is that they produce almost impeccable code for Babylon.

As notable features I would mention:

  • All the terrain and textures are procedurally generated, so no external geometry is loaded, except for the optional aircraft models, which need to be in .glb format (I had some problems with .obj files).

  • The trees use thin instances, and I can “fly” at 60 fps (RTX 3080) with 420,000 objects on the island.
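For context on the thin-instance point: Babylon's thin instances draw many copies of one mesh from a single buffer of 4x4 world matrices (16 floats per instance). Here is a minimal sketch of building such a buffer in plain JavaScript; the tree positions are made up for illustration, not OPENFLIGHT's actual placement logic.

```javascript
// Build one 4x4 world matrix per instance: identity rotation/scale,
// with the translation stored at indices 12, 13, 14 of each matrix
// (Babylon's in-memory matrix layout).

function buildMatrixBuffer(positions) {
  const buffer = new Float32Array(positions.length * 16);
  for (let i = 0; i < positions.length; i++) {
    const o = i * 16;
    buffer[o + 0] = 1;  // identity diagonal
    buffer[o + 5] = 1;
    buffer[o + 10] = 1;
    buffer[o + 15] = 1;
    buffer[o + 12] = positions[i][0]; // tx
    buffer[o + 13] = positions[i][1]; // ty
    buffer[o + 14] = positions[i][2]; // tz
  }
  return buffer;
}

// With a Babylon mesh, the whole forest is then attached in one call:
//   treeMesh.thinInstanceSetBuffer("matrix", buffer, 16);
```

A single draw call per tree mesh is what makes 60 fps with hundreds of thousands of objects plausible.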

You can check out the latest video here:

2025 02 01 flight testing OPENFLIGHT

I would be delighted if you included a short sketch from OPENFLIGHT in your release video :slight_smile:

Looking forward to version 8 of Babylon.

Very best wishes, you guys are great.

Raul Llamas

PS: this is the working repo on GitHub: flt-acdesign/OpenFlight_alpha


Hi Raul, welcome to BabylonJS!

I’m curious about your development process. Can you share (for frontend)

a) roughly how many lines of code / estimated man-hours were saved using AI tools?
b) did the generated code contain bugs/leaks, and if so, was it easy to debug?
c) did you, from relying on AI tools, experience a disconnect, i.e., not understanding what the generated code did, as opposed to coding by hand?
d) in hindsight, was it absolutely necessary to use AI to code?

Thanks!

Thank you for sharing the code. It looks well thought out and organized, and I love the emojis.

Can you share which AI tool and how you prompted some of the generation?

Welcome aboard @Raul !

This project rocks! :muscle:

Hello,

Thanks for your comments.

In short, I could not have developed this at all without the use of AI. So, in that sense, the increase in efficiency is infinite.

a) Complete swaths of the code that are not part of my core competence (WebSocket connections, custom meshes, generative processes for geometry, animations) would have been so hard for me to research using traditional methods that I would not even have started the project.

For a competent developer, I would guess that you can delegate the coding of simple functions to AI and focus more on the architecture (which you need to take care of in any case). I guess that nobody has full expertise in a language, let alone two as in this case, so I can see a clear case for AI in generative code development.

b) The code generally contains bugs, often due to lack of context. Sometimes the AI hallucinates and you need dozens of prompts to make it understand the problem. ChatGPT can be particularly stubborn: it will at some point acknowledge the error but then come up with the same code again. Perplexity with a Claude back end seems to be more relentless and will eventually fix the error by looking at external documentation, for which it provides references. In general, I never got stuck on a concrete error, and the debugging process is actually a great learning opportunity.

c) Yes, definitely. There are large parts of the code that I don’t understand. I don’t really care as long as the code fulfils my requirements. I haven’t encountered any situation where the code was so incomprehensible as to prevent me from modifying it, albeit with the help of AI. You can ask the AI to provide detailed explanations and comments on what the code is doing. As I said, I focused more on the architecture. A curious effect is that, when I coded by hand, I had a clear idea of the structure of the code, but now I have the feeling that I couldn’t explain which part of the code is doing what, and this has at times complicated the clean-up and refactoring.

d) In my case, not only was it absolutely necessary to use AI, it was the enabler. I plan to ask my students to modify the code without worrying about learning the languages first, and just jump straight into “AI-based code editing”. It’s an experiment; we’ll see how it goes. I have certainly learned a lot by looking at AI-generated code, and I’m working with a 12-year-old boy who’s developing his own video games with Babylon in the same way.

I’ll keep you posted.

Raul


Thanks! :slight_smile:


Hi, I have mainly used ChatGPT 4o, then o3-mini-high. In my experience it has the deepest level of abstraction and can deal with blocks of code up to about 700 lines. Perplexity with a Claude 3.5 back end is great when ChatGPT gets stuck, as it researches online for the answer. This is particularly useful for debugging, but its context window is smaller.

Some of the code has been generated with DeepSeek, but I don’t see the advantage over the paid version of ChatGPT that I use. However, I will ask my students to use DeepSeek, as I cannot ask them to spend money as a requirement to complete the assignments.

I will post a video showing examples of the evolution of the code and the general process, including the prompts.

Cheers,
Raul


Did you try Qwen, Mistral and Claude, too?

Only Claude through Perplexity. I think I gave Qwen a go, but by then I was more than happy with my ChatGPT/Perplexity workflow.
