Live drone WiFi camera transmission to browser

How hard would it be to get an OTG WiFi camera to pop up in the browser?

The Eachine ROTG01 Pro UVC OTG 5.8G 150CH Full Channel FPV Receiver w/ Audio is the kind of hardware I'd be using: some sort of USB video receiver.

Basically I wanna get my drone FPV feed into bjs.

I have some ideas I want to try about using the GPU to clean up transmission noise.

Hi Pryme8,

This is a super cool idea! Not something I’ve ever tried before, so everything I have to say is just speculation. It’s fun to speculate, though. :smile:

If the de-noising is your main goal – i.e., you're primarily interested in using Babylon to try to clean up the video feed – then it would almost certainly be easier to run your experiments against a representative video sample than against a live feed. Live sensor data is often crazy difficult to develop against; it requires you to have a lot of interdependent systems active at once, and any anomalies that arise can be difficult to reproduce and study. Working off a recorded video would also let you run your de-noising experiments before you've tackled the transmission problem, decoupling the development of those two solutions.
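For instance (just a sketch; the file name below is a placeholder for whatever sample you record, and I'm assuming you already have a Scene), pointing a VideoTexture at a recorded clip is about as simple as it gets:

```ts
import { Scene, VideoTexture } from "@babylonjs/core";

// Load a recorded FPV clip as a VideoTexture so the de-noising experiments
// can run with no live hardware attached. "sample_feed.mp4" is a placeholder.
function createSampleFeedTexture(scene: Scene): VideoTexture {
    return new VideoTexture(
        "fpvSample",       // texture name
        "sample_feed.mp4", // recorded sample of the drone feed
        scene,
        false,             // generateMipMaps
        true               // invertY
    );
}
```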

Regarding pulling the video up in a browser, I can imagine it being anything from plug-and-play to insanely difficult depending on how the OS treats the camera. Maybe the first/easiest/silliest thing to try: does the camera show up as an option when you query the browser for media devices? If so, you might be able to get the feed just through a VideoTexture, since it uses that code path for webcam videos. If not, you might have to do something fancier, like a browser plugin or something. That’s even further outside my current expertise than we already were; still fun to speculate, though. :upside_down_face: If you try any experiments, keep us posted as to how they go!
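In case it helps, here's roughly what that first check might look like (purely speculative since I haven't touched this hardware; the label matching and the size constraints below are guesses):

```ts
import { Scene, VideoTexture } from "@babylonjs/core";

// List every video input the browser can see; if the ROTG01 enumerates as a
// standard UVC webcam, it should show up here. (Labels may be empty until the
// user has granted camera permission at least once.)
async function findReceiverDeviceId(): Promise<string | undefined> {
    const devices = await navigator.mediaDevices.enumerateDevices();
    const cameras = devices.filter((d) => d.kind === "videoinput");
    cameras.forEach((d) => console.log(d.label, d.deviceId));

    // Guessing at the label here; you may need to pick the device by hand.
    return cameras.find((d) => /UVC|OTG|FPV/i.test(d.label))?.deviceId;
}

// If the receiver does show up, the normal webcam code path should work.
async function createFeedTexture(scene: Scene, deviceId: string): Promise<VideoTexture> {
    return VideoTexture.CreateFromWebCamAsync(scene, {
        deviceId,
        minWidth: 640,
        maxWidth: 1280,
        minHeight: 480,
        maxHeight: 720,
    });
}
```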


Once I get back up north and get home next week, I'll dig in and let you know what I figure out. I'm hoping it comes in as a plug-and-play device; that would be awesome.

My idea for the noise thing is to fill in any noisy area with a texture synthesized from prior frames that did not have noise, up to a certain number of past frames. I was thinking about dividing the input into zones and giving each zone a "tolerance" level for how much noise it can contain. Then I'd just replace pixels identified as errors with the hidden synthesized image. The hope is that this doesn't add too many ms of delay to the output and that it replaces error noise with a color closer to what should actually be there. Later I would add settings to make it more aggressive, control how many frames back it stores, etc. (There's a rough sketch of what I'm imagining below, after the questions.)

Technical problems:
How do I sample a zone and compute a "sum" of its error?

How do I identify error pixels individually and measure how intense the noise is?

How do I discard stored prior frames whose error levels are over the tolerance from the sampling process?

How do I prevent a cascade of synthesis, where a partially reconstructed frame becomes the source for reconstructing the next frames, and so on? Will that eventually distort everything, or will it correct itself over time?
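To make the first two questions concrete, here's the rough shape of what I have in mind (just a sketch; the zone size, tolerance, and error metric are placeholder guesses, and the real version would run in a shader):

```ts
// Rough CPU sketch of the zone/replacement idea. "reference" stands in for
// the synthesized image built from prior frames that were judged clean.
const ZONE_SIZE = 16;
const ZONE_TOLERANCE = 30; // mean per-channel difference allowed per zone (guess)

function cleanFrame(current: ImageData, reference: ImageData): ImageData {
    const { width, height } = current;
    const out = new ImageData(new Uint8ClampedArray(current.data), width, height);

    for (let zy = 0; zy < height; zy += ZONE_SIZE) {
        for (let zx = 0; zx < width; zx += ZONE_SIZE) {
            // "Sum of error" for this zone: absolute RGB difference vs. reference.
            let error = 0;
            let count = 0;
            for (let y = zy; y < Math.min(zy + ZONE_SIZE, height); y++) {
                for (let x = zx; x < Math.min(zx + ZONE_SIZE, width); x++) {
                    const i = (y * width + x) * 4;
                    error +=
                        Math.abs(current.data[i] - reference.data[i]) +
                        Math.abs(current.data[i + 1] - reference.data[i + 1]) +
                        Math.abs(current.data[i + 2] - reference.data[i + 2]);
                    count += 3;
                }
            }

            // If the zone is over tolerance, swap in the synthesized pixels
            // instead of the noisy ones.
            if (error / count > ZONE_TOLERANCE) {
                for (let y = zy; y < Math.min(zy + ZONE_SIZE, height); y++) {
                    for (let x = zx; x < Math.min(zx + ZONE_SIZE, width); x++) {
                        const i = (y * width + x) * 4;
                        out.data[i] = reference.data[i];
                        out.data[i + 1] = reference.data[i + 1];
                        out.data[i + 2] = reference.data[i + 2];
                    }
                }
            }
        }
    }
    return out;
}
```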


Sounds awesome! Sounds like a special case of the image inpainting problem. There are a number of existing approaches to doing that in still images, including a fluid dynamics based approach that’s available in OpenCV, but I think they’re not typically designed for video, and the current open-source implementations (that I know of) are all written for CPU. Your idea sounds really cool; it’ll be fascinating to hear how it works!

For an implementation of what you've described, it actually sounds a little bit like a filter – for example, a low-pass filter over time where the strength of the filter is controlled by the probability of a new pixel being noise. Suppose, for instance, that you had a way to estimate the probability that a given pixel is a noise pixel, perhaps by factoring in the difference from that same pixel in a prior frame alongside the difference/disorder in the surrounding pixel patch. You could then set the color of the filtered pixel to be something like

filteredColor = newestColor * (1.0 - probabilityOfNoise) + priorColor * probabilityOfNoise;

The pixel would then take on more recent colors at differing rates based on the probability that those pixels were noise pixels. More confident pixels would adapt faster, while pixels that look like noise would hang onto the values they had before. This could produce some very strange artifacts in highly and persistently noisy images; but as long as the noise was below some threshold (and as long as it didn’t stay in any one place for too long), it might make a decent “first pass” implementation of the approach you described. Thoughts?
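To make that concrete, a CPU-side sketch might look something like the following (the probability heuristic here is completely made up and would need tuning, and per-frame work like this would ultimately belong in a shader):

```ts
// Sketch of the temporal blend with a made-up noise-probability heuristic:
// a pixel that jumps far from its value in the prior (filtered) frame is
// treated as more likely to be noise and therefore adapts more slowly.
const NOISE_SCALE = 96; // per-channel difference at which probability saturates (guess)

function filterFrame(current: ImageData, prior: ImageData): ImageData {
    const out = new ImageData(
        new Uint8ClampedArray(current.data),
        current.width,
        current.height
    );
    for (let i = 0; i < current.data.length; i += 4) {
        // Crude probability-of-noise estimate from the temporal difference.
        const diff =
            (Math.abs(current.data[i] - prior.data[i]) +
                Math.abs(current.data[i + 1] - prior.data[i + 1]) +
                Math.abs(current.data[i + 2] - prior.data[i + 2])) / 3;
        const probabilityOfNoise = Math.min(diff / NOISE_SCALE, 1);

        // Confident pixels adapt quickly; suspected noise keeps the prior color.
        for (let c = 0; c < 3; c++) {
            out.data[i + c] =
                current.data[i + c] * (1 - probabilityOfNoise) +
                prior.data[i + c] * probabilityOfNoise;
        }
    }
    return out;
}
```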


Thank you for chiming in. I think this has promise and I appreciate your thoughts. I will definitely let you know how it goes.

I see massive applications for this.

@syntheticmagus your description of the pixels' behavior is exactly what I was thinking; pretty sure we are on the same page. You're basically right on with what you're saying: I think a color closer to what should be there has to be better than static.
