Sounds awesome! It sounds like a special case of the image inpainting problem. There are a number of existing approaches for still images, including a fluid-dynamics-based method that's available in OpenCV, but they're not typically designed for video, and the open-source implementations I know of are all CPU-only. Your idea sounds really cool; it'll be fascinating to hear how it works!
For an implementation of what you've described, it actually sounds a bit like a filter – say, a low-pass filter over time where the strength of the filter is controlled by the probability that a new pixel is noise. Suppose you had a way to estimate the probability that a given pixel is a noise pixel, perhaps by factoring in the difference from that same pixel in a prior frame alongside the difference/disorder in the surrounding pixel patch. You could then set the color of the filtered pixel to be something like
filteredColor = newestColor * (1.0 - probabilityOfNoise) + priorColor * probabilityOfNoise;
The pixel would then take on more recent colors at differing rates based on the probability that those samples were noise. Confident pixels would adapt quickly, while pixels that look like noise would hang onto the values they had before. This could produce some very strange artifacts in highly and persistently noisy images, but as long as the noise stayed below some threshold (and didn't sit in any one place for too long), it might make a decent "first pass" implementation of the approach you described. Thoughts?