@Necips
Your theory is absolutely sound! 
Yes! Because of the way our human vision operates, whatever we’re looking at in a given moment is what is in sharp focus; the rest of our field of view is a little fuzzier. Your post made me think of a technique I was working on when programming my path-traced game for the browser, The Sentinel: 2nd Look, which is an homage to and remake of Geoff Crammond’s 1986 masterpiece, The Sentinel. I tried to keep the same look and feel of his awesome other-worldly graphics, but since we can do real-time ray tracing and path tracing inside the browser, I decided to add some effects that wouldn’t even be possible with today’s AAA rasterized game engines: real-time double images (no screen-space reflection hacks!) on the solar-panel terrain, real-time correct reflections on the metal mirror sphere (no cube maps!) that the player uses as a 3D cursor for selection, and lastly, directly pertaining to your earlier comment, a real-time depth of field that updates every animation frame and focuses exactly on the metal mirror sphere, which is hopefully where the player’s eyes remain through most of the game.
The Sentinel: 2nd Look
Note: after clicking anywhere to capture the mouse, press Spacebar to cycle through the terrain generator, then press ENTER to enter your player robot.
This is a W.I.P., so the gameplay is not functional yet, but once you’re inside the player’s robot, drag the mouse around and notice how the mirror sphere remains in sharp focus, no matter how close the sphere is to you or how far back in the distance it is on the large terrain. If you look away from the sphere, the illusion is broken, because I as the lowly programmer cannot predict where every player’s eye sockets are going to be rotated, ha! But seriously, while playing, if I keep looking at the sphere, I can’t really tell that the other parts of the picture are slightly out of focus, unless I consciously look at them of course. It all just looks ‘normal’, similar to how I see the world around me.
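For anyone curious how that auto-focus might work under the hood, here is a minimal sketch of the general idea (this is not the actual game code; identifiers like `sphereCenter` and `apertureRadius` are placeholders I made up for illustration). Each animation frame you measure the camera-to-sphere distance and feed it to a thin-lens ray generator as the focal distance:

```ts
// Thin-lens depth-of-field sketch (illustrative only; sphereCenter,
// apertureRadius, etc. are hypothetical placeholders, not game code).

type Vec3 = { x: number; y: number; z: number };

const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const add = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const scale = (a: Vec3, s: number): Vec3 => ({ x: a.x * s, y: a.y * s, z: a.z * s });
const len = (a: Vec3): number => Math.hypot(a.x, a.y, a.z);
const normalize = (a: Vec3): Vec3 => scale(a, 1 / len(a));

// Auto-focus: every animation frame, the focal distance is simply the
// current distance from the camera to the mirror sphere.
function updateFocalDistance(cameraPos: Vec3, sphereCenter: Vec3): number {
  return len(sub(sphereCenter, cameraPos));
}

// Thin-lens primary ray: jitter the ray origin on the aperture disk and
// aim every jittered ray at the same focal point, so geometry at
// focalDistance stays sharp while everything nearer or farther blurs.
function thinLensRay(
  cameraPos: Vec3,
  pixelDir: Vec3,          // normalized direction through this pixel
  camRight: Vec3,          // camera basis vectors (unit length)
  camUp: Vec3,
  focalDistance: number,
  apertureRadius: number
): { origin: Vec3; dir: Vec3 } {
  // The point this pixel's rays all converge on, out at the focal distance
  const focalPoint = add(cameraPos, scale(pixelDir, focalDistance));
  // Uniform random point on the aperture disk
  const r = apertureRadius * Math.sqrt(Math.random());
  const theta = 2 * Math.PI * Math.random();
  const offset = add(
    scale(camRight, r * Math.cos(theta)),
    scale(camUp, r * Math.sin(theta))
  );
  const origin = add(cameraPos, offset);
  return { origin, dir: normalize(sub(focalPoint, origin)) };
}
```

The nice part is that the auto-focus itself is just one distance calculation per frame; in a path tracer, all of the blur then falls out of the aperture jitter for free.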
When I was playing around with this nifty feature, I thought, as you do, that it would be advantageous if we could spend the bulk of the time and precious samples on what really matters: where the user is looking. Unfortunately, neither I nor the technology we possess in 2021 is sophisticated enough to accurately and cheaply track exactly which pixels the player/user is focusing on every split second.
But you’re right: if we could somehow track this eye motion in the near future (I’m positive our technology will get there relatively soon), then graphics techniques such as real-time ray tracing and real-time path tracing could really benefit. I’m not so sure traditional rasterized graphics and games could benefit, though. In traditional/ubiquitous rasterized 3D graphics, the image is geometry-centric, or geometry-bound: the game or app has to loop through all the scene geometry that is in the camera’s current view. The final pixels don’t really care where the player is looking; it’s still pretty much the same amount of work to process the camera’s view beforehand.

But in ray/path tracing, we are pixel-centric, or pixel-bound. We must loop over all the pixels first on traditional CPU tracers (on GPUs, each individual pixel is queried in GPU warps in parallel) and only later ask what geometry we need to process. In this rendering scheme, pointing the camera slightly higher at the sky or background can give you a solid 60 FPS even on mobile, whereas if you point the camera at a more graphically complex / more randomly divergent surface (like crazy terrain or water waves), it might drop down to 30 FPS or worse!

So I believe it would make a big difference if someday we could, as you suggest, spend more calculations on the handful of pixels that the user is actually looking at, and do only the minimum amount of work (or utilize all the non-mathematically-sound, non-perfect sampling cheats) on the vast majority of the image where the user is NOT focusing anyway. We could greatly speed up the GPU calculations, all the while not degrading the ‘perceived’ final quality of the image as the end user navigates the dynamic scene.
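Just to make that last idea concrete, here is a rough sketch of what foveated sample allocation could look like in a pixel-bound renderer (written as a CPU-style loop for clarity; on the GPU, each pixel in a warp would run the same falloff test in parallel). The gaze coordinates and falloff constants are hypothetical, purely for illustration:

```ts
// Foveated sample-budget sketch (illustrative; the gaze coordinates and
// falloff constants are hypothetical, not from any real eye tracker).

type Color = { r: number; g: number; b: number };

const WIDTH = 1920;
const HEIGHT = 1080;
const MAX_SPP = 16;         // full sample budget inside the foveal region
const MIN_SPP = 1;          // bare-minimum samples for the periphery
const FOVEA_RADIUS = 120;   // pixel radius that gets the full budget
const FALLOFF_RADIUS = 500; // beyond this radius, only MIN_SPP

// How many path-tracing samples a pixel deserves, based on its screen
// distance from where the user is (hypothetically) looking.
function samplesForPixel(x: number, y: number, gazeX: number, gazeY: number): number {
  const dist = Math.hypot(x - gazeX, y - gazeY);
  if (dist <= FOVEA_RADIUS) return MAX_SPP;
  if (dist >= FALLOFF_RADIUS) return MIN_SPP;
  // Linear falloff between the fovea and the periphery
  const t = (dist - FOVEA_RADIUS) / (FALLOFF_RADIUS - FOVEA_RADIUS);
  return Math.max(MIN_SPP, Math.round(MAX_SPP * (1 - t)));
}

// Pixel-bound main loop: we visit every pixel first, then decide how much
// geometry/shading work that pixel is worth. tracePath stands in for one
// full path-traced sample through the scene.
function renderFrame(
  gazeX: number,
  gazeY: number,
  tracePath: (x: number, y: number) => Color,
  writePixel: (x: number, y: number, c: Color) => void
) {
  for (let y = 0; y < HEIGHT; y++) {
    for (let x = 0; x < WIDTH; x++) {
      const spp = samplesForPixel(x, y, gazeX, gazeY);
      const accum: Color = { r: 0, g: 0, b: 0 };
      for (let s = 0; s < spp; s++) {
        const c = tracePath(x, y);
        accum.r += c.r; accum.g += c.g; accum.b += c.b;
      }
      // Average the samples and write the pixel out
      writePixel(x, y, { r: accum.r / spp, g: accum.g / spp, b: accum.b / spp });
    }
  }
}
```

Even with a simple linear falloff like this, the vast majority of the frame would run at the bare-minimum sample count, which is where the big speedup would come from.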