Is there any sort of rough calculation for comparing the energy cost of a GPU MAC against the energy expended doing a texture lookup from GPU DRAM? The intention is to compare energy draw for "computed" textures vs "lookup" textures, and to figure out the number of MACs at which "computed" textures become more expensive, energy-wise, than "lookup" textures.
I don’t use MAC but I’m sure you should be able to find tools that give you that info
You may find this link useful: How To Measure The Power Consumption of Your Frontend Application | Sustainable Software (microsoft.com)
Simple and elegant approach. Brilliant link, thanks!
I failed to clarify earlier: MAC, "multiply and accumulate", refers to GPU SIMD floating-point operations. I'm trying to figure out where computational energy consumption exceeds DRAM energy consumption when using a large texture, i.e. computing textures vs doing texture lookups on the GPU. This will do the trick.
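For a rough back-of-envelope answer, one common approach is to compare published per-operation energy estimates. The figures below are assumptions for illustration only (order-of-magnitude values in the spirit of Horowitz's widely cited ISSCC 2014 estimates, which put a 32-bit FP operation at a few to tens of picojoules and a 32-bit DRAM read at hundreds of picojoules); actual numbers vary greatly by process node, memory type, and access pattern, so substitute measured values for your target GPU:

```python
# Back-of-envelope break-even: how many MACs cost as much energy as one
# DRAM texture fetch? All energy values below are ASSUMED placeholders
# (rough, Horowitz-style order-of-magnitude figures), not measured data.

E_MAC_PJ = 5.0          # assumed energy per 32-bit FP multiply-accumulate (pJ)
E_DRAM_WORD_PJ = 640.0  # assumed energy per 32-bit word read from DRAM (pJ)

# A filtered texture lookup touches several texels (bilinear reads 4);
# caches absorb much of this in practice, so treat this as a worst case.
TEXELS_PER_LOOKUP = 4

# MAC count at which computing the texel costs as much as fetching it.
break_even_macs = (E_DRAM_WORD_PJ * TEXELS_PER_LOOKUP) / E_MAC_PJ
print(f"Break-even: ~{break_even_macs:.0f} MACs per lookup")
```

Under these assumed numbers the break-even lands in the hundreds of MACs per lookup, which matches the common intuition that DRAM traffic dominates GPU energy; if the texture is cache-resident the fetch cost drops by an order of magnitude or more and the break-even shifts sharply toward lookups.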