Why does Prometheus show 10x more memory usage than pprof for a Go binary?

Ran into a peculiar discrepancy between what Prometheus and pprof report for the memory usage of a Go app. I'm running a Go binary in a Kubernetes cluster that has Prometheus set up to monitor all the pods. While testing changes to the binary I hooked into it with pprof, which shows inuse_space at ~80MB. Meanwhile, Prometheus shows the memory usage at ~800MB, 10x what pprof thinks. Has anyone seen this before? I'm inclined to believe the Prometheus number, but I'm not sure why pprof isn't seeing the same thing.
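
For reference, here is a minimal sketch of how the pprof endpoint is typically exposed, assuming the standard net/http/pprof handler (the actual wiring in my binary may differ):

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
	// Serve the profiling endpoints on a side port; the heap profile is then
	// fetched with: go tool pprof http://localhost:6060/debug/pprof/heap
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... the rest of the application would run here ...
	select {}
}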

pprof output:

(pprof) top50                    
Showing nodes accounting for 74.91MB, 100% of 74.91MB total
Showing top 50 nodes out of 86

Prometheus screen grab:

Prometheus calculation:

max(container_memory_working_set_bytes{container="$container",pod="$pod"} / (1024*1024)) by (container)

^ Seems to be the answer. The app has allocated ~800MB of space that the GC is holding onto in case the app needs it again, but it's only using ~1/4 of it at any given time, which explains the difference.
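
A quick way to confirm this from inside the process (a hedged sketch, not part of the original post) is to dump runtime.MemStats and compare the live-heap numbers, which are roughly what pprof's inuse_space reports, against the memory the runtime is holding onto and what it has obtained from the OS:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)

	mb := func(b uint64) float64 { return float64(b) / (1024 * 1024) }

	fmt.Printf("HeapAlloc:    %6.1f MB  (live objects, ~what pprof inuse_space reports)\n", mb(m.HeapAlloc))
	fmt.Printf("HeapInuse:    %6.1f MB  (heap spans currently in use)\n", mb(m.HeapInuse))
	fmt.Printf("HeapIdle:     %6.1f MB  (heap the runtime keeps around for reuse)\n", mb(m.HeapIdle))
	fmt.Printf("HeapReleased: %6.1f MB  (idle heap already returned to the OS)\n", mb(m.HeapReleased))
	fmt.Printf("Sys:          %6.1f MB  (total obtained from the OS, closer to RSS / working set)\n", mb(m.Sys))
}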

Container and process memory metrics are often misleading. The kernel does not always immediately reclaim memory from a process's RSS. Also, IIRC "working set" includes memory pages that are outside the process itself (e.g. page cache charged to the cgroup).
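
If you want to check whether the gap is mostly GC-retained heap rather than kernel/cgroup accounting, one rough experiment (a sketch; the endpoint path here is made up for illustration) is to force the runtime to return idle heap to the OS and then re-check container_memory_working_set_bytes:

package main

import (
	"log"
	"net/http"
	"runtime/debug"
)

func main() {
	// Hypothetical debug endpoint: FreeOSMemory forces a GC and returns as much
	// idle heap to the OS as possible; afterwards, compare the container's
	// working-set metric with the pprof numbers again.
	http.HandleFunc("/debug/free-os-memory", func(w http.ResponseWriter, r *http.Request) {
		debug.FreeOSMemory()
		w.Write([]byte("freed\n"))
	})
	log.Fatal(http.ListenAndServe("localhost:6061", nil))
}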

I don't get it, is there any formula relating pprof usage to container_memory_working_set_bytes?

I noticed a similar case where pprof top shows 500MB usage while the pod's working set bytes is around 1.3GB.
Also, (chunks_head + WAL) = ~800MB, so what I'm confused about is: how can I link pprof usage, chunks_head, WAL, and system memory usage together?