Running a large backfill and now the server is not deleting obsolete blocks

I am running a large script that migrates data from a Whisper TSDB to a Prometheus TSDB. It is working: I’m using “promtool tsdb create-blocks-from openmetrics”, and at first it behaves as designed. I can see the server deleting obsolete blocks (~100 at a time).
The issue is that we are trying to move roughly 800,000 blocks into the ./data directory. It seems there is a point at which Prometheus gets overwhelmed by the number of blocks in the data directory and stalls out.
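Roughly, the script loops over the exported OpenMetrics files and feeds each one to promtool; the paths, file extension, and the Whisper-to-OpenMetrics export step below are just placeholders for my setup:

    # placeholder paths; the Whisper -> OpenMetrics export step is not shown
    for om_file in /export/openmetrics/*.om; do
        # each run writes one or more blocks into the given output directory
        promtool tsdb create-blocks-from openmetrics "$om_file" /prometheus/data
    done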

Is there a way to ‘kick-start’ the compaction process after it gets into this state, or do I have to clear all the blocks out of ./data and try again?

Any help would be greatly appreciated.