Alert only if something fails at least twice

Hi all:

I am using the Blackbox Exporter to check whether a website is still up, like this:

- alert: Blackbox15mProbeFailed
  expr: avg_over_time( probe_success{job="blackbox-http-15m"}[30m] ) < 0.45
  for: 0s
  labels:
    severity: critical
  annotations:
    summary: Blackbox probe failed (instance {{ $labels.instance }})
    description: "Probe failed\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

I want to probe once every 15 minutes, and have an alert if 2 probes fail.

This question is actually about the general technique with such metrics and is not specific to the Blackbox Exporter.

Initially, I considered using “for: 30m”, but I ran into trouble with it because of staleness; see below.

Now I wonder whether there is a better way to write that kind of alert rule.

Question 1)

Is it possible to replace the [30m] range with something not based on time? I want an alert if at least 2 probes fail, regardless of the scrape_interval for that particular job.

The aim is to have a single alert rule for different jobs with different scrape_interval values.
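One idea I had was to count the failed samples directly instead of averaging them, so the threshold is a sample count rather than a ratio. This is only a sketch: the [1h] window is an arbitrary assumption on my part, and it would still have to be wide enough to cover at least 2 scrape intervals of the slowest job sharing the rule:

```yaml
# Sketch: number of samples in the window, minus the number of successful
# ones, gives the number of failures. The [1h] range is an assumption and
# must span at least 2 scrape intervals of the slowest job using this rule.
- alert: BlackboxTwoProbesFailed
  expr: (count_over_time(probe_success[1h]) - sum_over_time(probe_success[1h])) >= 2
  for: 0s
  labels:
    severity: critical
  annotations:
    summary: At least 2 Blackbox probes failed (instance {{ $labels.instance }})
```

This still embeds a time range, though, so it only loosens the dependency on scrape_interval rather than removing it.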

Question 2)

I had trouble in the past with avg_over_time() because the default staleness is 5 minutes, but unfortunately, I did not keep detailed notes about what exactly went wrong.

I tried to learn more about it; see the section “Staleness” under “Gotchas” here:

But I am afraid I do not understand the concepts there.

Is there a way to avoid avg_over_time() so that I do not have to worry about staleness anymore?
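As far as I understand, the 5-minute staleness rule applies to instant vector selectors, while the _over_time() functions work on whatever raw samples fall inside the range, so they should not be affected by it. Based on that (possibly wrong) understanding, one alternative I am considering is max_over_time(), which fires only when every sample in the window is 0, i.e. all probes in the window failed. The [31m] range is my assumption: slightly more than 2 x the 15-minute scrape interval, so the window reliably contains at least 2 samples.

```yaml
# Sketch: fire only if every probe_success sample in the window is 0.
# [31m] is an assumption: just over 2 x the 15m scrape interval, so the
# window should always contain at least 2 samples; adjust per job.
- alert: BlackboxAllProbesFailed
  expr: max_over_time(probe_success{job="blackbox-http-15m"}[31m]) == 0
  for: 0s
  labels:
    severity: critical
  annotations:
    summary: All Blackbox probes failed in the last 31m (instance {{ $labels.instance }})
```

Note that this is stricter than my original rule: with 3 samples in the window, it needs all 3 to fail, not just 2 of them.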

Thanks in advance,
rdiez