How to generate an unbiased estimate of 1/E[x] using one random draw?

Posted by Yuling Yao on Apr 19, 2022.       Tag: computing  

Quiz: you are given ONE random draw $x$ from a density $p(x)$. Can you produce an unbiased estimate of $1/E_p[X]$?

You might want to think about this quiz before reading my solution.

Apart from the mathematical fun, this type of problem comes up in stochastic approximation, in which we need an unbiased estimate from a very small number of Monte Carlo draws. Unbiasedness here means that this sampling step will be repeated many times, but each time you are shown only one sample point $x$, and we want the estimate to be unbiased under repeated sampling.

Here, the obvious wrong answer is to use $1/x$. You can try
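A quick simulation with $x \sim$ Beta$(2,2)$ (a Python sketch; the choice of Beta$(2,2)$ matches the R demo at the end of the post) shows the bias:

```python
import random

random.seed(0)
n = 100_000
# draw x ~ Beta(2, 2); in theory E[X] = 1/2 and E[1/X] = 3
xs = [random.betavariate(2, 2) for _ in range(n)]

mean_inverse = sum(1 / x for x in xs) / n  # estimates E[1/X], which is 3
inverse_mean = 1 / (sum(xs) / n)           # estimates 1/E[X], which is 2
print(mean_inverse, inverse_mean)
```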


It is clear that, say for $x \sim$ Beta$(2,2)$, $E[1/x] = 3$ while our desired quantity is $1/E[x] = 2$. Indeed, by Jensen's inequality, $E[1/x] > 1/E[x]$ for any non-degenerate positive $x$.

How about a Taylor series expansion? Something like $1/E[x] = 1 - (E[x]-1) + O\left((E[x]-1)^2\right)$? It is legitimate, but then you get the crude estimate $2 - x$, valid only if I believe that $E[x] \approx 1$, which we typically do not know in the first place.
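To see how crude this linearization is under the same Beta$(2,2)$ example (where $E[x] = 1/2$, far from 1), a short Python sketch:

```python
import random

random.seed(1)
n = 100_000
xs = [random.betavariate(2, 2) for _ in range(n)]

# the linearized estimate 2 - x targets 2 - E[X] = 1.5 here,
# not the desired 1/E[X] = 2, because E[X] = 1/2 is far from 1
approx = sum(2 - x for x in xs) / n
print(approx)
```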

I find one solution in rejection sampling. The idea is that self-normalized importance sampling is only asymptotically unbiased, while rejection sampling is unbiased even with a Monte Carlo size of 1.

Here is the method. To make it work, I need to know an upper bound of $x$; it has to be a bounded variable. Say the upper bound is $c$. Each time I see a realization $x$, I independently generate a random number $u$ from uniform$(0,1)$. If $u$ is smaller than $x/c$, accept, and report $1/x$. If $u$ is larger than $x/c$, do not report any estimate.

Then whenever I report an accepted $1/x$, it is an unbiased estimate of $1/E[x]$: acceptance happens with probability $x/c$, so the accepted draws follow the size-biased density $x\,p(x)/E[x]$, under which the expectation of $1/x$ is $\int \frac{1}{x} \frac{x\,p(x)}{E[x]}\,dx = 1/E[x]$.

Here is demo code in R for $p(x)=$ Beta$(2,2)$:

n = 1e5
c = 1  # upper bound of x under Beta(2,2)
x = rbeta(n, 2, 2)
u = runif(n)
Unbiased_estimate = rep(NA, n)
Unbiased_estimate[u < x/c] = (1/x)[u < x/c]

# check the answer:
mean(Unbiased_estimate, na.rm = TRUE) - 1/mean(x)
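For a runnable cross-check, the same demo as a self-contained Python sketch (assuming the same Beta$(2,2)$ target and upper bound $c = 1$):

```python
import random

random.seed(2)
n = 100_000
c = 1.0  # upper bound of x, valid since Beta(2, 2) lives on (0, 1)

estimates = []
for _ in range(n):
    x = random.betavariate(2, 2)
    u = random.random()
    if u < x / c:               # accept with probability x/c
        estimates.append(1 / x)

# average of the accepted reports; it should be close to 1/E[X] = 2
est = sum(estimates) / len(estimates)
print(est)
```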