# Double Slit Experiment and Bayes

Nov 12, 2019
Is there anything we (statisticians) can still learn from Quantum Physics under a classical probability framework?

Read More
Bayesian Statistics, Machine Learning

As a Bayesian statistician, I almost love using the word “uncertainty,” as if it manifests my expertise in inference. That was until someone recently told me that they would stop talking to me as there...

Read More
I mostly write only statistics stuff on this blog. In part it is understandable why sparsity is desired: then we would only have statistics in our lives, and no more worries about how to...

Read More
I came up with this random thought while sitting in Andrew’s class this afternoon (as a TA). To emphasize why the Bayesian approach is different, Andrew listed a few alternatives for data analysis,...

Read More
I saw a visualization of the wealth distribution in the 2019 Credit Suisse Global Wealth Report (https://www.credit-suisse.com/about-us/en/reports-research/global-wealth-report.html).

Read More
This is a trivial math question, but it once bothered me in the construction of the Dirichlet process via the stick-breaking process. The context is that if $V \sim \mathrm{Beta}(1, M)$ and $\theta \sim \mathrm{Dirichlet}(\bar \alpha)$...

Read More
Reverse-mode causal inference is hard.

Read More
That said, xenophobia is xenophobia, no matter whether it is coated in Caucasian supremacism or in a seemingly progressive populism, or even when it is accompanied by a chauvinistic salute towards another group of straw men.

Read More
No, you cannot.

Read More
This is a JITT question in Andrew’s class: approximate (by hand) the standard deviation of $X/Y$ if $X \sim \mathrm{N}(5,1)$ and $Y \sim \mathrm{N}(1,3)$.
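One by-hand route is the first-order delta method; a minimal sketch, reading $\mathrm{N}(1,3)$ as mean 1, sd 3 (an assumption about the notation):

```python
import math

def delta_method_sd_ratio(mu_x, sd_x, mu_y, sd_y):
    """First-order (delta-method) sd of X/Y for independent X and Y:
    sd(X/Y) ~ |mu_x/mu_y| * sqrt((sd_x/mu_x)^2 + (sd_y/mu_y)^2)."""
    return abs(mu_x / mu_y) * math.sqrt((sd_x / mu_x) ** 2 + (sd_y / mu_y) ** 2)

approx_sd = delta_method_sd_ratio(5, 1, 1, 3)  # about 15.03
```

Note the caveat: with sd 3, $Y$ puts non-negligible mass near zero, so $X/Y$ is heavy-tailed and its true variance does not exist; the delta-method number is only a formal first-order answer.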

Read More
What if I run 4 chains and find $\hat R \gg 1$?
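As a reminder of what $\hat R$ measures, here is a basic (non-split) Gelman-Rubin computation; the toy chains stuck in two separate modes are my own illustration of how multimodality inflates the diagnostic:

```python
import statistics

def rhat(chains):
    """Gelman-Rubin potential scale reduction factor for equal-length
    chains of scalar draws (basic version, without the split-chain refinement)."""
    m, n = len(chains), len(chains[0])
    chain_means = [statistics.fmean(c) for c in chains]
    grand_mean = statistics.fmean(chain_means)
    B = n / (m - 1) * sum((cm - grand_mean) ** 2 for cm in chain_means)  # between-chain
    W = statistics.fmean(statistics.variance(c) for c in chains)         # within-chain
    var_plus = (n - 1) / n * W + B / n
    return (var_plus / W) ** 0.5

# four chains stuck in two separate modes: R-hat lands far above 1
stuck = [[0.0, 0.1, -0.1, 0.05]] * 2 + [[10.0, 10.1, 9.9, 10.05]] * 2
```

When the chains mix over the same distribution, $B$ is small and $\hat R$ is close to 1; chains trapped in different modes drive it far above 1.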

Read More
Like importance sampling, ABC is in principle immune to the metastability that MCMC suffers from, but ABC is also problematic: we have no idea what it will converge to when the model is misspecified.
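For reference, the simplest rejection-ABC scheme the excerpt alludes to can be sketched on a toy normal-mean model (the function names, the tolerance, and the flat prior are all my own choices):

```python
import random
import statistics

def rejection_abc(observed_mean, prior_draw, simulate, eps, n_tries, rng):
    """Basic rejection ABC: keep a prior draw theta whenever the summary
    statistic (here the sample mean) of data simulated under theta lands
    within eps of the observed summary."""
    accepted = []
    for _ in range(n_tries):
        theta = prior_draw(rng)
        sim = simulate(theta, rng)
        if abs(statistics.fmean(sim) - observed_mean) < eps:
            accepted.append(theta)
    return accepted

rng = random.Random(1)
obs_mean = 2.0  # pretend the observed data had sample mean 2
post = rejection_abc(
    obs_mean,
    prior_draw=lambda r: r.uniform(-5, 5),                       # flat prior
    simulate=lambda th, r: [r.gauss(th, 1) for _ in range(50)],  # N(theta, 1)
    eps=0.3,
    n_tries=2000,
    rng=rng,
)
```

Because every accepted draw comes fresh from the prior, there is no Markov chain to get trapped in a mode; but under misspecification the accepted draws concentrate wherever the chosen summary happens to match, which is the concern raised above.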

Read More
When I talked to someone about the old proof of the invariance of the odds ratio under retrospective sampling, I mentioned that the estimation of $q(x)$ is achieved by its non-parametric MLE, i.e., its empirical distribution (see...

Read More
Some loss functions are invariant under domain adaptation, which suggests we can indeed learn the optimal model for the population from a non-representative sample without sacrificing extrapolation.

Read More
The odds ratio from a case-control study is exactly the same as in a cohort study, so I could fit a retrospective logistic regression as if it were prospective and report its MLE or Bayesian posterior distribution. But given the shift in the sampling distribution, should I reweight it anyway?
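The invariance claim can be seen directly on a 2x2 table: sampling controls at any rate scales both control cells by the same factor, which cancels in the cross-ratio. The counts below are made up for illustration:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
    a, b = exposed cases / exposed controls; c, d = unexposed cases / controls."""
    return (a * d) / (b * c)

# made-up full-cohort counts
a, b, c, d = 30, 970, 10, 990
cohort_or = odds_ratio(a, b, c, d)

# a case-control design keeps the cases but samples only a fraction f of
# controls; in expectation both control cells shrink by the same factor f,
# which cancels in the cross-ratio
f = 0.02
cc_or = odds_ratio(a, b * f, c, d * f)
```

This is also why the slope of a retrospective logistic regression is unaffected while the intercept absorbs the sampling rate; only quantities beyond the odds ratio (e.g., baseline risk) need reweighting.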

Read More