A point estimator is any function \( W(\mathbf{X}) \) of a sample. That is, any statistic is a point estimator.
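For instance (a standard example, not from the original notes), the sample mean

\[
\bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i
\]

is a point estimator of the population mean \( \mu = E X_1 \). Note that the definition imposes no requirement of quality: the constant estimator \( W(\mathbf{X}) \equiv 3 \) is also a point estimator.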
Maximum likelihood estimators:
In a broad sense, MLE and Bayesian estimation are both methods for selecting a probability model from data, and they are closely related. The former is comparatively easier to implement; the latter incorporates prior information and so depends less heavily on a single set of observations.
MLE computes the likelihood function of the given sample and selects the model with the maximum likelihood. With the likelihood as its objective function, MLE commits to the single model that best explains the available observations, which is a strong assumption.
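As a minimal sketch of this recipe (a Bernoulli model and made-up data are assumed here; the notes do not fix a model), the log-likelihood of coin-flip data can be maximized numerically over the success probability \( p \), recovering the closed-form MLE, the sample proportion:

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Hypothetical Bernoulli sample: 1 = success, 0 = failure.
    x = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])

    def neg_log_likelihood(p):
        # -log L(p | x) for i.i.d. Bernoulli(p) observations.
        return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

    # Maximizing the likelihood = minimizing its negative over (0, 1).
    res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6), method="bounded")
    print(res.x)       # numerical MLE, approx. 0.7
    print(x.mean())    # closed-form MLE: the sample proportion, 0.7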
Bayesian estimation provides a probability distribution over a subspace of probability models rather than specifying a single model. It takes as input not only the random sample but also a prior distribution over the model subspace. The output of Bayesian estimation, called the posterior, is the normalized product of the prior and the likelihood function: \( \pi(\theta \mid \mathbf{x}) \propto \pi(\theta) \, L(\theta \mid \mathbf{x}) \). The posterior typically yields sharper predictions than the prior when the prior is compatible with the likelihood. The main difficulty with the Bayesian method is justifying the choice of prior.
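Continuing the same illustration (the Beta prior below is an assumption made for the sake of the example), conjugacy makes the normalized product of prior and likelihood available in closed form: a Beta(\( a, b \)) prior on \( p \) combined with Bernoulli data yields a Beta posterior with updated counts.

    import numpy as np
    from scipy.stats import beta

    x = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])  # same hypothetical sample
    a, b = 2.0, 2.0  # assumed Beta(2, 2) prior, mildly favoring p near 1/2

    # Conjugacy: posterior = Beta(a + successes, b + failures).
    posterior = beta(a + x.sum(), b + len(x) - x.sum())
    print(posterior.mean())          # posterior mean estimate of p
    print(posterior.interval(0.95))  # 95% credible interval for p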
Def: unbiased. An estimator \( W \) of \( \tau(\theta) \) is unbiased if \( E_\theta W(\mathbf{X}) = \tau(\theta) \) for every \( \theta \in \Theta \); the bias of \( W \) is \( E_\theta W - \tau(\theta) \).
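As a standard check (assuming an i.i.d. sample with mean \( \mu \) and variance \( \sigma^2 \), which the notes do not state explicitly):

\[
E_\theta \bar{X} = \frac{1}{n} \sum_{i=1}^{n} E_\theta X_i = \mu,
\]

so \( \bar{X} \) is unbiased for \( \mu \), whereas \( \frac{1}{n} \sum_i (X_i - \bar{X})^2 \) has expectation \( \frac{n-1}{n} \sigma^2 \) and is biased for \( \sigma^2 \).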
Def: UMVU. An unbiased estimator \( W^* \) of \( \tau(\theta) \) is uniformly minimum variance unbiased (UMVU) if \( \mathrm{Var}_\theta W^* \le \mathrm{Var}_\theta W \) for every \( \theta \in \Theta \) and every other unbiased estimator \( W \) of \( \tau(\theta) \).
Notes on Completeness & Sufficiency
A statistic \( T(\mathbf{X}) \) is sufficient for \( \theta \) if the conditional distribution of \( \mathbf{X} \) given \( T \) does not depend on \( \theta \). A statistic \( T \) is complete if \( E_\theta g(T) = 0 \) for all \( \theta \) implies \( P_\theta(g(T) = 0) = 1 \) for all \( \theta \).
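A standard illustration (added here; not in the original notes): for an i.i.d. \( N(\mu, 1) \) sample,

\[
f(\mathbf{x} \mid \mu) = (2\pi)^{-n/2} \exp\!\Big( -\tfrac{1}{2} \sum_i x_i^2 \Big) \exp\!\Big( \mu \sum_i x_i - \tfrac{n \mu^2}{2} \Big),
\]

so the factorization theorem shows \( T = \sum_i X_i \) is sufficient for \( \mu \), and \( T \) is also complete because this is a full-rank exponential family.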
Thm: (Rao-Blackwell) Let \( W \) be an unbiased estimator of \( \tau(\theta) \) and let \( T \) be sufficient for \( \theta \). Then \( \phi(T) = E[W \mid T] \) is a statistic (by sufficiency, it does not depend on \( \theta \)), is unbiased for \( \tau(\theta) \), and satisfies \( \mathrm{Var}_\theta \phi(T) \le \mathrm{Var}_\theta W \) for every \( \theta \).
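A classical worked example (added for illustration; not in the original notes): estimating \( \tau(\lambda) = e^{-\lambda} = P(X_1 = 0) \) from an i.i.d. Poisson(\( \lambda \)) sample. Start from the crude unbiased estimator \( W = \mathbf{1}\{X_1 = 0\} \) and condition on the sufficient statistic \( T = \sum_{i=1}^{n} X_i \):

\[
E[W \mid T = t] = P(X_1 = 0 \mid T = t) = \Big( \frac{n-1}{n} \Big)^{t},
\]

since \( X_1 \mid T = t \sim \mathrm{Binomial}(t, 1/n) \). The improved estimator \( \big( \tfrac{n-1}{n} \big)^{T} \) is still unbiased for \( e^{-\lambda} \) and has smaller variance than \( W \).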
Thm: (Lehmann-Scheffe) If \( T \) is a complete sufficient statistic and \( \phi(T) \) is unbiased for \( \tau(\theta) \), then \( \phi(T) \) is the (essentially unique) UMVU estimator of \( \tau(\theta) \).
Cor: If \( T \) is complete sufficient and \( W \) is any unbiased estimator of \( \tau(\theta) \), then \( E[W \mid T] \) is the UMVU estimator of \( \tau(\theta) \).
Notes on Information Inequality
Def: Score function. For a family with density \( f(\mathbf{x} \mid \theta) \), the score is \( S(\theta; \mathbf{x}) = \frac{\partial}{\partial \theta} \log f(\mathbf{x} \mid \theta) \). Under regularity conditions permitting the interchange of differentiation and integration, \( E_\theta S(\theta; \mathbf{X}) = 0 \).
Def: Fisher information. \( I(\theta) = E_\theta\big[ S(\theta; \mathbf{X})^2 \big] = \mathrm{Var}_\theta S(\theta; \mathbf{X}) \); under the same regularity conditions, \( I(\theta) = -E_\theta\big[ \frac{\partial^2}{\partial \theta^2} \log f(\mathbf{X} \mid \theta) \big] \). For an i.i.d. sample of size \( n \), the information is \( n \) times the information in a single observation.
Thm: (Information inequality) Under regularity conditions, if \( W(\mathbf{X}) \) is an estimator with \( E_\theta W = \tau(\theta) \) differentiable in \( \theta \), then

\[
\mathrm{Var}_\theta W \ge \frac{[\tau'(\theta)]^2}{I(\theta)}.
\]

In particular, any unbiased estimator of \( \theta \) has variance at least \( 1 / I(\theta) \) (the Cramér-Rao lower bound).
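A worked check (a standard example, added for illustration): for an i.i.d. Bernoulli(\( p \)) sample,

\[
\log f(\mathbf{x} \mid p) = \sum_i \big[ x_i \log p + (1 - x_i) \log(1 - p) \big], \qquad I_n(p) = \frac{n}{p(1-p)},
\]

so any unbiased estimator of \( p \) has variance at least \( p(1-p)/n \). The sample proportion \( \bar{X} \) has exactly this variance, attains the bound, and is therefore UMVU.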