Normalizing constant for beta distribution with discrete prior: R code query
I am going through Bayesian Thinking with R by Jim Albert. I have a query about a code example with a beta likelihood and a discrete prior. The code calculating the posterior is:
pdisc <- function(p, prior, data) {
    s = data[1]  # number of successes
    f = data[2]  # number of failures
    #############
    p1 = p + 0.5 * (p == 0) - 0.5 * (p == 1)
    like = s * log(p1) + f * log(1 - p1)
    like = like * (p > 0) * (p < 1) - 999 * ((p == 0) * (s > 0) + (p == 1) * (f > 0))
    like = exp(like - max(like))
    #############
    product = like * prior
    post = product / sum(product)
    return(post)
}
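For reference, the function is called like this (a minimal example of my own; the support points, prior, and data are made up for illustration):

p     <- seq(0.1, 0.9, by = 0.1)   # discrete support for the proportion
prior <- rep(1/9, length(p))       # uniform discrete prior (illustrative)
data  <- c(11, 16)                 # 11 successes, 16 failures
pdisc(p, prior, data)              # posterior probabilities over the support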
My query is about the highlighted bit of code calculating the likelihood and the logic behind it (it is not explained in the book). I'm aware of the pdf of the beta distribution, and that the log likelihood is proportional to s * log(p1) + f * log(1 - p1).
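That is, as I understand it (my own working, not from the book), for s successes and f failures:

$$
L(p) \propto p^{s}(1-p)^{f},
\qquad
\log L(p) = s\log p + f\log(1-p) + \text{const}.
$$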
What is not clear to me is what the following two lines are doing. I imagine it's the normalizing constant, but again there isn't an explanation in the book.
The line

like = like * (p > 0) * (p < 1) - 999 * ((p == 0) * (s > 0) + (p == 1) * (f > 0))

takes care of the edge cases where the prior puts probability at p = 0 or p = 1. Basically, if p = 0 and successes were observed, then like = -999, and if p = 1 and failures were observed, then like = -999. I would have preferred to use -Inf rather than -999 for the log likelihood in these cases.
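You can see the indicator arithmetic directly with a small example of my own:

p <- c(0, 0.5, 1)                             # support including both endpoints
s <- 3; f <- 2                                # 3 successes, 2 failures

p1   <- p + 0.5 * (p == 0) - 0.5 * (p == 1)   # shift endpoints to 0.5 so log() stays finite
like <- s * log(p1) + f * log(1 - p1)
like <- like * (p > 0) * (p < 1) -            # zero out the placeholder values at the endpoints
  999 * ((p == 0) * (s > 0) + (p == 1) * (f > 0))  # then set -999 where the data are impossible
like
# [1] -999.000000   -3.465736 -999.000000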
The second line
like = exp(like - max(like))
is a numerically stable way to exponentiate when only the relative differences in the logged values matter. If the log likelihoods are very negative, e.g. if you had lots of successes and failures, it is possible that exp(like) would be represented as a vector of zeros on the computer. Only the relative differences matter here because the product is renormalized to sum to 1 when constructing the posterior probabilities.
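A quick illustration (the counts here are made up):

s <- 800; f <- 700                 # hypothetical large sample
p <- seq(0.05, 0.95, by = 0.1)
like <- s * log(p) + f * log(1 - p)

exp(like)               # underflows: every entry prints as 0
exp(like - max(like))   # largest entry is exp(0) = 1; relative sizes preserved

Since post = product/sum(product) divides out any constant factor, subtracting max(like) (i.e. multiplying the likelihood by a constant) leaves the posterior unchanged.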