Concept

HMM Tagging as Decoding

Decoding, in models such as HMMs that contain hidden variables, is the task that takes as input an HMM \lambda = (A, B) and a sequence of observations O = o_1, ..., o_T, and finds the most probable sequence of states Q = q_1...q_T.

For part-of-speech tagging, the goal of HMM decoding is to choose the tag sequence t_1...t_n that is most probable given the observation sequence of n words w_1...w_n:

\hat{t}_{1:n} = \argmax_{t_1, ..., t_n} P(t_1...t_n \mid w_1...w_n)
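The standard algorithm for this decoding task is Viterbi. A minimal sketch follows; the toy tag set, vocabulary, and probability tables are illustrative assumptions, not values from the text.

```python
# Viterbi decoding sketch: find the most probable state (tag) sequence
# for an observation (word) sequence under an HMM lambda = (A, B).
# All probabilities below are made-up toy numbers for illustration.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable state sequence for obs under the HMM."""
    # V[t][s] = (best probability of any path ending in state s at time t,
    #            the previous state on that best path)
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 0.0), None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s].get(obs[t], 0.0), p)
                for p in states
            )
            V[t][s] = (prob, prev)
    # Backtrace from the best final state.
    best = max(states, key=lambda s: V[-1][s][0])
    path = [best]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# Toy two-tag HMM (assumed values, not from the text).
states = ("NOUN", "VERB")
start_p = {"NOUN": 0.6, "VERB": 0.4}                       # pi
trans_p = {"NOUN": {"NOUN": 0.3, "VERB": 0.7},             # A
           "VERB": {"NOUN": 0.8, "VERB": 0.2}}
emit_p = {"NOUN": {"dogs": 0.5, "run": 0.1},               # B
          "VERB": {"dogs": 0.1, "run": 0.6}}

print(viterbi(["dogs", "run"], states, start_p, trans_p, emit_p))
# -> ['NOUN', 'VERB']
```

Dynamic programming keeps the search over tag sequences tractable: instead of scoring all n^T paths, each time step only stores the best path probability per state.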

Updated 2021-11-07

Tags

Data Science