I don't think there is ever a single "best place" to learn all the points of a new concept/idea. You need to go through many sources (books, web pages, journals, etc.) to really understand something.
Once you are done learning what HMMs can do, you can check out this post, which discusses selecting optimal parameters for training an HMM classifier.
- Here is a really good explanation of what HMMs are all about. You need some background in Probability (Bayesian Networks), though. That website, Autonlab (from Carnegie Mellon University), has really good slides on Data Mining, Machine Learning, and Pattern Recognition, which are helpful to Math, Statistics, and Computer Science researchers.
- Here is another link (from the University of Otago; they make really good tutorials that start from the basics) that explains HMMs along with the basics of Probability and a naive C/C++ implementation.
- Here is another link from UC Berkeley that explains Hidden Markov Models (from the Practical Machine Learning class). This one clearly states the learning and testing/classification steps without much deep explanation (of how the equations are derived) and gives quick formulae to start with. That makes it very much unlike other tutorials on the web, which I found confusing.
- Another tutorial here (Utah State University) describes HMMs and also discusses issues in implementing them (the floating-point underflow problem).
- The Matlab documentation website does a good job of explaining Hidden Markov Models at a basic level, along with code (to use this code, you need the Statistics Toolbox in Matlab).
- This article (University of Cambridge) compares Hidden Markov Models with Dynamic Bayesian Networks. It also covers Computer Vision applications of these stochastic models.
- The tutorial most often cited when writing other tutorials is Rabiner's "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition".
Here is a set of videos on YouTube that explain Hidden Markov Models in a more mathematical way. The other videos from that author also cover Machine Learning topics, explained very mathematically and clearly!
There are three problems to solve with a Hidden Markov Model: State Estimation, Decoding (finding the Most Probable Path, MPP), and Training/Learning the HMM.
These three problems are solved using the following techniques:
- State Estimation: the Forward-Backward algorithm is used for State Estimation (computing the probability of the observations under the model, and the probability of being in each state at each time, given a set of observations). This step is also known as Evaluation and is what you use for Classification: score the observations against each trained HMM and pick the most likely one when testing the classifier you built.
- Decoding or Most Probable Path: the Viterbi algorithm is used to estimate the Most Probable Path (given a set of observations, the single sequence of hidden states that best explains them). This step is also known as Decoding, and it can likewise be used when testing the classifier you built.
- Training/Learning the HMM: the Baum-Welch (Expectation-Maximization) algorithm is used for learning an HMM. Given a set of observations, it estimates the maximum-likelihood HMM parameters that may have produced them (i.e., it adjusts the model to fit the data). This is also known as Training.
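To make the first problem concrete, here is a minimal Python sketch of the forward algorithm on a toy two-state weather model (every parameter value below is made up purely for illustration):

```python
# Toy two-state weather HMM; all numbers are made up for illustration.
states = ["Rainy", "Sunny"]
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.9},
          "Sunny": {"walk": 0.6, "shop": 0.4}}

def forward(observations):
    """Return P(observations | model) via the forward algorithm."""
    # alpha[s] = P(o_1..o_t, state_t = s), updated one time step at a time
    alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    for o in observations[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[r] * trans_p[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())

print(forward(["walk", "shop", "walk"]))  # prints 0.05697
```

Summing the forward variables at the final time step gives P(observations | model). In a real implementation you would work in log space or scale the alphas, because of exactly the floating-point underflow problem mentioned in the Utah State tutorial above.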
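Viterbi decoding, for the second problem, differs from the forward pass only in replacing the sum over predecessor states with a max (plus backpointers to recover the path). A minimal sketch, again on a made-up toy weather model:

```python
# Toy two-state weather HMM; all numbers are made up for illustration.
states = ["Rainy", "Sunny"]
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.9},
          "Sunny": {"walk": 0.6, "shop": 0.4}}

def viterbi(observations):
    """Return (probability, path) of the single most probable state sequence."""
    # delta[s] = probability of the best path that ends in state s
    delta = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    paths = {s: [s] for s in states}
    for o in observations[1:]:
        new_delta, new_paths = {}, {}
        for s in states:
            # max (instead of the forward algorithm's sum) over predecessors
            best_prev = max(states, key=lambda r: delta[r] * trans_p[r][s])
            new_delta[s] = delta[best_prev] * trans_p[best_prev][s] * emit_p[s][o]
            new_paths[s] = paths[best_prev] + [s]
        delta, paths = new_delta, new_paths
    best = max(states, key=lambda s: delta[s])
    return delta[best], paths[best]

prob, path = viterbi(["walk", "shop", "walk"])
print(path)  # prints ['Sunny', 'Sunny', 'Sunny']
```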
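Baum-Welch, for the third problem, alternates an E-step (forward-backward posteriors) with an M-step (re-estimating parameters from expected counts). Here is a minimal single-sequence sketch in Python; the two-state model, symbols, and initial parameters are all made up for illustration. EM guarantees the likelihood never decreases across iterations:

```python
# Made-up initial guesses for a two-state, two-symbol HMM; Baum-Welch refines them.
states = ["A", "B"]
symbols = ["x", "y"]
start_p = {"A": 0.6, "B": 0.4}
trans_p = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit_p = {"A": {"x": 0.7, "y": 0.3}, "B": {"x": 0.2, "y": 0.8}}

def forward_probs(obs):
    # alphas[t][s] = P(o_1..o_t, state_t = s)
    alphas = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    for o in obs[1:]:
        prev = alphas[-1]
        alphas.append({s: emit_p[s][o] * sum(prev[r] * trans_p[r][s] for r in states)
                       for s in states})
    return alphas

def backward_probs(obs):
    # betas[t][s] = P(o_{t+1}..o_T | state_t = s)
    betas = [{s: 1.0 for s in states}]
    for o in reversed(obs[1:]):
        nxt = betas[0]
        betas.insert(0, {s: sum(trans_p[s][r] * emit_p[r][o] * nxt[r] for r in states)
                         for s in states})
    return betas

def baum_welch_step(obs):
    """One EM iteration: returns re-estimated (start, trans, emit) plus the
    likelihood of obs under the current (pre-update) parameters."""
    alphas, betas = forward_probs(obs), backward_probs(obs)
    likelihood = sum(alphas[-1][s] for s in states)
    T = len(obs)
    # E-step: state posteriors (gamma) and transition posteriors (xi)
    gamma = [{s: alphas[t][s] * betas[t][s] / likelihood for s in states}
             for t in range(T)]
    xi = [{r: {s: alphas[t][r] * trans_p[r][s] * emit_p[s][obs[t + 1]]
                  * betas[t + 1][s] / likelihood for s in states}
           for r in states} for t in range(T - 1)]
    # M-step: expected counts -> new parameters
    new_start = {s: gamma[0][s] for s in states}
    new_trans = {r: {s: sum(x[r][s] for x in xi) / sum(g[r] for g in gamma[:-1])
                     for s in states} for r in states}
    new_emit = {s: {k: sum(g[s] for g, o in zip(gamma, obs) if o == k)
                       / sum(g[s] for g in gamma) for k in symbols} for s in states}
    return new_start, new_trans, new_emit, likelihood

obs = ["x", "x", "y", "x", "y", "y", "y", "x"]
lls = []
for _ in range(15):
    start_p, trans_p, emit_p, ll = baum_welch_step(obs)
    lls.append(ll)  # non-decreasing sequence of likelihoods
```

As with the forward algorithm, a practical implementation would scale or use log probabilities to avoid underflow on long sequences, and would train on many observation sequences rather than one.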
For Computer Vision folks, these slides explain how to apply HMMs to action recognition.
Once you are comfortable with the basics of HMMs, you might want to look into this paper, in which the authors describe how to select the initial parameters of an HMM.
A background in Statistical Pattern Recognition and Stochastic Processes will definitely help in understanding Hidden Markov Models. HMMs are widely used in Speech Recognition and in Computer Vision (Gesture Recognition and Action Recognition).
There are third-party libraries available on the web for use in your project, and there is also a Matlab toolkit available.