Questions tagged [markov-models]

Markov chain

The simplest Markov model is the Markov chain. It models the state of a system with a random variable that changes through time. In this context, the Markov property states that the distribution of this variable at each step depends only on the state at the previous step, not on the earlier history. An example use of a Markov chain is Markov chain Monte Carlo (MCMC), which uses the Markov property to prove that a particular method for performing a random walk will sample from a desired target distribution.
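
For concreteness, here is a minimal Python sketch of a Markov chain; the two-state weather model and its probabilities are purely illustrative.

    import random

    # Illustrative two-state chain: transitions[s][s2] = P(next = s2 | current = s).
    transitions = {
        "sunny": {"sunny": 0.8, "rainy": 0.2},
        "rainy": {"sunny": 0.4, "rainy": 0.6},
    }

    def step(state):
        """Sample the next state given only the current state (the Markov property)."""
        r = random.random()
        cumulative = 0.0
        for nxt, p in transitions[state].items():
            cumulative += p
            if r < cumulative:
                return nxt
        return nxt  # guard against floating-point rounding

    state = "sunny"
    path = [state]
    for _ in range(10):
        state = step(state)
        path.append(state)
    print(path)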

Hidden Markov model

A hidden Markov model is a Markov chain for which the state is only partially observable. In other words, observations are related to the state of the system, but they are typically insufficient to precisely determine the state. Several well-known algorithms for hidden Markov models exist. For example, given a sequence of observations, the Viterbi algorithm will compute the most-likely corresponding sequence of states, the forward algorithm will compute the probability of the sequence of observations, and the Baum–Welch algorithm will estimate the starting probabilities, the transition function, and the observation function of a hidden Markov model. One common use is for speech recognition, where the observed data is the speech audio waveform and the hidden state is the spoken text. In this example, the Viterbi algorithm finds the most likely sequence of spoken words given the speech audio.
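
As a rough illustration, here is a minimal Python sketch of the Viterbi algorithm on a toy HMM; the states, observations, and probabilities are all made up for the example.

    def viterbi(observations, states, start_p, trans_p, emit_p):
        """Return the most likely state sequence for the observations."""
        # best[t][s] = probability of the best path ending in state s at time t
        best = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
        back = [{}]
        for t in range(1, len(observations)):
            best.append({})
            back.append({})
            for s in states:
                prob, prev = max(
                    (best[t - 1][p] * trans_p[p][s] * emit_p[s][observations[t]], p)
                    for p in states
                )
                best[t][s] = prob
                back[t][s] = prev
        # Trace the best final state back to the start.
        state = max(best[-1], key=best[-1].get)
        path = [state]
        for t in range(len(observations) - 1, 0, -1):
            state = back[t][state]
            path.append(state)
        return list(reversed(path))

    # Toy problem: infer the hidden weather from what someone carries.
    states = ("rainy", "sunny")
    start_p = {"rainy": 0.5, "sunny": 0.5}
    trans_p = {"rainy": {"rainy": 0.7, "sunny": 0.3},
               "sunny": {"rainy": 0.3, "sunny": 0.7}}
    emit_p = {"rainy": {"umbrella": 0.9, "none": 0.1},
              "sunny": {"umbrella": 0.2, "none": 0.8}}
    print(viterbi(["umbrella", "umbrella", "none"], states, start_p, trans_p, emit_p))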

Markov decision process

A Markov decision process is a Markov chain in which state transitions depend on both the current state and an action applied to the system. Typically, a Markov decision process is used to compute a policy of actions that maximizes some utility with respect to expected rewards. It is closely related to reinforcement learning, and can be solved with value iteration and related methods, as in the sketch below.
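
A minimal value-iteration sketch on a tiny, made-up MDP; the states, actions, rewards, and discount factor are all illustrative.

    # transition[state][action] = list of (probability, next_state, reward)
    transition = {
        "low":  {"wait":   [(1.0, "low", 0.0)],
                 "charge": [(1.0, "high", -1.0)]},
        "high": {"wait":   [(0.5, "high", 2.0), (0.5, "low", 2.0)],
                 "charge": [(1.0, "high", -1.0)]},
    }
    gamma = 0.9  # discount factor

    V = {s: 0.0 for s in transition}
    for _ in range(100):  # a fixed number of Bellman backups, for simplicity
        V = {
            s: max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in transition[s].values()
            )
            for s in transition
        }

    # Extract the greedy policy from the converged values.
    policy = {
        s: max(
            transition[s],
            key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in transition[s][a]),
        )
        for s in transition
    }
    print(V, policy)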

Partially observable Markov decision process

A partially observable Markov decision process (POMDP) is a Markov decision process in which the state of the system is only partially observed. Solving POMDPs exactly is computationally hard (the finite-horizon problem is PSPACE-complete), but approximation techniques have made them useful for a variety of applications, such as controlling simple agents or robots.
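
The heart of acting in a POMDP is tracking a belief state, i.e. a probability distribution over the hidden states. Here is a minimal sketch of the Bayesian belief update on a made-up two-state model.

    # After taking action a and observing o, the new belief over states is
    #   b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s)

    def update_belief(belief, action, observation, T, O):
        new_belief = {}
        for s2 in belief:
            prior = sum(T[s][action][s2] * belief[s] for s in belief)
            new_belief[s2] = O[s2][action][observation] * prior
        total = sum(new_belief.values())
        return {s: p / total for s, p in new_belief.items()}

    # Illustrative "tiger"-style model with two hidden states.
    T = {"left":  {"listen": {"left": 1.0, "right": 0.0}},
         "right": {"listen": {"left": 0.0, "right": 1.0}}}
    O = {"left":  {"listen": {"hear-left": 0.85, "hear-right": 0.15}},
         "right": {"listen": {"hear-left": 0.15, "hear-right": 0.85}}}

    belief = {"left": 0.5, "right": 0.5}
    belief = update_belief(belief, "listen", "hear-left", T, O)
    print(belief)  # the belief shifts toward "left"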

Markov random field

A Markov random field, or Markov network, may be considered a generalization of a Markov chain to multiple dimensions. In a Markov chain, the state depends only on the previous state in time, whereas in a Markov random field, each state depends on its neighbors in any of multiple directions. A Markov random field may be visualized as a field or graph of random variables, where the distribution of each random variable depends on the neighboring variables to which it is connected. More specifically, the joint distribution over all the variables in the graph factorizes as a normalized product of "clique potentials", one for each clique in the graph. Modeling a problem as a Markov random field is useful because it implies that the conditional distribution of each variable, given all the others, depends only on its neighbors in the graph.
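
A minimal sketch of this factorization on a made-up three-variable binary MRF, where the joint distribution is the normalized product of two edge potentials.

    from itertools import product

    # Edges A-B and B-C; the potential values are arbitrary illustrative numbers
    # that favor neighboring variables agreeing with each other.
    def potential(x, y):
        return 2.0 if x == y else 1.0

    # Unnormalized joint: product of clique potentials over cliques {A,B}, {B,C}.
    def unnormalized(a, b, c):
        return potential(a, b) * potential(b, c)

    # Normalizing constant (partition function) by brute-force enumeration.
    Z = sum(unnormalized(a, b, c) for a, b, c in product([0, 1], repeat=3))

    # P(A=1, B=1, C=1): both cliques agree, so this is among the likeliest states.
    print(unnormalized(1, 1, 1) / Z)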

Hierarchical Markov Models

Hierarchical Markov models can be applied to categorize human behavior at various levels of abstraction. For example, a series of simple observations, such as a person's location in a room, can be interpreted to determine more complex information, such as the task or activity the person is performing. Two kinds of hierarchical Markov models are the hierarchical hidden Markov model and the abstract hidden Markov model. Both have been used for behavior recognition, and certain conditional-independence properties between different levels of abstraction in the model allow for faster learning and inference.

100 questions
140 votes · 5 answers

What is the difference between value iteration and policy iteration?

In reinforcement learning, what is the difference between policy iteration and value iteration? As far as I understand, in value iteration, you use the Bellman equation to solve for the optimal policy, whereas, in policy iteration, you randomly…
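
For contrast with the value-iteration sketch in the tag wiki above, here is a minimal policy-iteration sketch on the same made-up two-state MDP: evaluate the current policy, then improve it greedily, and stop when it no longer changes.

    # transition[state][action] = list of (probability, next_state, reward)
    transition = {
        "low":  {"wait":   [(1.0, "low", 0.0)],
                 "charge": [(1.0, "high", -1.0)]},
        "high": {"wait":   [(0.5, "high", 2.0), (0.5, "low", 2.0)],
                 "charge": [(1.0, "high", -1.0)]},
    }
    gamma = 0.9

    def q(s, a, V):
        return sum(p * (r + gamma * V[s2]) for p, s2, r in transition[s][a])

    policy = {s: "wait" for s in transition}  # arbitrary initial policy
    while True:
        # Policy evaluation: iterate the Bellman expectation backup to convergence.
        V = {s: 0.0 for s in transition}
        for _ in range(200):
            V = {s: q(s, policy[s], V) for s in transition}
        # Policy improvement: act greedily with respect to the evaluated values.
        improved = {s: max(transition[s], key=lambda a: q(s, a, V)) for s in transition}
        if improved == policy:  # a stable policy is optimal
            break
        policy = improved
    print(policy, V)
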
24 votes · 5 answers

Generating Markov transition matrix in Python

Imagine I have a series of 4 possible Markovian states (A, B, C, D): X = [A, B, B, C, B, A, D, D, A, B, A, D, ....] How can I generate a Markov transition matrix using Python? The matrix must be 4 by 4, showing the probability of moving from…
asked by st19297
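
One common approach to the question above, sketched minimally: count observed transitions between consecutive states and normalize each row into probabilities (the sequence is the illustrative one from the question).

    import numpy as np

    X = list("ABBCBADDABAD")                     # the question's example sequence
    states = sorted(set(X))                      # ['A', 'B', 'C', 'D']
    index = {s: i for i, s in enumerate(states)}

    counts = np.zeros((len(states), len(states)))
    for prev, nxt in zip(X, X[1:]):              # count observed transitions
        counts[index[prev], index[nxt]] += 1

    # Normalize each row to turn counts into probabilities; rows with no
    # outgoing transitions are left as zeros.
    row_sums = counts.sum(axis=1, keepdims=True)
    P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
    print(P)
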
16 votes · 1 answer

Decoding sequences in a GaussianHMM

I'm playing around with hidden Markov models for a stock market prediction problem. My data matrix contains various features for a particular security:
01-01-2001, .025, .012, .01
01-02-2001, -.005, -.023, .02
I fit a simple GaussianHMM: from…
asked by datasci
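
Assuming the question uses the hmmlearn library (the import is truncated in the excerpt), a minimal fit-and-decode sketch with stand-in random data might look like this:

    import numpy as np
    from hmmlearn.hmm import GaussianHMM  # assuming the hmmlearn library

    # Random stand-in for the question's feature matrix: 200 rows, 3 features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))

    model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=50)
    model.fit(X)

    # predict() returns the Viterbi-decoded hidden state for each observation.
    hidden_states = model.predict(X)
    print(hidden_states[:10])
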
13 votes · 4 answers

Hidden Markov Models

I want to get started on HMMs, but don't know how to go about it. Can people here give me some basic pointers on where to look? More than just the theory, I'd like to do a lot of hands-on work, so I would prefer resources where I can write small code…
7 votes · 1 answer

Hidden Markov Model: Is it possible that the accuracy decreases as the number of states increases?

I constructed a couple of hidden Markov models using the Baum-Welch algorithm for an increasing number of states. I noticed that the validation score goes down once there are more than 8 states. So I wondered whether it's possible that the…
7 votes · 3 answers

How to implement a simple Markov model to assign authors to anonymous texts?

Let's say I have harvested the posts from a forum. Then I removed all the usernames and signatures, so that now I only know what post was in which thread but not who posted what, or even how many authors there are (though clearly the number of…
asked by Superbest
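
One simplified take on the problem above, assuming a supervised variant in which candidate authors with known writing samples exist (the fully unsupervised setting in the question would cluster posts instead): fit a character-bigram Markov model per author and pick the author whose model gives the anonymous text the highest smoothed log-likelihood. All texts and author names below are placeholders.

    from collections import defaultdict
    import math

    def fit_bigram_model(text):
        counts = defaultdict(lambda: defaultdict(int))
        for a, b in zip(text, text[1:]):
            counts[a][b] += 1
        return counts

    def log_likelihood(text, counts, alpha=1.0, vocab_size=128):
        # Additive (Laplace) smoothing over an assumed ASCII-sized alphabet,
        # so unseen bigrams do not zero out the score.
        total = 0.0
        for a, b in zip(text, text[1:]):
            row = counts.get(a, {})
            total += math.log((row.get(b, 0) + alpha) /
                              (sum(row.values()) + alpha * vocab_size))
        return total

    samples = {"alice": "placeholder training text for alice",
               "bob": "another placeholder text for bob"}
    models = {author: fit_bigram_model(text) for author, text in samples.items()}

    anonymous = "placeholder anonymous post"
    best = max(models, key=lambda a: log_likelihood(anonymous, models[a]))
    print(best)
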
6 votes · 1 answer

Markov Model decision process in Java

I'm writing an assisted learning algorithm in Java. I've run into a mathematical problem that I can probably solve, but because the processing will be heavy I need an optimal solution. That being said, if anyone knows an optimized library that will…
6 votes · 1 answer

Replicating the example of Markov Switching Model of Hamilton using MSwM package in R

I'm trying to estimate the basic Markov switching model of Hamilton (1989), as posted on the EViews webpage. This model is itself an exact replication of the one existing in RATS. This is the time series of the example: gnp <-…
5 votes · 2 answers

Using Markov models to convert all-caps to mixed-case and related problems

I've been thinking about using Markov techniques to restore missing information to natural language text: restore all-caps text to mixed case, or restore accents/diacritics to languages which should have them but have been converted to plain…
asked by hippietrail
5 votes · 1 answer

n-gram Markov chain transition table

I'm trying to build an n-gram Markov model from a given piece of text, and then access its transition table so I can calculate the conditional entropy for each sequence of words of length n (the grams). For example, in a 2-gram model, after…
asked by Dan
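
A minimal sketch of the bigram case: build the transition table from consecutive word pairs, then compute the conditional entropy H(next | prev) as the entropy of each row weighted by how often its previous word occurs. The sample text is illustrative.

    from collections import Counter, defaultdict
    import math

    text = "the cat sat on the mat and the cat ran".split()

    table = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        table[prev][nxt] += 1

    n_transitions = len(text) - 1
    H = 0.0
    for prev, followers in table.items():
        row_total = sum(followers.values())
        p_prev = row_total / n_transitions
        # Entropy of the next-word distribution given this previous word.
        h_row = -sum((c / row_total) * math.log2(c / row_total)
                     for c in followers.values())
        H += p_prev * h_row
    print(f"H(next | prev) = {H:.3f} bits")
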
5 votes · 2 answers

How do Markov Chains work and what is memorylessness?

How do Markov chains work? I have read the Wikipedia article on Markov chains, but the thing I don't get is memorylessness. Memorylessness states that the next state depends only on the current state and not on the sequence of events that preceded it. If…
asked by unknown
4 votes · 1 answer

Markov Switching Regression: Standard errors of the msmFit and receiving Latex Output

These are my first two questions on Stack Overflow; hopefully I am asking them the right way. First question: standard errors. I am not entirely sure how the standard errors are specified using the "MSwM" package in R. As my data suffers…
asked by jey-ronimo
4 votes · 1 answer

How to predict probability of a sentence?

How can I determine the probability of the sentence "what is a cat" with the associated PCFG (rule, probability): S -> NP VB; NN -> cat, 1; DT -> what, 1; VB -> is, .5; VB -> be, .5? How can this PCFG with the sentence be represented as a hidden Markov model? Each…
asked by blue-sky
4 votes · 0 answers

How to evaluate Markov Model accuracy

I have created the following Markov chain model, and I am struggling to prove mathematically whether or not it works correctly. Sequence: Start, state1, state2, state3, state3, state2, state1, state2, state1, end. States: start, state1,…
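
One common sanity check for a fitted chain, sketched minimally: score a held-out sequence by its log-likelihood under the estimated transition probabilities. The transition table below is illustrative, not the asker's.

    import math

    P = {
        "start":  {"state1": 1.0},
        "state1": {"state2": 0.75, "end": 0.25},
        "state2": {"state1": 0.5, "state3": 0.5},
        "state3": {"state2": 0.5, "state3": 0.5},
    }

    held_out = ["start", "state1", "state2", "state3", "state2", "state1", "end"]
    ll = sum(math.log(P[a][b]) for a, b in zip(held_out, held_out[1:]))
    print(ll)  # higher (less negative) means the model explains the sequence better
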
4 votes · 2 answers

Training Hidden Markov Models without Tagged Corpus Data

For a linguistics course we implemented part-of-speech (POS) tagging using a hidden Markov model, where the hidden variables were the parts of speech. We trained the system on some tagged data, and then tested it and compared our results with the…
asked by Claudiu