Questions tagged [markov]

The Markov property refers to the memoryless property of a stochastic process.

Overview

From Wikipedia,

A stochastic process has the Markov property if the conditional probability distribution of future states of the process depends only upon the present state, not on the sequence of events that preceded it.

A Markov process (Yt) can be expressed as:

P(Y_{t+1} = y | Y_t = y_t, Y_{t-1} = y_{t-1}, …, Y_0 = y_0) = P(Y_{t+1} = y | Y_t = y_t)
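As a concrete illustration of the memoryless property, here is a minimal Python sketch that simulates a two-state chain (the transition matrix values are an arbitrary example):

```python
import random

# Example two-state chain; each row of P sums to 1.
# P[i][j] is the probability of moving from state i to state j.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(state):
    # The next state depends only on the current state -- the Markov property.
    return 0 if random.random() < P[state][0] else 1

state = 0
path = [state]
for _ in range(10):
    state = step(state)
    path.append(state)
```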

Tag usage

Please consider asking questions concerning statistics and data analysis on Stack Exchange's Cross Validated site.

255 questions
62
votes
5 answers

What is the difference between Markov chains and hidden Markov models?

What is the difference between Markov chain models and hidden Markov models? I've read the Wikipedia article, but couldn't understand the difference.
good_evening
40
votes
4 answers

Markov decision process value iteration, how does it work?

I can't get my head around the Markov decision process (using value iteration). Resources use mathematical formulas way too complex for my competencies. I want to use it on a 2D grid filled with walls (unattainable), coins (desirable) and enemies that…
Jesse Emond
24
votes
2 answers

Explain the Markov-chain algorithm in layman's terms

I don't quite understand this Markov chain… it takes two words, a prefix and a suffix, saves a list of them and makes a random word? /* Copyright (C) 1999 Lucent Technologies */ /* Excerpted from 'The Practice of Programming' */ /* by Brian W.…
20
votes
4 answers

"Anagram solver" based on statistics rather than a dictionary/table?

My problem is conceptually similar to solving anagrams, except I can't just use a dictionary lookup. I am trying to find plausible words rather than real words. I have created an N-gram model (for now, N=2) based on the letters in a bunch of text.…
user132748
12
votes
4 answers

Simple Markov Chain in R (visualization)

I'd like to do a simple first-order Markov chain in R. I know there are packages like MCMC, but couldn't find one to display it graphically. Is this even possible? It would be nice if given a transition matrix and an initial state, one can visually…
user1028531
11
votes
6 answers

Algorithms to identify Markov generated content?

Markov chains are an (almost standard) way to generate random gibberish which looks intelligent to the untrained eye. How would you go about identifying Markov-generated text from human-written text? It would be awesome if the resources you point to are…
agiliq
10
votes
5 answers

Creating a matrix of arbitrary size where rows sum to 1?

My task is to create a program that simulates a discrete time Markov Chain, for an arbitrary number of events. However, right now the part I'm struggling with is creating the right stochastic matrix that will represent the probabilities. A right…
Raleigh L.
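One common way to build such a right stochastic matrix is to fill each row with random values and then normalize it; a minimal sketch in plain Python (the function name is illustrative):

```python
import random

def random_right_stochastic(n):
    """Return an n x n matrix with nonnegative rows that each sum to 1."""
    matrix = []
    for _ in range(n):
        row = [random.random() for _ in range(n)]
        total = sum(row)
        matrix.append([x / total for x in row])  # normalize so the row sums to 1
    return matrix

M = random_right_stochastic(4)
```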
10
votes
3 answers

Data structure for Markov Decision Process

I have implemented the value iteration algorithm for simple Markov decision process Wikipedia in Python. In order to keep the structure (states, actions, transitions, rewards) of the particular Markov process and iterate over it I have used the…
JackAW
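A plain-dict structure of the kind the question describes can be sketched as follows; the tiny two-state MDP and its action names are invented for illustration:

```python
GAMMA = 0.9  # discount factor

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "s0": {"stay": [(1.0, "s0", 0.0)], "go": [(1.0, "s1", 1.0)]},
    "s1": {"stay": [(1.0, "s1", 0.0)], "go": [(1.0, "s0", 0.0)]},
}

# Value iteration: repeatedly apply the Bellman optimality update
# V(s) = max_a sum_{s'} P(s'|s,a) * (R + gamma * V(s')).
V = {s: 0.0 for s in transitions}
for _ in range(200):
    V = {
        s: max(
            sum(p * (r + GAMMA * V[nxt]) for p, nxt, r in outcomes)
            for outcomes in actions.values()
        )
        for s, actions in transitions.items()
    }
```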
8
votes
1 answer

Python libraries for on-line machine learning MDP

I am trying to devise an iterative markov decision process (MDP) agent in Python with the following characteristics: observable state I handle potential 'unknown' state by reserving some state space for answering query-type moves made by the DP…
Brian Jack
8
votes
2 answers

Markov Clustering Algorithm

I've been working through the following example of the details of the Markov Clustering algorithm: http://www.cs.ucsb.edu/~xyan/classes/CS595D-2009winter/MCL_Presentation2.pdf I feel like I have accurately represented the algorithm but I am not…
methodin
8
votes
1 answer

Hidden Markov Model predicting next observation

I have a sequence of 500 observations of the movements of a bird. I want to predict what the 501st movement of the bird would be. I searched the web and I guess this can be done by using HMM, however I do not have any experience on that subject. Can…
user975733
8
votes
1 answer

Implementing trigram markov model

Given : and the following : For : q(runs | the, dog) = 0.5 Should this not be 1 as for q(runs | the, dog) : xi=runs , xi-2=the , xi-1=dog Probability is (wi has been swapped for xi): therefore : count(the dog runs) / count(the dog) = 1 / 1…
blue-sky
8
votes
1 answer

Setting gamma and lambda in Reinforcement Learning

In any of the standard Reinforcement learning algorithms that use generalized temporal differencing (e.g. SARSA, Q-learning), the question arises as to what values to use for the lambda and gamma hyper-parameters for a specific task. I understand…
7
votes
0 answers

Calculate stationary distribution of Markov chain in Python

I've been working on a Google foobar problem for a couple of days and have all but one test passing, and I'm pretty stuck at this point. Let me know if you have any ideas! I'm using a method described here, and I have a working example up on repl.it…
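One standard approach (offered here as a generic sketch, not as the foobar solution) is power iteration: start from any distribution and repeatedly multiply it by the transition matrix until it stops changing:

```python
def stationary(P, iters=1000):
    """Approximate the stationary distribution pi (pi @ P = pi) by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        # One step of pi @ P: redistribute probability mass along the transitions.
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Example chain; its exact stationary distribution is [5/6, 1/6].
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = stationary(P)
```

Note that power iteration converges only for irreducible, aperiodic chains; for periodic or absorbing chains a direct linear-algebra solve is needed instead.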
7
votes
1 answer

Hidden Markov Model: Is it possible that the accuracy decreases as the number of states increases?

I constructed a couple of Hidden Markov Models using the Baum-Welch algorithm for an increasing number of states. I noticed that the validation score goes down for more than 8 states. So I wondered whether it's possible that the…