| Also, depending on representation, you might be able to use some matrix
manipulation package to solve an eigenvalue/eigenvector problem.
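For instance (a rough sketch only, with a made-up 3-state chain and assuming a
numpy-style package is on hand), the steady-state probabilities fall out as the
eigenvector of the transposed transition matrix belonging to eigenvalue 1:

    import numpy as np

    # Hypothetical 3-state transition matrix: P[i, j] = Prob(next state j | state i).
    # Rows sum to 1; the numbers are purely illustrative.
    P = np.array([[0.9, 0.1, 0.0],
                  [0.2, 0.7, 0.1],
                  [0.0, 0.3, 0.7]])

    # The stationary distribution pi satisfies pi P = pi, i.e. it is an
    # eigenvector of P transposed (a left eigenvector of P) with eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()        # normalise so the probabilities sum to 1
    print(pi)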
What -is- the application, anyway, if we may ask?
carl
|
| Hello,
This is a single reply to .1-0.4. First of all, for (0.1), let
me define Markov chains; before that, let me define a Markov process.
A Markov process is a stochastic process whose future state depends
solely on the present state and not on how the process arrived in
that state.
If the state space is discrete, then the Markov process is called a
Markov chain.
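As a toy illustration (the state labels and numbers below are made up), a
two-state chain is fully described by its one-step transition probabilities,
and the next state is drawn using only the current state's row:

    import numpy as np

    # Toy 2-state Markov chain: state 0 = "working", 1 = "broken" (illustrative only).
    # P[i, j] = probability of moving from state i to state j in one step.
    P = np.array([[0.8, 0.2],
                  [0.5, 0.5]])

    rng = np.random.default_rng(0)
    state, path = 0, [0]
    for _ in range(10):
        # Only the current state matters here - the Markov property.
        state = int(rng.choice(2, p=P[state]))
        path.append(state)
    print(path)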
If you are interested in Markov chains/stochastic processes, please
refer to:
L. Kleinrock, Queueing Systems, Vols. I & II.
K. Trivedi, Probability & Statistics with Reliability, Queueing, and
Computer Science Applications.
-----------------------
I was interested in modeling a threshold-based congestion control
mechanism. Without going into much detail, let me give an overview
of the problem.
Consider a single queue with a feedback loop to the input. The service
rate of the queue is mu. When the queue occupancy is less than HIGH,
the arrival rate into the queue is lambda. When the occupancy is more
than HIGH, the queue goes into a throttle mode, and the arrival rate
is throttled to lambda_c. The arrival stays throttled until the
occupancy drops below LOW, at which point the throttle is removed and
the arrival rate increases back to lambda.
For such a system (apart from other things), I would like to determine
how the throughput varies with different choices of HIGH, LOW, and
lambda_c.
This problem requires modeling the queue occupancy distribution.
Basically, I would like to represent the queue state by (n, m), where
n is the number in the queue, and m is the state of the output link of
the queue.
n: 0, 1, ..., HIGH
m: 0, 1 (indicating whether or not the link is congested)
lambda and lambda_c are the arrival rates when m = 0 and m = 1,
respectively.
pi(n, m) = probability that the queue size is n and the link state is m.
The size of the Markov chain increases with the value of HIGH. Thus,
computing the transition probabilities by hand will be possible only
for small values of HIGH. That is why I am looking for some tools that
can numerically solve such chains.
I think (as also stated by 0.3 & 0.4) I can use some matrix
manipulation packages. Any help (pointers) will be appreciated.
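To make the structure concrete, here is roughly what I have in mind, done with
such a package (a sketch only: the buffer size K, the thresholds, and the rates
below are placeholders, and numpy-style linear algebra is assumed):

    import numpy as np

    # Sketch of a continuous-time Markov chain for the hysteresis queue.
    K, HIGH, LOW = 20, 12, 4        # buffer size and thresholds (hypothetical)
    lam, lam_c, mu = 0.9, 0.3, 1.0  # normal/throttled arrival rates, service rate

    def idx(n, m):                   # map state (n, m) onto a matrix index
        return 2 * n + m

    N = 2 * (K + 1)
    Q = np.zeros((N, N))             # generator: Q[i, j] = rate from state i to j

    for n in range(K + 1):
        for m in (0, 1):
            i = idx(n, m)
            # Arrival: the throttle switches on once the occupancy exceeds HIGH.
            if n < K:
                rate = lam if m == 0 else lam_c
                m_next = 1 if (m == 1 or n + 1 > HIGH) else 0
                Q[i, idx(n + 1, m_next)] += rate
            # Departure: the throttle switches off once the occupancy drops below LOW.
            if n > 0:
                m_next = 0 if (m == 0 or n - 1 < LOW) else 1
                Q[i, idx(n - 1, m_next)] += mu
    Q -= np.diag(Q.sum(axis=1))      # diagonal entries make each row sum to zero

    # Stationary distribution: solve pi Q = 0 together with sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(N)])
    b = np.zeros(N + 1); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]

    busy = sum(pi[idx(n, m)] for n in range(1, K + 1) for m in (0, 1))
    print("throughput ~", mu * busy)

Varying HIGH, LOW, and lam_c in a loop would then give the throughput curves I
am after.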
Thanks,
Krishna
226-5406
[Posted by WWW Notes gateway]
|
| re .1, additional to .5
<brain_dump rusty>
In other words, imagine that a system exists in a certain state and,
for any other state, has some probability of transitioning to that
state (possibly zero for some or most pairs of states). This area of
maths answers such questions as: what is the long-term behaviour of
the system?
One example is, I think, the "Drunkard's Walk". In the 1D version you
start at the origin on the X-axis and each step is a random step either
forwards (+1) or backwards (-1) with equal probability. What is the
average displacement from the origin in the long term? Answer: 0 if
memory serves - proving that drinking gets you nowhere. (-: In the 2D
version you wander over the 2D lattice of points with integer coords,
and again if memory serves the root-mean-square distance from the
origin after n steps [the distance is a random variable] is sqrt(n).
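A quick numerical check of those half-remembered claims (simulation sketch
only; the walk and step counts are arbitrary, and numpy is assumed):

    import numpy as np

    rng = np.random.default_rng(0)
    walks, steps = 5000, 400

    # 1D walk: each step is +1 or -1 with equal probability.
    x = rng.choice([-1, 1], size=(walks, steps)).sum(axis=1)
    print("1D mean displacement:", x.mean())                # close to 0
    print("1D RMS distance:     ", np.sqrt((x**2).mean()))  # close to sqrt(steps)

    # 2D walk: one unit N, S, E or W on the integer lattice at each step.
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    pos = moves[rng.integers(4, size=(walks, steps))].sum(axis=1)
    r2 = (pos ** 2).sum(axis=1)
    print("2D RMS distance:     ", np.sqrt(r2.mean()), "vs sqrt(steps) =", np.sqrt(steps))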
In some cases solving this problem means finding what M^n converges to
as n grows, where M is the (necessarily square) transition matrix,
which in turn can be investigated by finding its eigenvalues/vectors
(a small numerical sketch follows below).
</brain_dump>
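A rough illustration of that last point (the 3-state matrix is made up, numpy
assumed): every row of M^n converges to the same limiting distribution as n
grows, and that distribution is the left eigenvector of M for eigenvalue 1.

    import numpy as np

    # Illustrative 3-state transition matrix M (rows sum to 1).
    M = np.array([[0.50, 0.50, 0.00],
                  [0.25, 0.50, 0.25],
                  [0.00, 0.50, 0.50]])

    # Every row of M^n converges to the stationary distribution [0.25, 0.5, 0.25].
    for n in (1, 4, 16, 64):
        print(n, np.linalg.matrix_power(M, n)[0])   # first row of M^n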
P.S. A famous textbook on Markov processes was banned by the
communist authorities in Czechoslovakia because "process" also means "trial"
(as in a court case) and they assumed that the book was about the trial
of some dissident named Markov. (Admittedly the name is suitably Slavic.)
|