Monte Carlo simulations, Markov chains, transition matrices, probability distributions, log-normal: does this jargon make you uncomfortable? You have reached the right place to learn these concepts in simple terms, along with their applications in financial risk management.

At a high level, what are the financial risks we are talking about?

- Market risk- the risk of financial loss owing to fluctuations in price or yield. These are not rare events, but occurrences of huge losses are infrequent.
- Credit risk- the risk of default or deterioration in the borrower’s creditworthiness. These are generally rare events, but when they occur, they mainly result in huge losses.

The concepts are explained in terms of how they are used at work. Yubi, being a borrower-lender marketplace, plays a crucial role in assisting lenders in estimating a borrower’s credit risk.

A lender or investor needs to know the risk involved in their investment. It helps them:

- Price the investment correctly.
- Set aside adequate capital, so that financial institutions remain stable.

**So, what are Probability Distributions?**

A probability distribution is simply a plot of the different values a variable can take against their corresponding probabilities/ frequencies.

Let us take three scenarios to understand it better.

- 3 + 2 = 5. This is what mathematics says! So, we are 100% sure of the outcome (a certain event, with 100% probability). Hence, there is no probability distribution in this case.
- Now let us say we record an athlete’s time to complete a 50 m race, rounded off to the first decimal, with possible values of 4.6, 4.7, 4.8, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4, 5.5. In this case, we have 10 discrete outcomes possible. For simplicity, let us assume all outcomes are equally probable, i.e., the probability of arriving at each value is the same (1/10). So, the probability distribution will look as below. Since probability is assigned to 10 different discrete values, this distribution is known as a discrete probability distribution.

- More realistically, let us record the actual time the athlete takes to complete the 50 m race without any round-off; the time can then take any value in the interval, giving a continuous probability distribution. Since there are infinitely many possible values (in this case, between 4.5 and 5.5), the probability of any single value is technically very low, i.e., close to 0. So, with a continuous probability distribution, practitioners are always interested in finding the probability of a range of values.

In this case, we could be interested in finding the probability of values > 4.7, or values < 5.2, or values in the range of 4.7 to 5.2, rather than of a single discrete value.
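
As a small sketch, assuming the athlete's time is uniformly distributed between 4.5 s and 5.5 s (a simplifying assumption for illustration), the range probabilities above can be computed with `scipy.stats`:

```python
from scipy.stats import uniform

# Assume time ~ Uniform(4.5, 5.5): loc = left edge, scale = width of the interval
time_dist = uniform(loc=4.5, scale=1.0)

# P(X = exactly 5.0) is effectively 0 for a continuous variable,
# so we ask for the probability of a range instead
p_range = time_dist.cdf(5.2) - time_dist.cdf(4.7)   # P(4.7 < X < 5.2) = 0.5
p_above = 1 - time_dist.cdf(4.7)                    # P(X > 4.7) = 0.8
print(p_range, p_above)
```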

Why is it important to know the distribution of variables or the probability of the values in a variable?

It’s easier to understand with another example: if we want to find the mean of two values, 4 and 10, it would be 7. But what if the probability of getting 4 is 40% and of getting 10 is 60%? Then the mean would be 0.4 × 4 + 0.6 × 10 = 7.6. This makes it essential to know the probabilities of the values in a series.
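
The probability-weighted mean in that example can be checked in a couple of lines of NumPy:

```python
import numpy as np

values = [4, 10]
probs = [0.40, 0.60]   # probability of each value

simple_mean = np.mean(values)                       # 7.0 - ignores probabilities
weighted_mean = np.average(values, weights=probs)   # 0.4*4 + 0.6*10 = 7.6
print(simple_mean, weighted_mean)
```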

**Using Normal Distribution for Market Risk**

Let us assume I have a portfolio of stocks, and I want some sense of how much market risk I am taking, assuming the return of my portfolio follows a normal distribution (a symmetrical bell-shaped curve whose shape depends on the mean and standard deviation).

Based on historical performance,

- Mean of the annual portfolio returns is 9.46% and
- Standard Deviation of the annual portfolio returns is 11.75%

- I can be 78.96% confident that I would make a positive annual return.
- 90% VaR* (Market Risk) of my portfolio is 5.60%
- 95% VaR* (Market Risk) of my portfolio is 9.86%
- 99% VaR* (Market Risk) of my portfolio is 17.87%

*VaR (Value at Risk), in simple terms, is a concept in risk management that helps us estimate the maximum loss at a particular confidence level. For example, if the 90% VaR of a portfolio is Rs. 100, the maximum loss the portfolio can face at 90% confidence is Rs. 100.*
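
Under the normal-distribution assumption, the figures above can be reproduced with `scipy.stats.norm`, using the stated mean and standard deviation (the results match the article's numbers to within rounding):

```python
from scipy.stats import norm

mu, sigma = 9.46, 11.75   # mean and std dev of annual returns, in %

# Probability of a positive annual return: P(R > 0)
p_positive = 1 - norm.cdf(0, loc=mu, scale=sigma)   # ~78.96%
print(f"P(positive return) = {p_positive:.2%}")

# VaR at confidence c is the loss L such that P(return < -L) = 1 - c,
# i.e., the negated (1 - c) quantile of the return distribution
var_90 = -norm.ppf(0.10, loc=mu, scale=sigma)   # ~5.60
var_95 = -norm.ppf(0.05, loc=mu, scale=sigma)   # ~9.87
var_99 = -norm.ppf(0.01, loc=mu, scale=sigma)   # ~17.87
print(f"90% VaR: {var_90:.2f}%, 95% VaR: {var_95:.2f}%, 99% VaR: {var_99:.2f}%")
```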

**Using Log Normal Distribution for Credit Risk**

Again, based on empirical studies analyzing historical bankruptcy and bond default data, we understand that credit losses follow a log-normal distribution (a distribution bounded at zero with a long right tail; its shape again depends on the mean and standard deviation).

Suppose I have estimated a credit instrument’s expected loss (EL) to be 5%, and I am interested in estimating the Unexpected Loss (UL) at different confidence intervals. In simple terms, unexpected losses are the worst-case losses with a very low probability of occurrence.

The below graphical representation shows the losses at different confidence intervals. It can be easily observed from the chart that the losses increase exponentially at high confidence intervals.
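
A minimal sketch of the same idea: assuming losses follow a log-normal distribution with the 5% expected loss above and an assumed shape parameter (the `s = 1.0` below is illustrative, not from the article), the loss quantiles grow sharply at high confidence levels:

```python
import numpy as np
from scipy.stats import lognorm

el = 0.05   # expected loss of 5% (mean of the loss distribution)
s = 1.0     # assumed sigma of the underlying normal - illustrative only

# For a log-normal, mean = exp(mu + s**2 / 2); pick mu so the mean equals the EL
mu = np.log(el) - s**2 / 2
loss_dist = lognorm(s=s, scale=np.exp(mu))

q90, q95, q99 = (loss_dist.ppf(c) for c in (0.90, 0.95, 0.99))
print(f"90%: {q90:.2%}, 95%: {q95:.2%}, 99%: {q99:.2%}")
```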

Next, we will cover simulations: why and how these concepts are leveraged at Yubi to estimate credit risk.

One problem is that if we rely only on historical events to estimate the loss, we will miss modelling unexpected events that have not happened yet. This is where simulations come into the picture: they help model unforeseen circumstances that have not occurred before.

**What are Monte Carlo simulations? How to use them and how not?**

Intuitively, it is common sense combined with a powerful machine.

Let’s say you have a coin and need to know whether it is biased, i.e., you need to know the expected probability of a *Head*. This can be solved using Monte Carlo simulation.

The common-sense part is that you can toss the coin many times and take an average to get the expected probability of a Head. By now, you will also know that the more times you toss the coin, the closer the expected probability will be to the actual probability.

The powerful-machine part is that it is not humanly possible to toss a coin, say, 10K or 1L (100,000) times. But once we construct the situation in a machine, the tireless machine can run the simulations any number of times. If we have one or more stochastic input variables and are interested in understanding the range of possible outputs, MC simulation is one of the best techniques available. It is even more powerful when multiple input variables can assume different probability distributions (you can also build correlations into your input variables).

With the range of values of your output variable, you can make a frequency plot to understand the probability of different values of the output variable.

- It is used to get the range of possible outcomes across different p-values.
- It works when one or more input variables are not deterministic (they have to be stochastic).
- It is most useful when multiple input variables assume different probability distributions.
- It is used to understand the possible extreme values and their corresponding probabilities; in other words, extreme values at the tail-end confidence intervals.

At Yubi, we use simulations to get the Unexpected Losses (UL) at different p values.

**What is a Markov Chain?**

It is a stochastic model describing a sequence of possible events in which the probability of each state depends only on the previous state.

Key points to note here are

- Stochastic means random: the next step may be complex or impossible to predict precisely. Another way to look at it is that we cannot predict the next step from any feature (as in a supervised ML problem). Instead, it is generally characterized by a mathematical function obtained by observing how the process behaves over a long period. The noise in our usual function-based ML problems is also stochastic (it cannot be predicted).
- The sum of probabilities at any particular state = 1. This means two key things:
  - We know all the events possible at a particular state.
  - Based on historical data, we can calculate the probabilities of all events at a particular state.

- The events should form a logical sequence, one after another.
- The probability of one state depends only on the previous state.
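
A toy illustration of these properties (the states and probabilities below are made up for illustration): the next period's distribution depends only on the current state, via a row-stochastic transition matrix:

```python
import numpy as np

states = ["0 DPD", "SMA0", "SMA1"]

# Each row sums to 1: all possible next states from a given current state
T = np.array([
    [0.90, 0.10, 0.00],   # from 0 DPD
    [0.30, 0.50, 0.20],   # from SMA0
    [0.00, 0.40, 0.60],   # from SMA1
])
assert np.allclose(T.sum(axis=1), 1.0)

# If a loan is currently in SMA0, the next-period distribution
# depends only on that state (the Markov property)
current = np.array([0.0, 1.0, 0.0])
next_period = current @ T
print(dict(zip(states, next_period)))   # {'0 DPD': 0.3, 'SMA0': 0.5, 'SMA1': 0.2}
```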

**Why Markov Chain for modeling Expected Credit Loss (ECL)?**

Credit risk modeling broadly involves estimation of Expected (EL) and Unexpected Losses (UL). The Markov Chain method is used in modeling Expected Losses, especially for a pool of loans that is generally diversified across good and bad loans.

- Good or bad loan classification can be predicted with structured ML models: the PD of each loan can be predicted accurately, given loan/ borrower characteristics. But whenever we have a pool of loans (like PTC, DA, or co-lending), additional considerations are needed, such as correlation between loan defaults. Single-loan default models cannot study such patterns, so the Markov Chain is a good starting point.
- Retail loans are generally amortizing in nature, with a clear logical sequence in delinquency: 0 DPD, SMA0, SMA1, SMA2, NPA, etc. {“SMA0”: 1 to 30 DPD, “SMA1”: 31 to 60 DPD, “SMA2”: 61 to 90 DPD, “NPA”: 90+ DPD}
- In shorter-tenor loans, where the number of repayments is small, it is challenging to study the customer’s payment characteristics on the current loan. This makes the Markov assumption reasonable: the probability of the current state depends only on the previous state.
- We are well aware of the possible delinquency levels in a particular state. For example, in period 1, the only two possible states are 0 DPD and SMA0, so the sum of the probabilities of 0 DPD and SMA0 in period 1 = 1.

*Note*: SMA means Special Mention Accounts; before a loan turns into an NPA, it is kept in different DPD buckets. DPD stands for Days Past Due, the number of days the account has been overdue on repayments.

**How is the Markov Chain used for estimating Expected Credit Loss (ECL)?**

The only input required for this process is the historical repayment behavior of similar loans.

The process steps involved in using the Markov Chain for estimating Expected Losses (EL) are listed below:

- Get the delinquency states for each period by comparing the demand and the historical repayment behavior.
- Construct a transition matrix (TM) for each period based on the respective delinquency states.
- Apply the constructed TMs to the new pool of loans’ expected repayments (assuming no defaults and no prepayments) to arrive at transaction-level behavioural cash flows.
- Use Monte Carlo simulation to simulate the cash flow losses and take an average across simulations. Monte Carlo is helpful here to get losses at different confidence intervals.
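
The first steps above can be sketched as propagating a pool's state distribution through per-period transition matrices. The two-state matrices below are hypothetical, chosen only to show the mechanics:

```python
import numpy as np

# Hypothetical per-period transition matrices over states [performing, defaulted]
tms = [
    np.array([[0.98, 0.02], [0.00, 1.00]]),   # period 1
    np.array([[0.97, 0.03], [0.00, 1.00]]),   # period 2
    np.array([[0.96, 0.04], [0.00, 1.00]]),   # period 3
]

# Start with the whole pool performing, then apply each period's TM in turn
state = np.array([1.0, 0.0])
for tm in tms:
    state = state @ tm

expected_default_rate = state[1]   # share of the pool expected to default
print(f"Expected default rate after 3 periods: {expected_default_rate:.2%}")
```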

Here is how we leverage the Markov Chain concept at Yubi, with an example of a simple transition matrix.

| Categories | Prepayment | Current | PAR 0 | PAR 30 | PAR 60 | PAR 90 | PAR 120 | PAR 150 | PAR 180 | PAR 180+ |
|---|---|---|---|---|---|---|---|---|---|---|
| Prepayment | 100.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% |
| Current | 0.30% | 97.70% | 2.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% |
| PAR 0 | 0.20% | 10.60% | 68.40% | 20.80% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% |
| PAR 30 | 0.30% | 4.00% | 7.60% | 50.60% | 37.50% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% |
| PAR 60 | 0.40% | 2.20% | 1.20% | 6.40% | 40.60% | 49.30% | 0.00% | 0.00% | 0.00% | 0.00% |
| PAR 90 | 0.90% | 1.40% | 0.80% | 1.00% | 4.30% | 39.00% | 52.60% | 0.00% | 0.00% | 0.00% |
| PAR 120 | 0.00% | 0.00% | 2.20% | 0.00% | 0.10% | 0.10% | 8.70% | 89.00% | 0.00% | 0.00% |
| PAR 150 | 0.10% | 3.70% | 0.90% | 0.00% | 0.00% | 0.00% | 0.00% | 0.30% | 95.00% | 0.00% |
| PAR 180 | 0.00% | 3.60% | 0.40% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.10% | 95.90% |
| PAR 180+ | 0.00% | 1.50% | 0.10% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 98.40% |


- The rows are the loan states as of the previous (n-1) period, and columns are the loan states as of the current period (n)
- The cells 2.00% and 20.80% denote the % of loans that moved from the ‘Current’ to the ‘PAR 0’ state, and from ‘PAR 0’ to ‘PAR 30’. Since these loans are moving to higher delinquency buckets, this is also called the roll-forward rate.
- The cell 7.60% denotes the % of loans that moved from ‘PAR 30’ back to the ‘PAR 0’ state. Since these loans are moving to lower delinquency buckets, this is also called the roll-back rate.
- A transition matrix is captured for each period, and in the loss simulation for the current transaction, the corresponding transition matrix is used for that period.
- Based on the transition matrices, the path of each loan is simulated: random numbers are generated to determine the state a loan moves to.
- One set of paths represents the states of all loans in the transaction across periods. To avoid biases, many such sets are generated via Monte Carlo simulation.
- Periodic cash flows are ascertained for each state of each simulation, and a loss distribution for the portfolio is thus estimated.
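
Steps like these can be sketched by sampling each loan's next state from the relevant transition-matrix row and repeating across many simulations. The three-state matrix below is a simplified, illustrative version, not the full matrix above:

```python
import numpy as np

rng = np.random.default_rng(42)

states = ["Current", "PAR 0", "PAR 30"]
T = np.array([
    [0.978, 0.020, 0.002],   # from Current
    [0.110, 0.682, 0.208],   # from PAR 0
    [0.080, 0.076, 0.844],   # from PAR 30
])

def simulate_path(n_periods, start=0):
    """Walk one loan through n_periods by sampling from the TM rows."""
    path = [start]
    for _ in range(n_periods):
        path.append(rng.choice(len(states), p=T[path[-1]]))
    return path

# Many simulated paths -> distribution of end states, hence a loss distribution
n_sims = 5000
end_states = [simulate_path(12)[-1] for _ in range(n_sims)]
share_par30 = end_states.count(2) / n_sims
print(f"Share of simulations ending in PAR 30: {share_par30:.1%}")
```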

Features can be brought into this transition matrix by building different matrices for loans grouped into various segments.

**Code Snippet for Monte Carlo Simulations**

```python
# packages & modules
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import time
```

Try different numbers of simulations and see the results:

```python
num_simulations = 10000

def mc_coins(n):
    sim_op = ["H" if np.random.rand() > 0.5 else "T" for _ in range(n)]
    exp_p = sim_op.count("H") / len(sim_op)
    return exp_p

print(mc_coins(num_simulations))
```

See how the expected probability comes closer to the theoretical probability as we increase the number of simulations:

```python
num_simulations_lst = [1, 10, 100, 1000, 10000, 100000, 1000000]
exp_p_lst = [mc_coins(sim) for sim in num_simulations_lst]

x_axis = np.arange(len(exp_p_lst))
width = 1
plt.bar(x_axis, exp_p_lst, width, align="center")
plt.xticks(x_axis, num_simulations_lst)
plt.axhline(0.5, linestyle="--")   # theoretical probability of a Head
plt.show()
```