As advertised to Heriot-Watt University and University of Edinburgh students in

Computational Mathematical Finance

Financial Modelling and Optimisation

Operational Research

Statistics

Financial Mathematics

Quantitative Finance and Mathematics

Actuarial Science and Management

Quantitative Financial Risk Management
Please find below a list of the completed project placements.
Summer 2020 Placements
Hymans Robertson LLP – Modelling the CPIH Measure
Supervisor:
Dr Mayukh Gayen, Risk and Modelling Consultant
One participating student from
Heriot-Watt University
With the growing debate over the choice of RPI as the measure of the general level of prices in the economy and as the underlying index of index-linked bonds, various competing measures have been proposed. The CPI, the Bank of England’s chosen inflation measure, has replaced RPI in some instances. However, for pension schemes, liabilities are linked to an ‘inflation index’, which leaves open the debate over whether the chosen measure should be RPI or CPI. Recent statistical analyses show that CPI is lower than RPI by about 100 bps. Linking liabilities to CPI would therefore mean lower increases in payments to those in receipt of an annuity or pension, and firms would provide less for liabilities linked to the lower measure. Indeed, there is no definitive ‘true’ measure of inflation. A new measure, CPIH, may therefore replace both RPI and CPI in pension liability calculations and future government inflation-linked bond issues.
The project should contain an overview of the history of inflation measures and analyse the relative (de)merits of these three indices – RPI, CPI and CPIH in the UK.
The main part is modelling the CPIH index in software. We are particularly looking into calibration and implementation of a Stochastic Volatility Jump Diffusion (SVJD) model. Data is available from the ONS.
A demonstration of the impact of different indices on a financial portfolio can also be considered.
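As a starting point for the modelling work, a minimal Euler-scheme simulation of an SVJD index path might look like the sketch below. All parameter values here are illustrative placeholders, not ONS calibrations; a real study would calibrate them to CPIH data.

```python
import numpy as np

def simulate_svjd(s0=100.0, mu=0.02, kappa=2.0, theta=0.04, xi=0.3,
                  rho=-0.5, lam=0.5, jump_mu=-0.02, jump_sigma=0.05,
                  n_steps=120, dt=1.0 / 12.0, seed=42):
    """Euler simulation of a Stochastic Volatility Jump Diffusion path.

    dS/S = mu dt + sqrt(v) dW1 + J dN  (normal jumps J in log-space)
    dv   = kappa (theta - v) dt + xi sqrt(v) dW2, corr(dW1, dW2) = rho
    Parameter values are illustrative, not calibrated to ONS data.
    """
    rng = np.random.default_rng(seed)
    s = np.empty(n_steps + 1)
    v = np.empty(n_steps + 1)
    s[0], v[0] = s0, theta
    for t in range(n_steps):
        z1 = rng.standard_normal()
        z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal()
        n_jumps = rng.poisson(lam * dt)
        jump = np.sum(rng.normal(jump_mu, jump_sigma, n_jumps))
        vt = max(v[t], 0.0)  # full truncation keeps the variance usable
        s[t + 1] = s[t] * np.exp((mu - 0.5 * vt) * dt
                                 + np.sqrt(vt * dt) * z1 + jump)
        v[t + 1] = vt + kappa * (theta - vt) * dt + xi * np.sqrt(vt * dt) * z2
    return s, v
```

Calibrating `kappa`, `theta`, `xi`, `rho` and the jump parameters to the ONS series would be the substantive part of the project; the scheme above only fixes the simulation mechanics.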
Moody's Analytics – The use of the DSICE model (Dynamic Stochastic Integrated Model of Climate and Economy)
Supervisor:
Dr Jiajia Cui
Participating student from
University of Edinburgh
At Moody’s Analytics, we help our clients understand how the uncertainty of financial markets can impact their business. This project will involve working with our award-winning Economic Scenario Generator (ESG) software.
The Moody’s Analytics ESG is a multi-asset class, multi-risk factor, multi-time step scenario generator which combines structural and reduced-form modelling approaches in order to capture physical as well as risk-neutral dynamics. In particular, it projects economic as well as financial variables using advanced time series models whose interdependencies are captured by a correlation matrix.
This project will use Nordhaus’ DICE-2007 model as a starting point for our model. It is well known in the integrated assessment modelling (IAM) community and widely used in the IAM literature. We extend it by adding both economic and climate shocks to the DICE-2007 framework.
We would like to build a DSICE model prototype, run tests with economic and climate shocks, and then run it with ESG data to check their impact on the model.
The project has three primary aims:
• Build a prototype of the DSICE model
• Run the DSICE model with economic/climate shocks (see examples in the paper mentioned below)
• Apply ESG data to the DSICE model and stress test it.
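To illustrate what "adding shocks to DICE" can mean in code, here is a deliberately simplified one-sector sketch with a multiplicative economic shock and a rare climate tipping shock. The functional forms and parameter values are illustrative assumptions, not the DICE-2007 specification.

```python
import numpy as np

def dsice_step(capital, temp, rng, alpha=0.3, savings=0.22,
               depreciation=0.1, tfp=1.0, damage_coef=0.0028,
               econ_sigma=0.02, tip_prob=0.005, tip_loss=0.05,
               warming=0.0005):
    """One annual step of a toy DICE-style economy with two shocks.

    Economic shock: lognormal multiplicative noise on productivity.
    Climate shock:  a small-probability tipping event that scales
    output down by tip_loss in the year it occurs.
    Output is damaged by temperature via 1 / (1 + damage_coef * T^2).
    """
    econ_shock = np.exp(econ_sigma * rng.standard_normal())
    damage = 1.0 / (1.0 + damage_coef * temp ** 2)
    output = tfp * econ_shock * damage * capital ** alpha
    if rng.random() < tip_prob:          # climate tipping shock
        output *= 1.0 - tip_loss
    capital = (1.0 - depreciation) * capital + savings * output
    temp = temp + warming * output       # emissions proportional to output
    return capital, temp, output
```

The real DSICE framework adds optimal abatement and a full carbon cycle on top of dynamics like these; the sketch only shows where the two shock channels enter the recursion.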
Crover Ltd – Analysis of the impact of grain monitoring on the economics of IPM for grain storage
Supervisor:
Lorenzo Conti, Managing Director
Participating student from
Heriot-Watt University
Cereal grains are the basis of staple food, yet post-harvest losses during long-term storage are exceptionally high, above 20% in Scotland and worldwide. Pests are to blame, with grain moisture content and temperature being the most significant factors.
Crover Ltd has developed the world’s first remote probing device (a ‘Crover’) for the monitoring of stored cereal grains. A Crover is a small robotic device able to move within grains stored in bulk, such as wheat and barley in sheds and silos. Using on-board sensors to measure local parameters, it builds a full map of conditions within the bulk of the grains. Unlike current grain monitoring solutions that measure only one variable and have limited reach, Crover’s remote monitoring device provides real-time data across a range of measurements, initially temperature and moisture, throughout the whole silo. This gives early detection of potential spoilage, allowing proactive management to reduce losses and maintain quality.
During the project, the student will be tasked with determining precise financial estimates of the value of Crover data to grain storekeepers and of its impact on grain storage economics. This is expected to be done using the analytical model designed by Adam et al. (2006) on the cost and value of Integrated Pest Management (IPM) systems, in order to calculate the cost/benefit (the economic threshold for intervention in grains) and the discount under different scenarios: doing nothing, using Crover monitoring with IPM, and using old practices.
Starting from the basic model, the student will be encouraged to iteratively add more precise data to the IPM hypothesis (hence reducing the uncertainty of spoilage and of the subsequent decision making), and to attempt to incorporate into the model a quantitative measure of damage from moulds and information about quality discounts from standard grain futures contracts.
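The scenario comparison could be organised around a simple discounted cost/benefit function like the sketch below. The spoilage rates, costs and discount rate are made-up illustrative numbers, and the real analysis would follow Adam et al. (2006) rather than this toy form.

```python
def storage_npv(tonnes, price_per_tonne, monthly_spoilage,
                monitoring_cost=0.0, intervention_cost=0.0,
                months=9, annual_rate=0.05):
    """Discounted net value of a stored grain lot under one scenario.

    Simplifying assumptions: spoiled grain is worthless, spoilage is a
    constant monthly rate, and monitoring/intervention costs are paid
    up front.
    """
    monthly_rate = (1 + annual_rate) ** (1 / 12) - 1
    remaining = tonnes * (1 - monthly_spoilage) ** months
    sale = remaining * price_per_tonne / (1 + monthly_rate) ** months
    return sale - monitoring_cost - intervention_cost

# Illustrative comparison for a 1,000 t lot at 150 GBP/t:
do_nothing = storage_npv(1000, 150, monthly_spoilage=0.02)
crover_ipm = storage_npv(1000, 150, monthly_spoilage=0.003,
                         monitoring_cost=2000, intervention_cost=1000)
old_practice = storage_npv(1000, 150, monthly_spoilage=0.01,
                           intervention_cost=1500)
```

The economic threshold for intervention is then the spoilage level at which the avoided loss just covers the monitoring and intervention cost, which this function lets the student solve for numerically.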
Bank of England – Quantifying the risk of using the same models for both pricing and risk management
Participating student from
Heriot-Watt University
Insurance companies typically use third-party software to model the likely impact of natural catastrophes on their insurance portfolios. These models are relied on not only to price the business but also in risk management as tools to monitor exposure, design risk mitigation (reinsurance) and also in many cases to set regulatory capital.
There are a limited number of suppliers of this software, with two firms dominating the market. There are qualitative reasons to believe that such models introduce systemic risk into the insurance industry, and empirical evidence that the models are not a good reflection of reality: in one recent example, the two largest firms gave initial estimates for an event that had occurred and the ranges quoted did not overlap; in another, subsequent model releases significantly changed the loss distribution (e.g. the New Zealand earthquakes).
The concept of the “winner’s curse” is a known feature of the industry: it is highly price-sensitive, so a firm is more likely to win business that it has underpriced than business that it has overpriced.
The nature of the risk is low frequency/high severity, and so it is very difficult to draw any firm conclusions as to the possible quantitative impact of any systemic risk, as the underlying “true” risk profiles are not known.
A technique that has proved informative in similar circumstances is to perform a study where the “true” distribution is defined mathematically, with this distribution being sampled to produce a series of datasets. The analytic tool being used is then tested to see how close it gets to the true distribution.
The proposal for this project would be to set up a model of a closed system. This system would consist of two companies competing for a number of contracts. The “true” distribution of each contract would be defined, and each company would set a price according to its model. The “model” should be created so that across the full portfolio it is a good fit to the true distribution (in particular at the mean and the 99.5th percentile), but there will be errors for each contract. The model for each company should be different.
The two companies should then be allocated a portfolio of contracts based on the lowest average cost, and the overall portfolio exposure should be determined.
The true distributions should then be used to stochastically generate a large number of “event years”, and for this to be used to determine what the actual distribution of the two portfolios is.
Of particular interest will be the difference in the tail of the distribution (greater than 90th percentile, and again in particular the 99.5th percentile) as this is the area of interest for most risk mitigation techniques and regulatory capital setting.
Possible areas of investigation would be the impact of the number of contracts being modelled and the impact of using distributions with varying tail characteristics for the true distribution. At an advanced level, it would be expected that over time the two company models would converge on the lower price for each risk.
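The closed-system study described above can be prototyped in a few lines. In this sketch the "true" risks are lognormal, each firm's model error is independent multiplicative noise on the true mean (so each pricing model is roughly unbiased across the portfolio but wrong contract by contract), and contracts go to the cheaper quote. All distributional choices and parameter values are illustrative assumptions.

```python
import numpy as np

def closed_market_study(n_contracts=200, n_years=20_000,
                        model_error=0.3, seed=1):
    """Closed system: two insurers quote on lognormal 'true' risks.

    Each quote is the true expected loss times independent lognormal
    noise; contracts are won by the cheaper quote.  Event years are
    then drawn from the true distributions to see what firm A's
    portfolio actually looks like versus the premium it collected.
    """
    rng = np.random.default_rng(seed)
    sigma = 1.0
    true_mu = rng.uniform(0.0, 1.0, n_contracts)       # log-locations
    true_mean = np.exp(true_mu + 0.5 * sigma ** 2)     # true expected loss
    quote_a = true_mean * np.exp(model_error * rng.standard_normal(n_contracts))
    quote_b = true_mean * np.exp(model_error * rng.standard_normal(n_contracts))
    won_by_a = quote_a < quote_b
    # stochastically generate "event years" from the true distributions
    losses = rng.lognormal(true_mu, sigma, size=(n_years, n_contracts))
    portfolio_a = losses[:, won_by_a].sum(axis=1)
    premium_a = quote_a[won_by_a].sum()
    return premium_a, portfolio_a.mean(), np.percentile(portfolio_a, 99.5)
```

In runs of this toy, the premium firm A collects falls short of its portfolio's mean loss even though its model is unbiased in aggregate, which is the winner's curse at work; comparing the 99.5th percentile against the mean then isolates the tail effect that matters for regulatory capital.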
Moody’s Analytics – Multi-curve scenario generation
Supervisor:
Dr Jiajia Cui, Associate Director, Research
Participating student from
University of Edinburgh
Moody’s Economic Scenario Generator (SG) is a multi-asset class, multi-risk factor, multi-time step scenario generator which combines structural and reduced-form modelling approaches in order to capture real-world as well as risk-neutral dynamics. The recognition of credit risk in benchmark yield curves led to the use of multiple yield curves within an economy, e.g. one risk-free curve used for discounting and another – credit-risky – curve used as a benchmark for floating rates. For scenario generation, this poses the challenge of generating scenarios of multiple nominal yield curves in a consistent manner within a single economy.
The scope of the project is to implement and calibrate an additive and/or multiplicative spread model on top of the two-factor Hull-White yield curve model. The project will involve:
• Implementation of the additive/multiplicative spread model in a scripting language (e.g. Matlab, Python, R).
• Calibration of the spread model to curve pairs such as (LIBOR, IFRS17), provided by Moody’s.
• Analysis of yield curve dynamics in the calibrated model.
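A minimal sketch of the scenario-generation step is given below, assuming a two-factor Gaussian (G2++-style) short-rate model with an additive Vasicek-type spread. The parameter values are illustrative, not Moody's calibrations.

```python
import numpy as np

def simulate_multicurve(n_steps=120, dt=1.0 / 12.0, seed=0,
                        a=0.5, sigma_x=0.01, b=0.05, sigma_y=0.008,
                        phi=0.02, a_s=1.0, s_bar=0.005, sigma_s=0.002):
    """One joint scenario of a risk-free and a credit-risky short rate.

    Risk-free rate: phi plus two mean-reverting Gaussian factors
    (a G2++-style two-factor model).  Risky rate: risk-free rate plus
    an additive Vasicek-type spread, so the two curves move
    consistently within a single economy.
    """
    rng = np.random.default_rng(seed)
    x, y, s = 0.0, 0.0, s_bar
    riskfree, risky = [], []
    for _ in range(n_steps):
        x += -a * x * dt + sigma_x * np.sqrt(dt) * rng.standard_normal()
        y += -b * y * dt + sigma_y * np.sqrt(dt) * rng.standard_normal()
        s += a_s * (s_bar - s) * dt + sigma_s * np.sqrt(dt) * rng.standard_normal()
        riskfree.append(phi + x + y)
        risky.append(phi + x + y + s)   # additive spread on the same factors
    return np.array(riskfree), np.array(risky)
```

A multiplicative variant would instead scale the risk-free curve by a mean-reverting positive factor; calibration to observed curve pairs would then pin down `a_s`, `s_bar` and `sigma_s`.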
Moody’s Analytics – Performance of variance reduction methods in scenario generation
Supervisor:
Dr Tamas Matrai, Associate Director, Research
Participating student from
University of Edinburgh
At Moody’s Analytics, we help our clients understand how the uncertainty of financial markets can impact their business. This project will involve working with our award-winning Economic Scenario Generator software.
Moody’s Economic Scenario Generator (SG) is a multi-asset class, multi-risk factor, multi-time step scenario generator which combines structural and reduced-form modelling approaches in order to capture real-world as well as risk-neutral dynamics. As both scenario generation and pricing/risk analysis based on the scenario set are computationally intensive, it is important to make efficient use of the runtime budget. Variance reduction techniques lend themselves to minimising Monte Carlo noise within a fixed runtime budget.
However, variance reduction methods improve performance only if they are applied to problems satisfying particular method-specific conditions; otherwise, they worsen performance. Yet variance reduction methods are routinely applied without testing for their applicability.
The purpose of this project is to test whether the variance reduction methods implemented in the SG software (e.g. antithetic sampling, control variates) improve performance for the models in the SG.
The project will involve:
• Generation of real-world and risk-neutral scenarios using the SG.
• Analysis of Monte Carlo noise in risk and pricing under various variance reduction methods.
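As a small illustration of how method-specific these gains are, the sketch below prices a European call with and without antithetic sampling: the payoff is monotone in the normal draw, so the antithetic pairs are negatively correlated and the standard error falls. The contract parameters are illustrative and the model is plain Black-Scholes, not the SG itself.

```python
import numpy as np

def mc_call(n_paths, antithetic=False, s0=100.0, k=100.0,
            r=0.01, sigma=0.2, t=1.0, seed=7):
    """Monte Carlo price of a European call under Black-Scholes dynamics.

    With antithetic=True each draw z is paired with -z and the two
    payoffs are averaged; monotonicity of the payoff in z makes the
    pair negatively correlated, cutting the estimator variance (at the
    cost of two payoff evaluations per draw).
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)

    def payoff(zz):
        st = s0 * np.exp((r - 0.5 * sigma ** 2) * t
                         + sigma * np.sqrt(t) * zz)
        return np.maximum(st - k, 0.0)

    samples = 0.5 * (payoff(z) + payoff(-z)) if antithetic else payoff(z)
    disc = np.exp(-r * t)
    price = disc * samples.mean()
    stderr = disc * samples.std(ddof=1) / np.sqrt(n_paths)
    return price, stderr
```

For a payoff that is not monotone in the driving draws (a straddle, say), the same pairing can fail to reduce, or even increase, the variance, which is exactly the applicability question the project tests inside the SG.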
Moody’s Analytics – The use of robust optimisation in portfolio optimisation
Supervisor:
Dr Jiajia Cui, Associate Director, Research
Participating student from
University of Edinburgh
At Moody’s Analytics, we help our clients understand how the uncertainty of financial markets can impact their business. This project will involve working with our award-winning Economic Scenario Generator (ESG) software.
The Moody’s Analytics ESG is a multi-asset class, multi-risk factor, multi-time step scenario generator which combines structural and reduced-form modelling approaches in order to capture physical as well as risk-neutral dynamics. In particular, it projects economic as well as financial variables using advanced time series models whose interdependencies are captured by a correlation matrix.
This project will look at the robust portfolio selection approach, which systematically combats the sensitivity of the optimal portfolio to statistical and modelling errors in the estimates of uncertain or unknown parameters.
In traditional mean-variance portfolio optimisation, there are many uncertainties which can impact decision making, such as uncertainty in the calibration parameters, in the choice of model and in the choice of reward metric. If volatility is chosen as the risk measure, then the covariance between assets is required as an input to calculate the risk of the portfolio.
Our objective is to improve the quality of decisions made in this environment by understanding and quantifying the uncertainty (coming from the covariance matrix) using robust optimisation. Furthermore, time permitting, we will test the stability of mean-variance optimisation (MVO) under the robust approach against the case where the correlation matrix is calculated using an “autoencoder”, a machine learning method originally designed to learn a representation of a dataset by training a network to drop the “noise”.
The project has three primary aims:
• To quantify the uncertainty (from the covariance matrix) in optimisation practice;
• To build a robust mean-variance portfolio selection model;
• To compare the performance of traditional MVO with that of the robust optimisation.
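One standard way to make the mean-variance problem robust to covariance uncertainty is to optimise against the worst covariance in a ball around the estimate; for a spectral-norm ball of radius delta this reduces to the classical solution with the covariance replaced by cov + delta·I. The sketch below uses this formulation with illustrative numbers and allows long/short weights; it is one possible robust counterpart, not the project's prescribed method.

```python
import numpy as np

def robust_mv_weights(mu, cov, risk_aversion=4.0, delta=0.0):
    """Mean-variance weights under worst-case covariance uncertainty.

    Uncertainty set: all matrices cov + D with spectral norm of D at
    most delta.  The worst case adds delta * w'w to the portfolio
    variance, so the robust optimum is the classical solution with
    cov replaced by cov + delta * I.  Weights are normalised to sum
    to one; short positions are allowed.
    """
    n = len(mu)
    w = np.linalg.solve(risk_aversion * (cov + delta * np.eye(n)), mu)
    return w / w.sum()
```

On a deliberately near-singular covariance matrix, the classical weights (delta = 0) take large offsetting long/short positions, while a small delta pulls the solution towards a less extreme, more stable allocation, which is precisely the stability question the project investigates.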