
As advertised to Heriot-Watt University, Strathclyde University and University of Edinburgh students in


  • Computational Mathematical Finance

  • Financial Modelling and Optimisation

  • Operational Research

  • Statistics

  • Financial Mathematics

  • Quantitative Finance and Mathematics

  • Actuarial Science and Management

  • Quantitative Financial Risk Management


Please find below a list of the completed project placements.

Summer 2022 Placements

Moody’s Analytics - Parameter identification in market-consistent calibrations of equity models

Supervisor:

Dr Tamás Mátrai, Associate Director, Research


One participating student from the University of Edinburgh

Moody’s Scenario Generator (SG) is a multi-asset-class, multi-risk-factor, multi-time-step scenario generator used by practitioners to generate real-world as well as market-consistent economic scenario sets for risk analysis and asset/liability valuation. The SG aims to offer maximum flexibility in the parameterization of its models to empower users to express their own views about the future evolution of financial markets. However, this flexibility can be meaningfully exploited only if, during calibration, the calibration target sets are sufficient to determine all parameters; otherwise the models become overparameterized, which can lead to parameter instability.

The project aims to explore the problem of parameter identification for the market-consistent calibration of the stochastic volatility jump-diffusion (SVJD) equity model of the SG, where the eight parameters of the model are calibrated to 45 equity option market price data points on a quarterly basis. Questions:

What impact do the model parameters have on the model prices of options in the calibration target set?

For each parameter of the model, which calibration targets are most suitable for its identification?

Is the parameterization of the SVJD model redundant in practice, and if yes, is there an extension of the calibration target set which eliminates the redundancy?

The project offers an opportunity to learn about the SVJD model, model calibration in general, and the analytical and numerical techniques that can be used to answer such questions.
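
As a flavour of the kind of analysis involved, the sketch below probes identifiability numerically: it builds a finite-difference Jacobian of the model option prices with respect to the parameters and inspects its singular values, with near-zero singular values flagging parameter directions the 45 calibration targets cannot pin down. The pricing function svjd_prices is a placeholder for a user-supplied SVJD pricer, not the SG's actual implementation.

```python
import numpy as np

def price_jacobian(svjd_prices, params, rel_step=1e-4):
    """Central-difference Jacobian of the model option prices with respect
    to the model parameters; svjd_prices(params) is a placeholder pricer
    returning the 45 model prices for the calibration target set."""
    params = np.asarray(params, dtype=float)
    base = np.asarray(svjd_prices(params))
    jac = np.zeros((base.size, params.size))
    for j in range(params.size):
        h = rel_step * max(abs(params[j]), 1.0)
        up, down = params.copy(), params.copy()
        up[j] += h
        down[j] -= h
        jac[:, j] = (np.asarray(svjd_prices(up)) - np.asarray(svjd_prices(down))) / (2 * h)
    return jac

def identifiability_report(jac):
    """Normalised singular values of the Jacobian: values close to zero
    indicate parameter combinations the calibration targets cannot identify."""
    s = np.linalg.svd(jac, compute_uv=False)
    return s / s.max()
```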

Moody's Analytics - Constrained stochastic gradient methods for calibrating scenario generators

Supervisor:

Nicholas Miller-Smith, Associate Director, Research


One participating student from the University of Edinburgh

Moody’s Analytics produces software designed to simulate stochastically the future development of key economic and financial variables such as inflation rates, interest rates, credit spreads and equity returns, often over multi-year horizons.  These models are used by insurers and other organisations to value their liabilities and to assess the risk of their investments.

The models are calibrated to targets derived from theory, market-implied data, and historical experience.  Often targets relate to the distribution of model output simulations.  (For example, we set a target for the long-term dispersion of 10-year bond yields.)  However, the relationship between the underlying model input parameters and the model output is in general non-linear and non-analytic.  Fitting to targets using standard optimisation techniques such as gradient descent or Levenberg-Marquardt can therefore be time consuming, particularly where a large number of parameters must be calibrated simultaneously, since the model must be run many times at each optimisation step to build up the distribution from which the relevant statistics are derived.

We wish to explore alternative optimisation approaches.  In particular, techniques used to tune neural networks could be relevant, since developers there face an analogous problem in which training is performed over a large number of individual cases.  We have in mind stochastic gradient descent and its subsequent developments (Adam, AdaGrad, RMSProp, etc.).

One complexity is that the optimisation problems we face often incorporate constraints (for example acceptable bounds on parameter values, and the need for positive definite correlation matrices).  These are not in general incorporated in standard implementations of the above techniques.
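
As a rough illustration of the kind of approach in question, the sketch below applies Adam with a simple projection step onto box constraints after each update; handling the positive-definite correlation constraint would additionally require something like a nearest-correlation-matrix projection (e.g. Higham's algorithm). The function noisy_grad is an assumed input returning a stochastic estimate of the gradient of the calibration loss, for example from a small batch of scenarios.

```python
import numpy as np

def project_to_box(x, lower, upper):
    """Project parameters back into their acceptable bounds after each step."""
    return np.minimum(np.maximum(x, lower), upper)

def projected_adam(noisy_grad, x0, lower, upper, lr=0.01, beta1=0.9, beta2=0.999,
                   eps=1e-8, n_steps=500):
    """Adam with a projection step: noisy_grad(x) returns a stochastic estimate
    of the gradient of the calibration loss (e.g. from a small scenario batch)."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, n_steps + 1):
        g = noisy_grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)
        v_hat = v / (1 - beta2**t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
        x = project_to_box(x, lower, upper)   # enforce the parameter bounds
    return x
```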

GeoCapita - Data analysis for forecasting carbon prices and using the forecasts to better manage carbon emissions price/cost financial exposure

Supervisor:

Douglas Prentice

CEO, Climate Capital Tech


One participating student from Strathclyde University

GeoCapita is developing a virtual trading platform, https://www.climatecapital.tech/, based in Edinburgh, London and Geneva and regulated by the Financial Conduct Authority in London, that involves data analysis and related services. The platform is an online marketplace for environmental services whose users (public, private, community, academic, researchers, financiers, insurers, consultants, etc.) have activities relating to energy and the environment.  They can come together on the platform to obtain data, data analysis and news, and can trade with one another virtually on the platform.

One core aspect of the platform will be to provide data analysis about the environment.  Part of this involves forecasting the future price of carbon emissions.  It goes beyond price forecasting alone, also covering forecasts of emissions quantities for a company, a country or globally, analysis of carbon intensity, monitoring, reporting and verification, and legal analysis of changing legislation and the legal, commercial and economic impacts of climate law.

Project description:

Sample analyses using Monte Carlo and Real Options for capital allocation into actual projects that are currently in fund raising.  Examples might include:

Solar projects in Spain
Energy efficiency projects in the UK
Biogas projects in Poland
Carbon offset project from Japan
Methane capture project in Canada

The company shall decide upon the project or projects.

Methodologies involved as part of the project:

Monte Carlo simulation
Real options analysis
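
As a hedged illustration of how these two methodologies might combine, the sketch below values the real option to defer investment in a hypothetical offset project, with the carbon price simulated under geometric Brownian motion. All function names and parameter values here are assumptions made up for illustration; in practice they would come from the data analysis described above and from the project chosen by the company.

```python
import numpy as np

def defer_option_value(p0, mu, sigma, t_decision, invest_cost, tonnes,
                       r, n_paths=100_000, seed=0):
    """Monte Carlo value of the (real) option to defer a carbon project until
    t_decision, assuming the carbon price follows geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # simulated carbon price at the decision date
    p_t = p0 * np.exp((mu - 0.5 * sigma**2) * t_decision
                      + sigma * np.sqrt(t_decision) * z)
    # invest only if the credits the project generates are worth more than the cost
    payoff = np.maximum(tonnes * p_t - invest_cost, 0.0)
    return np.exp(-r * t_decision) * payoff.mean()

# Purely illustrative numbers: EUR 80/t price, 20% volatility, decision in 2 years,
# 100,000 tonnes of offsets, EUR 7m upfront cost, 2% discount rate.
print(defer_option_value(p0=80, mu=0.03, sigma=0.20, t_decision=2.0,
                         invest_cost=7e6, tonnes=100_000, r=0.02))
```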

Project References
Why do carbon prices and price volatility change?
A stock market trading system based on foreign and domestic information
OTC trades and liquidity in the European carbon market: more than meets the eye
Liquidity and resolution of uncertainty in the European carbon futures market

Hymans Robertson LLP - Marking LPI risk to models

Supervisor:

Dr Mayukh Gayen

Risk and Modelling Consultant


One participating student from Heriot-Watt University

What is Limited Price Indexation (LPI)?

LPI is an index used by (defined benefit) pension schemes in the UK to pay inflation-linked benefits, often with caps and floors. For example, in year-on-year style LPI, the benefit payments can increase in line with RPI floored at zero and capped at 3.0%, so even when RPI is at 5.0%, the increase is limited to 3.0%.  Consequently, LPI-linked benefits have a different relationship with interest rates than pure inflation-linked benefits, which makes it difficult to manage LPI risk in a portfolio. Understanding the risks related to LPI therefore needs separate attention.
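
A minimal sketch of the year-on-year LPI(0%, 3%) mechanics described above, assuming a path of annual RPI inflation rates is given:

```python
def lpi_index(rpi_path, floor=0.0, cap=0.03):
    """Year-on-year LPI(0%, 3%) index implied by a path of annual RPI rates:
    each year's increase is RPI floored at 0% and capped at 3%."""
    index = 1.0
    for rpi in rpi_path:
        index *= 1.0 + min(max(rpi, floor), cap)
    return index

# RPI of 5% is capped at 3%, RPI of -1% is floored at 0%:
print(lpi_index([0.05, 0.02, -0.01]))   # 1.03 * 1.02 * 1.00
```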

What are the outputs of LPI modelling?

Outputs are the LPI rates, which can be used for valuation of liabilities, setting investment strategy, assessing capital requirements, or for ongoing risk management of an LPI-linked portfolio.

What is the problem?

Due to the scarcity and cost of LPI-linked instruments (e.g., LPI swaps), the LPI derivative market is very illiquid and prices are not readily available. Marked-to-market modelling is therefore not very reliable going forward, and in recent years there has been much focus on marked-to-model (i.e., real-world modelling) approaches.

What are the project aims?

A review of LPI modelling and research on marked-to-model methods.   

What will it involve?

Key deliverables will be:

A literature review of various year-on-year LPI models.

A description and comparison of different methods, both market-consistent and real-world, with a particular focus on real-world ones.

These can be supplemented by some modelling to support the case for one model over another.

There should be some conclusions regarding the suitability of each real-world method.

Project References:

Some initial references are:

https://www.actuaries.org.uk/system/files/documents/pdf/greenwood.pdf

https://www.actuaries.org.uk/system/files/field/document/SessionalApril2019_FINAL.PDF

https://www.lgim.com/landg-assets/lgim/_document-library/insights/client-solutions/client-solutions-lpi-linked-cashflows-sept-2018.pdf

https://www.theactuary.com/features/2017/08/2017/08/07/paying-lpi-service-market-prices

https://warwick.ac.uk/fac/sci/statistics/staff/academic-research/hutton/pensionsroyalsociety/andrewdsmith2019.03.16rspensions.pdf

https://silo.tips/download/the-use-of-option-pricing-theory-for-valuing-benefits-with-cap-and-collar-guaran

For an introduction to inflation and inflation derivative modelling, the student can refer, for example, to Section VI of Interest Rate Models - Theory and Practice: With Smile, Inflation and Credit (2006) by Brigo and Mercurio.


Baillie Gifford & Howden Group - A Review of Cyber Loss Risk Taxonomies and Quantification

Supervisor:

Stephen Pashley
Risk Director

Ruben Cohen
Operational Risk Analytics


Two participating students from Heriot-Watt University

Detailed quantification of cyber risk is still in its infancy, yet it is a real issue for financial services businesses making decisions on the level of investment in cyber security defences and on cyber risk insurance. This project aims to assist firms in understanding their cyber risk profile better by bringing ideas from the academic literature to help influence the development of firms’ cyber risk assessment frameworks and decision making. It will help firms to describe cyber risk better within their organisations, and therefore to make better decisions. This includes helping firms understand what is on and off cover in the context of their own risk profile and potential loss events.

The topic is challenging because cyber risk sits across several overlapping risk areas and represents a moving target due to the evolving threat landscape. Furthermore, the mapping of cyber events to related losses is not straightforward: we have a good understanding of potential cyber events but struggle to quantify the potential losses and impacts. As a result, there continues to be a lack of clarity as to how to measure and quantify this risk in a meaningful way.

Some academic and industry-led work has been done attempting to define lexicons and risk taxonomies, and to start looking at the quantification of cyber risks. Much of this is done in the context of cyber insurance. It seems, however, that very few of these works acknowledge and/or refer to each other, so they do not typically build on one another but take different directions. Overall, the level of analysis continues to lag far behind research undertaken more generally on operational risk across financial services.

The challenges are therefore in establishing a clear approach to measuring cyber risk, with knock-on impacts for industry when looking, for example, to set risk appetite or to transfer risk via insurance.

The project will involve:

A review of recent academic literature on cyber risk definitions and classifications   

A catalogue of available cyber data sources (free and vendor-provided) with a high-level analysis of the data.     

A review of the popular modelling methods, including

Internal Loss Data/External Loss Data based (“actuarial”; a sketch of this approach follows the list below)

Scenario Analysis/Fault Tree based

Blended

Presentation of findings to sponsors and/or a wider industry group.
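
As a flavour of the loss-data based (“actuarial”) method referenced above, the sketch below simulates an aggregate annual cyber loss distribution from a Poisson event frequency and a lognormal loss severity. The parameter values are placeholders that would in practice be fitted to internal or external loss data.

```python
import numpy as np

def aggregate_cyber_losses(freq_lambda, sev_mu, sev_sigma, n_years=50_000, seed=0):
    """Frequency-severity ('actuarial') simulation of annual cyber losses:
    Poisson number of events per year, lognormal loss per event."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(freq_lambda, size=n_years)
    totals = np.array([rng.lognormal(sev_mu, sev_sigma, size=n).sum() for n in counts])
    return totals

# Illustrative parameters only (to be calibrated to loss data):
losses = aggregate_cyber_losses(freq_lambda=2.0, sev_mu=11.0, sev_sigma=1.5)
print(losses.mean(), np.quantile(losses, 0.995))   # mean and 99.5th percentile loss
```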

This project should be seen as a building block to further work on this topic. For example:

Understanding the monetary financial losses (including response costs) associated with different IT-related incidents, events and “attacks”. This involves working from existing taxonomies of cyber threat and event types, combining them with a business process or product line, and then identifying and modelling the impact on these processes from such events or attacks. From there a loss model could be developed.

Developing a scenario analysis and stress testing framework for cyber risk events and their impact on capital adequacy, operational resiliency or reputation.

Understanding operational resilience by linking causal factors to events and/or outcomes.

Barclays - Predicting unprecedented moves in a bank's loan portfolio

Supervisor:

John Faben
Liquidity Risk Models VP


One participating student from the University of Edinburgh

Banks issue committed loan facilities to their clients in which they agree to supply the client with funds up to a given limit when the client requires them. These loan facilities are a source of liquidity risk for banks, as when clients draw down on the loan facilities, this results in cash outflows for the bank. In order to manage liquidity risk, banks need to consider the extremes of the distribution of loan drawdowns, and ensure that they will have enough liquid assets available to meet their obligations even in extreme scenarios. In particular, they might need to consider drawdown rates which are higher than those which have been historically observed, while remaining plausible given what we know about customer behaviour during times of stress. This project will look at the historical data regarding loan drawdowns within a large investment bank, and attempt to design models which predict what the drawdown rate would look like in times of historically unprecedented market stress. This will involve building a regression model based on publicly available market data, with a particular focus on performance at the extremes of the distribution. The literature on 'Imbalanced Regression' should give some background on the sorts of techniques that can be used.
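
As one hedged illustration of the reweighting idea from the imbalanced-regression literature, the sketch below weights observations by the inverse frequency of their target bin before a simple weighted least-squares fit, so rare, extreme drawdown rates carry more weight. The feature matrix of market indicators and the vector of observed drawdown rates are assumed inputs, not part of the project data.

```python
import numpy as np

def rarity_weights(y, n_bins=20, eps=1e-3):
    """Weight each observation by the inverse frequency of its target bin,
    so rare, extreme drawdown rates count more in the fit."""
    counts, edges = np.histogram(y, bins=n_bins)
    idx = np.clip(np.digitize(y, edges[1:-1]), 0, n_bins - 1)
    w = 1.0 / (counts[idx] + eps)
    return w / w.mean()

def weighted_linear_fit(X, y, w):
    """Weighted least-squares fit of drawdown rates on market indicators."""
    Xw = X * np.sqrt(w)[:, None]
    yw = y * np.sqrt(w)
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return beta
```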

Project References:

Learning from imbalanced data: open challenges and future directions (Bartosz Krawczyk, 2016)

Delving into Deep Imbalanced Regression (Yuzhe Yang, Kaiwen Zha, Ying-Cong Chen, Hao Wang, Dina Katabi, 2021)

NatWest - Identifying emerging threats in personal lending portfolio using machine learning techniques

Supervisor:

Qian Yang
Credit Insight & Analytics


One participating student from Heriot-Watt University

Effective credit risk measurement and management is of paramount importance for any banking institution, and NatWest Group is no exception. Apart from making sure the right lending decisions are made in the first place, proactive portfolio monitoring and horizon scanning for emerging threats also play a vital role in supporting sustainable growth. In personal lending, portfolio credit risk measurement is carried out through a combination of art and science, in which subject matter expert (SME) judgement and quantitative analysis go hand in hand to deliver the best result.

To expand the portfolio credit risk measurement toolbox, we’d like to explore how data science and machine learning techniques can be leveraged to drive a more data-centric approach to identifying emerging threats and risk pockets. As such, this project will be empirical, with access to account/customer-level data from the NatWest personal lending portfolio.

Recognising the vast number of varied techniques in the field, we are happy to tailor the problem to the skills and experience of the right candidate. However, the problem will focus on the identification of emerging threats in the personal lending portfolio using machine learning techniques, with a view to developing a proof of concept or, if possible, a self-contained analytical tool.

The aspiration is to explore how data science techniques can be used systematically to complement the incumbent approach for identifying emerging threats. Potential problems include identifying high-risk segments in the personal lending portfolio using clustering analysis, identifying suitable leading indicators from the unsecured portfolio to inform emerging threats in the secured portfolio using neural networks, or sentiment/topic analysis of established market insight data, to name a few.
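
As a minimal sketch of the clustering idea mentioned above (one possible direction, not a prescribed methodology), account-level features could be standardised, clustered, and the clusters ranked by observed bad rate to surface candidate high-risk segments. The features and default_flag arrays are assumed inputs standing in for the portfolio data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def risk_segments(features, default_flag, n_clusters=6, seed=0):
    """Cluster account-level features and rank clusters by observed bad rate,
    a simple way to surface potential high-risk pockets in the portfolio."""
    X = StandardScaler().fit_transform(features)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
    bad_rate = {k: default_flag[labels == k].mean() for k in range(n_clusters)}
    return labels, dict(sorted(bad_rate.items(), key=lambda kv: -kv[1]))
```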
