Efficient use of data for LSTM mortality forecasting
Stockholm University, Faculty of Science, Department of Mathematics. ORCID iD: 0000-0001-7235-384X
Stockholm University, Faculty of Science, Department of Mathematics. ORCID iD: 0000-0001-6338-3692
2022 (English). In: European Actuarial Journal, ISSN 2190-9733, E-ISSN 2190-9741, Vol. 12, no. 2, p. 749-778. Article in journal (Refereed). Published.
Abstract [en]

We consider a simple long short-term memory (LSTM) neural network extension of the Poisson Lee-Carter model, with a particular focus on different procedures for how to use training data efficiently, combined with ensembling to stabilise the predictive performance. We compare the standard approach of withholding the last fraction of observations for validation, with two other approaches: sampling a fraction of observations randomly in time; and splitting the population into two parts by sampling individual life histories. We provide empirical and theoretical support for using these alternative approaches. Furthermore, to improve the stability of long-term predictions, we consider boosted versions of the Poisson Lee-Carter LSTM. In the numerical illustrations it is seen that even in situations where mortality rates are essentially log-linear as a function of calendar time, the boosted model does not perform significantly worse than a simple random walk with drift, and when non-linearities are present the predictive performance is improved. Moreover, boosting allows us to obtain reasonable model calibrations based on as few data points as 20 years. 
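For orientation, the Poisson Lee-Carter model that the paper extends has the following standard form (the notation is the usual one from the literature, not necessarily the paper's). In the classical model the period index is forecast with a random walk with drift, which is the benchmark mentioned in the abstract; in the LSTM extension a recurrent network produces that forecast instead.

```latex
% Standard Poisson Lee-Carter model: D_{x,t} deaths at age x in calendar year t,
% E_{x,t} the corresponding exposure-to-risk.
\[
  D_{x,t} \sim \mathrm{Poisson}\!\bigl(E_{x,t}\, e^{\alpha_x + \beta_x \kappa_t}\bigr),
  \qquad \sum_x \beta_x = 1, \qquad \sum_t \kappa_t = 0.
\]
% Classical forecast of the period index: a random walk with drift,
\[
  \kappa_{t+1} = \kappa_t + \theta + \varepsilon_{t+1}, \qquad
  \varepsilon_{t+1} \sim \mathcal{N}(0, \sigma^2).
\]
% In the LSTM extension, future values of \kappa_t are instead predicted by a
% recurrent (LSTM) network trained on the estimated past values.
```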

Place, publisher, year, edition, pages
2022. Vol. 12, no 2, p. 749-778
Keywords [en]
Sequential neural networks, Mortality forecasting, Ensemble models, Boosting, Lee-Carter model
National Category
Probability Theory and Statistics
Identifiers
URN: urn:nbn:se:su:diva-193179
DOI: 10.1007/s13385-022-00307-3
ISI: 000777441700001
Scopus ID: 2-s2.0-85127599812
OAI: oai:DiVA.org:su-193179
DiVA id: diva2:1554490
Available from: 2021-05-14. Created: 2021-05-14. Last updated: 2023-06-20. Bibliographically approved.
In thesis
1. Modern developments in insurance: IFRS 17 and LSTM forecasting
2021 (English). Licentiate thesis, comprehensive summary (Other academic).
Abstract [en]

The papers presented here cover two different themes, both with applications in life insurance. The focus in the first paper is on determining the financial position and performance of an insurance company, in accordance with IFRS 17. To derive the financial performance of an insurance company one needs to determine how a premium paid should be earned over time, and how to measure the costs associated with this earned premium. This is a complex matter, since premium payments can provide many years of coverage, and claims payments are often not fully known until many years later. IFRS 17 suggests a way of doing this, by specifying how to measure the unearned future profit for a group of insurance contracts. We give a mathematical interpretation of the regulatory texts, resulting in an algorithm for profit or loss defined in terms of the change in this unearned future profit and the risk-based liability value. Furthermore, we suggest a multi-period cost-of-capital approach as an appropriate valuation method for this purpose, and illustrate the practicability of this method, and allocation of this value to subportfolios, in a large scale numerical example.
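As a rough illustration of the mechanism described above, the sketch below rolls an unearned-profit balance forward and releases a share of it as profit each period in proportion to the coverage provided. The function name and the proportional release rule are hypothetical simplifications; this is a schematic of the general idea only, not the algorithm or valuation method derived in the paper.

```python
# Schematic sketch (hypothetical, simplified): release an unearned future profit
# balance over the coverage period in proportion to coverage units.
# This is NOT the paper's algorithm, only an illustration of the general idea.

def release_unearned_profit(initial_unearned_profit, coverage_units):
    """Return the profit recognised in each period.

    coverage_units: coverage provided per period (e.g. policy-years in force).
    """
    unearned = initial_unearned_profit
    remaining_units = sum(coverage_units)
    recognised = []
    for units in coverage_units:
        share = units / remaining_units      # fraction of remaining coverage
        profit = unearned * share            # profit earned in this period
        recognised.append(profit)
        unearned -= profit                   # roll the balance forward
        remaining_units -= units
    return recognised

# Example: 100 of unearned profit, earned over 4 years of equal coverage.
print(release_unearned_profit(100.0, [1, 1, 1, 1]))  # [25.0, 25.0, 25.0, 25.0]
```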

The second paper concentrates on mortality forecasting, which is an important aspect of valuing and pricing life insurance contracts. We consider an extension of the Poisson Lee-Carter model, where the mortality trend is modelled by a long short-term memory neural network. Different approaches to calibrating the network are suggested, with the aim of using training data efficiently; these are combined with ensembling to enhance the predictive performance. The stability of long-term predictions is improved by considering boosted versions of the model, which, furthermore, allow us to obtain reasonable predictions even in cases where data are very scarce.
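The ensembling mentioned above can be illustrated schematically: fit several networks that differ only in their random initialisation (and possibly in how the training data are split) and average their forecasts. The fit_and_forecast function below is a hypothetical placeholder; the sketch shows the ensembling idea, not the paper's exact procedure.

```python
import numpy as np

# Schematic ensembling sketch. fit_and_forecast is a hypothetical placeholder
# that trains one LSTM model and returns a forecast of length `horizon`.

def ensemble_forecast(fit_and_forecast, train_data, horizon, n_members=10):
    forecasts = np.stack([
        fit_and_forecast(train_data, horizon, seed=s)  # one member per seed
        for s in range(n_members)
    ])
    return forecasts.mean(axis=0)  # averaging stabilises the prediction
```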

Place, publisher, year, edition, pages
Stockholm: Department of Mathematics, 2021
National Category
Probability Theory and Statistics
Research subject
Mathematical Statistics
Identifiers
urn:nbn:se:su:diva-193330 (URN)
Presentation
2021-06-09, online via Zoom, public link is available at the department website, 15:15 (English)
Opponent
Supervisors
Available from: 2021-05-21. Created: 2021-05-20. Last updated: 2022-02-25. Bibliographically approved.
2. Modern developments in insurance mathematics
2023 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Arguably the most important developments in the insurance industry in the last decade have been centered around two themes: regulation and machine learning. Regulation has affected both actuarial work and research in insurance mathematics through the introduction of Solvency II in 2016, and more recently IFRS 17. The use of machine learning methods, and in particular neural network models and tree-based methods, has been increasing in research papers in insurance mathematics in recent years, and has furthermore started influencing the insurance industry, e.g. in pricing. This thesis consists of four papers exploring these two themes.

Paper I is focused on how to implement the new accounting regulation IFRS 17 in an economically and theoretically sound way. We provide a mathematical interpretation of this extensive regulation. In particular we define an algorithm for how to determine unearned future profits, and how these can be systematically converted to actual profits over time. The algorithm is a crucial ingredient in any practical implementation of the regulation. Furthermore, we suggest suitable methods for valuation of insurance contracts, and allocation of this value to groups of contracts, and demonstrate the practicability of the algorithm and methods in a large scale numerical example. 

Paper II is concerned with mortality forecasting, which is an important aspect of valuing and pricing life insurance contracts. We consider an extension of the Poisson Lee-Carter model, where the mortality trend is modelled by a long short-term memory (LSTM) neural network. Different calibration approaches of the network are suggested, with the aim of using training data efficiently. In particular, we consider a novel approach to splitting data into training and validation data based on the construction of synthetic subpopulations. The stability of long-term predictions is improved by considering boosted versions of the model, which allows us to obtain reasonable predictions even in cases where the number of observations is very small. 
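One plausible way to construct such synthetic subpopulations, assumed here for illustration since the summary only states that individual life histories are sampled, is to thin the observed death counts binomially and split the exposures accordingly, giving one subpopulation for training and one for validation.

```python
import numpy as np

# Hedged sketch: split death counts D[x, t] and exposures E[x, t] into two
# synthetic subpopulations by binomial thinning. One plausible construction
# consistent with the summary above, not necessarily the paper's procedure.

rng = np.random.default_rng(0)

def split_population(D, E, p=0.5):
    D = np.asarray(D)
    D_train = rng.binomial(D, p)        # each death kept in subpop 1 w.p. p
    D_valid = D - D_train               # remaining deaths form subpop 2
    E_train = p * np.asarray(E)         # exposures split deterministically
    E_valid = (1 - p) * np.asarray(E)
    return (D_train, E_train), (D_valid, E_valid)
```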

In Paper III we consider a premium control problem for a mutual non-life insurer, formalised in terms of a random horizon Markov decision process (MDP). The aim of the insurer is to obtain a premium rule that generates a low, stable premium and leads to a low probability of default. In realistic settings, taking into account delays in claims payments and feedback effects, classic dynamic programming methods for solving the problem are not feasible. Instead, we explore reinforcement learning algorithms combined with function approximation. We show that a carefully designed reinforcement learning algorithm allows us to obtain an approximate optimal premium rule that gives a good approximation of the true optimal premium rule in a simplified setting, and, furthermore, that the approximate optimal premium rule in a more realistic setting outperforms several benchmark rules.
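As background, a generic random horizon MDP objective of the kind referred to above can be written as follows; the notation is illustrative only, and the precise state, cost and termination definitions used in the paper are not reproduced here.

```latex
% Generic random horizon MDP objective (illustrative notation, not the paper's):
% the premium rule corresponds to a policy \pi, and \tau is the random time at
% which the process terminates (e.g. at default). The insurer seeks \pi minimising
\[
  V^{\pi}(s_0) \;=\; \mathbb{E}_{\pi}\!\left[\,\sum_{t=0}^{\tau-1} \gamma^{t}\, c(S_t, A_t) \;\middle|\; S_0 = s_0\right],
\]
% where the one-step cost c penalises high or unstable premiums and termination
% at \tau (default) may carry an additional cost.
```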

Paper IV delves deeper into theoretical aspects of the reinforcement learning algorithm considered in Paper III. While there are earlier results showing convergence of linear semi-gradient SARSA for infinite horizon discounted MDPs, there are none for random horizon MDPs. In Paper IV we consider a variant of this algorithm, where the parameter vector and policy are updated at the end of each trajectory, after reaching the terminal state. Using general results for stochastic approximations, we show that this version of the algorithm converges with probability one in the random horizon case, under similar conditions on the behaviour policy as those used to derive earlier results for infinite horizon discounted MDPs.
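For illustration, a minimal sketch of linear semi-gradient SARSA in which the parameter vector is only updated at the end of each trajectory might look as follows. The environment interface, feature map and behaviour policy are hypothetical placeholders, and the sketch does not reproduce the paper's precise variant or the conditions under which convergence is shown.

```python
import numpy as np

# Minimal sketch: linear semi-gradient SARSA with end-of-trajectory updates.
# `env`, `features` and `policy` are hypothetical placeholders; this illustrates
# the algorithm class discussed above, not the paper's implementation.

def q_value(w, features, s, a):
    return w @ features(s, a)                 # linear action-value approximation

def run_episode(env, w, features, policy, alpha, gamma=1.0):
    delta_w = np.zeros_like(w)                # accumulate updates within the episode
    s = env.reset()
    a = policy(w, s)                          # behaviour policy, e.g. epsilon-greedy
    done = False
    while not done:
        s_next, reward, done = env.step(a)
        if done:
            target = reward                   # terminal state has value zero
        else:
            a_next = policy(w, s_next)
            target = reward + gamma * q_value(w, features, s_next, a_next)
        td_error = target - q_value(w, features, s, a)
        delta_w += alpha * td_error * features(s, a)   # semi-gradient step
        if not done:
            s, a = s_next, a_next
    return w + delta_w                        # parameters change only at episode end
```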

Place, publisher, year, edition, pages
Stockholm: Department of Mathematics, Stockholm University, 2023. p. 43
National Category
Probability Theory and Statistics
Research subject
Mathematical Statistics
Identifiers
urn:nbn:se:su:diva-218320 (URN)
978-91-8014-400-1 (ISBN)
978-91-8014-401-8 (ISBN)
Public defence
2023-09-22, Lärosal 18, hus 2, Campus Albano, Albanovägen 18, Stockholm, 13:00 (English)
Opponent
Supervisors
Available from: 2023-08-30. Created: 2023-06-20. Last updated: 2023-08-10. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text; Scopus

Authority records

Lindholm, Mathias; Palmborg, Lina
