Stockholm University, Faculty of Social Sciences, Department of Statistics.
Empirical properties of closed- and open-economy DSGE models of the Euro area (2008). In: Macroeconomic dynamics (Print), ISSN 1365-1005, E-ISSN 1469-8056, Vol. 12, p. 2-19. Article in journal (Refereed)

In this paper, we compare the empirical properties of closed- and open-economy DSGE models estimated on Euro area data. The comparison is made along several dimensions; we examine the models in terms of their marginal likelihoods, forecasting performance, variance decompositions, and their transmission mechanisms of monetary policy.

Stockholm University, Faculty of Social Sciences, Department of Statistics.
Forecasting performance of an open economy DSGE model (2007). In: Econometric Reviews, ISSN 0747-4938, E-ISSN 1532-4168, Vol. 26, no 2-4, p. 289-328. Article in journal (Refereed)

This paper analyzes the forecasting performance of an open economy dynamic stochastic general equilibrium (DSGE) model, estimated with Bayesian methods, for the Euro area during 1994Q1-2002Q4. We compare the DSGE model and a few variants of this model to various reduced form forecasting models such as vector autoregressions (VARs) and vector error correction models (VECMs), estimated both by maximum likelihood and by two different Bayesian approaches, and traditional benchmark models, e.g., the random walk. The accuracy of point forecasts, interval forecasts and the predictive distribution as a whole is assessed in an out-of-sample rolling event evaluation using several univariate and multivariate measures. The results show that the open economy DSGE model compares well with more empirical models and thus that the tension between rigor and fit in older generations of DSGE models is no longer present. We also critically examine the role of Bayesian model probabilities and other frequently used low-dimensional summaries, e.g., the log determinant statistic, as measures of overall forecasting performance.

• 3. Ahmed, S. Ejaz
Stockholm University, Faculty of Social Sciences, Department of Statistics.
Estimation of Several Intraclass Correlation Coefficients (2015). In: Communications in Statistics - Simulation and Computation, ISSN 0361-0918, E-ISSN 1532-4141, Vol. 44, no 9, p. 2315-2328. Article in journal (Refereed)

An intraclass correlation coefficient observed in several populations is estimated. The basis is a variance-stabilizing transformation. It is shown that the intraclass correlation coefficient from any elliptical distribution should be transformed in the same way. Four estimators are compared: an estimator where the components in a vector consisting of the transformed intraclass correlation coefficients are estimated separately, an estimator based on a weighted average of these components, a pretest estimator where the equality of the components is tested and the outcome of the test is used in the estimation procedure, and a James-Stein estimator which shrinks toward the mean.
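A small sketch may make the abstract concrete. The paper's exact variance-stabilizing transformation is not reproduced here, so the familiar Fisher z-transform of a correlation is used as a stand-in, and the James-Stein idea is mimicked with a fixed (not data-driven) shrinkage weight; all coefficients below are hypothetical.

```python
import math

def z_transform(r):
    # Fisher-type variance-stabilizing transform of a correlation.
    # Stand-in only: the paper's exact transform for the intraclass
    # correlation may differ.
    return 0.5 * math.log((1 + r) / (1 - r))

def shrink_toward_mean(values, weight):
    # Shrink each transformed component toward the common mean,
    # mimicking the James-Stein idea with a fixed shrinkage weight.
    m = sum(values) / len(values)
    return [m + (1 - weight) * (v - m) for v in values]

rhos = [0.30, 0.35, 0.50]            # hypothetical coefficients from 3 populations
zs = [z_transform(r) for r in rhos]  # the "separate" estimator works on these
pooled = shrink_toward_mean(zs, weight=0.5)
```

The pretest estimator described in the abstract would switch between `zs` (equality rejected) and the common mean (equality not rejected); the shrinkage version interpolates instead.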

• 4.
Stockholm University, Faculty of Science, Department of Mathematics.
Modeling Realized Covariance of Asset Returns (2019). Licentiate thesis, comprehensive summary (Other academic)

In this thesis, which consists of two papers, we consider the modeling of positive definite symmetric matrices, in particular covariance matrices of financial asset returns. The return covariance matrix describes the magnitude with which prices of financial assets tend to change over time, and how price changes between different assets are related. It is an instrumental quantity in many financial applications and, furthermore, an important component in understanding the dynamics present prior to and during times of financial turbulence, such as the 2008 financial crisis.

In the first paper, we provide several goodness-of-fit tests applicable to models driven by a centralized Wishart process. To apply such a distributional assumption has become a popular way of modeling the stochastic properties of time-series of realized covariance matrices for asset returns. The paper includes a simulation study that aims to investigate how the tests perform under model uncertainty stemming from parameter estimation. In addition, the presented methods are used to evaluate the fit of a typical model of realized covariance adapted to real data on six stocks traded on the New York Stock Exchange.

The second paper considers positive definite and symmetric random matrices of the exponential family. Under certain conditions for this class of distributions, we derive the Stein-Haff identity. Furthermore, we determine this identity in the case of the matrix-variate gamma distribution and apply it in order to present an estimator that outperforms the maximum likelihood estimator in terms of Stein's loss function. Finally, a small simulation study is conducted to support the theoretical results.

• 5.
Stockholm University, Faculty of Science, Department of Mathematics.
Stein-Haff Identity for the Exponential Family (2018). In: Theory of Probability and Mathematical Statistics, ISSN 0094-9000, Vol. 99. Article in journal (Refereed)

In this paper, the Stein-Haff identity is established for positive-definite and symmetric random matrices belonging to the exponential family. The identity is then applied to the matrix-variate gamma distribution, and an estimator that dominates the maximum likelihood estimator in terms of Stein's loss is obtained. Finally, a simulation study is conducted in order to support the theoretical results.

• 6.
Stockholm University, Faculty of Science, Department of Mathematics.
Scan Statistics for Space-Time Cluster Detection (2018). Licentiate thesis, comprehensive summary (Other academic)

Scan statistics are used by public health agencies to detect and localize disease outbreaks. This thesis provides an overview of scan statistics in the context of prospective disease surveillance and outbreak detection, presents a novel scan statistic to deal with the type of zero-abundant data that is often encountered in these settings, and—perhaps most importantly—implements this and other scan statistics in a freely available and open source R package. Additionally, Markov processes and time series methods are frequently used in disease surveillance. The last part of this thesis presents some computationally efficient methods for density evaluation and simulation of irregularly sampled AR(1) processes that may be useful when implementing surveillance methods based on these types of processes.

• 7.
Stockholm University, Faculty of Science, Department of Mathematics.
Stockholm University, Faculty of Science, Department of Mathematics.
An unconditional space-time scan statistic for ZIP-distributed data (2019). In: Scandinavian Journal of Statistics, ISSN 0303-6898, E-ISSN 1467-9469, Vol. 46, no 1, p. 142-159. Article in journal (Refereed)

A scan statistic is proposed for the prospective monitoring of spatiotemporal count data with an excess of zeros. The method, which is based on an outbreak model for the zero-inflated Poisson distribution, is shown to be superior to traditional scan statistics based on the Poisson distribution in the presence of structural zeros. The spatial accuracy and the detection timeliness of the proposed scan statistic are investigated by means of simulation, and an application to the weekly cases of campylobacteriosis in Germany illustrates how the scan statistic could be used to detect emerging disease outbreaks. An implementation of the method is provided in the open-source R package scanstatistics, available on the Comprehensive R Archive Network.
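The zero-inflated Poisson (ZIP) model underlying this scan statistic mixes structural zeros with ordinary Poisson counts. A minimal sketch of its probability mass function follows; the scan statistic itself (implemented in the scanstatistics package) builds a likelihood-ratio detector on top of this building block.

```python
import math

def zip_pmf(k, lam, pi):
    # Zero-inflated Poisson: with probability pi the count is a
    # structural zero; otherwise it is drawn from Poisson(lam).
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi + (1 - pi) * poisson if k == 0 else (1 - pi) * poisson
```

For example, with `lam = 2.0` and `pi = 0.3`, the probability of observing a zero is inflated well above the plain Poisson value `exp(-2)`.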

• 8.
Stockholm University, Faculty of Science.
Limit theorems for some stochastic epidemic models (1994). Doctoral thesis, comprehensive summary (Other academic)
• 9.
Stockholm University, Faculty of Science, Department of Mathematics. Matematisk statistik.
The asymptotic final size distribution of multitype chain-binomial epidemic processes (1999). In: Advances in Applied Probability, ISSN 0001-8678, Vol. 31, no 1, p. 220-234. Article in journal (Refereed)

A multitype chain-binomial epidemic process is defined for a closed finite population by sampling a simple multidimensional counting process at certain points. The final size of the epidemic is then characterized, given the counting process, as the smallest root of a non-linear system of equations. By letting the population grow, this characterization is used, in combination with a branching process approximation and a weak convergence result for the counting process, to derive the asymptotic distribution of the final size. This is done for processes with an irreducible contact structure both when the initial infection increases at the same rate as the population and when it stays fixed.
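In the single-type special case, the final-size characterization reduces to the classical scalar equation z = 1 - exp(-R0*z) for the fraction z ultimately infected; the paper's multitype result replaces this scalar equation with the smallest root of a nonlinear system. A fixed-point sketch of the scalar case (an illustration, not the paper's own computation):

```python
import math

def final_size(R0, tol=1e-10, max_iter=1000):
    # Smallest positive root of z = 1 - exp(-R0 * z) for R0 > 1,
    # found by fixed-point iteration from an interior starting value.
    z = 0.5
    for _ in range(max_iter):
        z_new = 1.0 - math.exp(-R0 * z)
        if abs(z_new - z) < tol:
            break
        z = z_new
    return z
```

For R0 = 2 this converges to roughly 0.797, i.e., about 80% of a large population is eventually infected in a major outbreak.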

• 10.
Stockholm University, Faculty of Science, Department of Mathematics. Matematisk statistik.
Modelling the spread of penicillin-resistant Streptococcus pneumoniae in day-care and evaluation of intervention (2005). In: Statistics in Medicine, ISSN 0277-6715, Vol. 24, no 23, p. 3593-3607. Article in journal (Refereed)

In 1995, a disease control and intervention project was initiated in Malmöhus county in southern Sweden to limit the spread of penicillin-resistant pneumococci. Since most of the carriers of pneumococci are preschool children, and since most of the spread is believed to take place in day-care, a mathematical model, in the form of a stochastic process, for the spread in a day-care group was constructed. Effects of seasonal variation and size of the day-care group were particularly considered. The model was then used for comparing results from computer simulations without and with intervention. Results indicate that intervention is highly effective in day-care groups with more than ten children during the second half of the year.

• 11.
Stockholm University, Faculty of Science, Department of Mathematics.
Card counting in continuous time. Manuscript (preprint) (Other academic)
• 12.
Stockholm University, Faculty of Science, Department of Mathematics.
Credit default model for a dynamically changing economy. Article in journal (Refereed)
• 13.
Stockholm University, Faculty of Science, Department of Mathematics.
Credit default model for a dynamically changing economy (2008). Report (Other academic)

We propose a model describing an economy where companies may default due to contagion. By using standard approximation results for stochastic processes we are able to describe the features of the model. It turns out that the model reproduces the oscillations in the default rates that have been observed empirically. That is, we have an intrinsic oscillation in the economic system without applying any external macroeconomic force. These oscillations can be understood as a cleansing of unhealthy companies during a recession, with the recession ending when sufficiently many of the unhealthy companies have left the economy. This is important both from a risk management perspective and from a policy perspective, since it shows that contagious defaults may help to explain the oscillations of business cycles. We also investigate the first-passage times of the default process, using this as a proxy for the time to a recession.

• 14.
Stockholm University, Faculty of Science, Department of Mathematics.
Four applications of stochastic processes: Contagious disease, credit risk, gambling and bond portfolios (2011). Doctoral thesis, comprehensive summary (Other academic)

This thesis consists of four papers on applications of stochastic processes.

In Paper I we study an open population SIS (Susceptible - Infective - Susceptible) stochastic epidemic model from the time of introduction of the disease, through a possible outbreak and to extinction. The analysis uses coupling arguments and diffusion approximations.

In Paper II we propose a model describing an economy where companies may default due to contagion. The features of the model are analyzed using diffusion approximations. We show that the model can reproduce oscillations in the default rates similar to what has been observed empirically.

In Paper III we consider the problem of finding an optimal betting strategy for a house-banked casino card game that is played for several coups before reshuffling. A limit result for the return process is found and the optimal card counting strategy is derived. This continuous time strategy is shown to be a natural generalization of the discrete time strategy where the so called effects of removals are replaced by the infinitesimal generator of the card process.

In Paper IV we study interest rate models where the term structure is given by an affine relation and in particular where the driving stochastic processes are so-called generalised Ornstein-Uhlenbeck processes. We show that the return and variance of a portfolio of bonds which are continuously rolled over, also called rolling horizon bonds, can be expressed using the cumulant generating functions of the background driving Lévy processes associated with the OU processes. We also show that if the short rate, in a risk-neutral setting, is given by a linear combination of generalised OU processes, the implied term structure can be expressed in terms of the cumulant generating functions.

• 15.
Stockholm University, Faculty of Science, Department of Mathematics.
Stockholm University, Faculty of Science, Department of Mathematics.
A stochastic SIS epidemic with demography: initial stages and time to extinction (2011). In: Journal of Mathematical Biology, ISSN 0303-6812, E-ISSN 1432-1416, Vol. 62, no 3, p. 333-348. Article in journal (Refereed)

We study an open population stochastic epidemic model from the time of introduction of the disease, through a possible outbreak and to extinction. The model describes an SIS (susceptible–infective–susceptible) epidemic where all individuals, including infectious ones, reproduce at a given rate. An approximate expression for the outbreak probability is derived using a coupling argument. Further, we analyse the behaviour of the model close to quasi-stationarity, and the time to disease extinction, with the aid of a diffusion approximation. In this situation the number of susceptibles and infectives behaves as an Ornstein–Uhlenbeck process, centred around the stationary point, for an exponentially distributed time before going extinct.

• 16.
Stockholm University, Faculty of Science, Department of Mathematics.
Optimal bond portfolios with fixed time to maturity. Article in journal (Refereed)

We study interest rate models where the term structure is given by an affine relation and in particular where the driving stochastic processes are so-called generalised Ornstein-Uhlenbeck processes.

For many institutional investors it is natural to consider investment in bonds where the time to maturity of the bonds in the portfolio is kept fixed over time. We show that the return and variance of such a portfolio of bonds which are continuously rolled over, also called rolling horizon bonds, can be expressed using the cumulant generating functions of the background driving Lévy processes associated with the OU processes. This allows us to calculate the efficient mean-variance portfolio. We exemplify the results by a case study on U.S. Treasury bonds.

We also show that if the short rate, in a risk-neutral setting, is given by a linear combination of generalised OU processes, the implied term structure can be expressed in terms of the cumulant generating functions. This makes it possible to quite easily see what kind of term structures can be generated with a particular short rate dynamics.

• 17.
Stockholm University, Faculty of Social Sciences, Department of Statistics.
A Classroom Approach to Illustrate Transformation and Bootstrap Confidence Interval Techniques Using the Poisson Distribution (2017). In: International Journal of Statistics and Probability, ISSN 1927-7032, Vol. 6, no 2, p. 42-53. Article in journal (Refereed)

The Poisson distribution is here used to illustrate transformation and bootstrap techniques for constructing a confidence interval for a mean. A comparison is made between the derived intervals and the Wald and score confidence intervals. The discussion takes place in a classroom, where the teacher and the students have previously discussed and evaluated the Wald and score confidence intervals. While interactively getting acquainted with the new techniques step by step, the students learn about the effects of, e.g., bias and asymmetry and ways of dealing with such phenomena. The primary purpose of this teacher-student communication is therefore not to find the best possible interval estimator for this particular case, but rather to provide a study displaying a teacher and her/his students interacting with each other in an efficient and rewarding way. The teacher has a strategy of encouraging the students to take initiatives. This is accomplished by providing the necessary background of the problem and some underlying theory, after which the students are confronted with questions and problem solving. From this the learning process starts. The teacher has to be flexible according to how the students react. The students are supposed to have studied mathematical statistics for at least two semesters.
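The bootstrap part of such a classroom exercise can be sketched with the percentile method on hypothetical Poisson-like counts (the data, seed, and 95% level below are illustrative, not taken from the paper):

```python
import random
import statistics

def bootstrap_ci(sample, level=0.95, n_boot=2000, seed=1):
    # Percentile bootstrap interval for the mean: resample with
    # replacement, then read off empirical quantiles of the means.
    rng = random.Random(seed)
    means = sorted(
        statistics.mean(rng.choices(sample, k=len(sample)))
        for _ in range(n_boot)
    )
    alpha = 1.0 - level
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

counts = [2, 0, 3, 1, 4, 2, 1, 0, 2, 3]   # hypothetical observed counts
ci = bootstrap_ci(counts)
```

Because the resampled means inherit the skewness of the data, the resulting interval need not be symmetric around the sample mean, which is precisely the asymmetry effect the dialogue explores.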

• 18.
Stockholm University, Faculty of Social Sciences, Department of Statistics.
A Classroom Approach to the Construction of an Approximate Confidence Interval of a Poisson Mean Using One Observation (2015). In: American Statistician, ISSN 0003-1305, E-ISSN 1537-2731, Vol. 69, no 3, p. 160-164. Article in journal (Refereed)

Even elementary statistical problems may give rise to a deeper and broader discussion of issues in probability and statistics. The construction of an approximate confidence interval for a Poisson mean turns out to be such a case. The simple standard two-sided Wald confidence interval by normal approximation is discussed and compared with the score interval. The discussion is partly in the form of an imaginary dialog between a teacher and a student, where the latter is supposed to have studied mathematical statistics for at least one semester.
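The two intervals compared can be written down directly. With one observation x, the Wald interval uses x as its own variance estimate, while the score interval solves (x - lam)^2 = z^2 * lam for lam, which shifts and widens the interval. A sketch (x = 10 is an arbitrary example value):

```python
import math

def wald_ci(x, z=1.96):
    # Wald interval: x +/- z*sqrt(x), variance estimated by x itself.
    half = z * math.sqrt(x)
    return x - half, x + half

def score_ci(x, z=1.96):
    # Score interval: the two roots of (x - lam)^2 = z^2 * lam.
    center = x + z ** 2 / 2
    half = z * math.sqrt(x + z ** 2 / 4)
    return center - half, center + half

wald = wald_ci(10)
score = score_ci(10)
```

The score interval sits to the right of the Wald interval, reflecting the skewness of the Poisson distribution that the normal approximation behind the Wald interval ignores.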

• 19.
Stockholm University, Faculty of Social Sciences, Department of Statistics.
A Classroom Approach to the Construction of Bayesian Credible Intervals of a Poisson Mean (2018). Report (Other academic)

The Poisson distribution is here used to illustrate Bayesian inference concepts with the ultimate goal of constructing credible intervals for a mean. The evaluation of the resulting intervals is in terms of potential negative effects of mismatched priors and posteriors. The discussion is in the form of an imaginary dialogue between a teacher and a student, who have met earlier to discuss and evaluate the Wald and score confidence intervals, as well as confidence intervals based on transformation and bootstrap techniques. From the perspective of the student the learning process is akin to a real research situation. By this time the student is supposed to have studied mathematical statistics for at least two semesters.

• 20.
Stockholm University, Faculty of Social Sciences, Department of Statistics.
A classroom approach to the construction of Bayesian credible intervals of a Poisson mean (2019). In: Communications in Statistics - Theory and Methods, ISSN 0361-0926, E-ISSN 1532-415X. Article in journal (Refereed)

The Poisson distribution is here used to illustrate Bayesian inference concepts with the ultimate goal of constructing credible intervals for a mean. The evaluation of the resulting intervals is in terms of mismatched priors and posteriors. The discussion is in the form of an imaginary dialog between a teacher and a student, who have met earlier to discuss and evaluate the Wald and score confidence intervals, as well as confidence intervals based on transformation and bootstrap techniques. From the perspective of the student the learning process is akin to a real research situation. The student is supposed to have studied mathematical statistics for at least two semesters.

• 21.
Stockholm University, Faculty of Social Sciences, Department of Statistics.
Central limit theorems from a teaching perspective (2015). In: Festschrift in Honor of Hans Nyquist on the Occasion of his 65th Birthday / [ed] Ellinor Fackle-Fornius, Stockholm: Stockholm University, 2015, p. 1-6. Chapter in book (Other academic)

Central limit theorems and their applications constitute highlights in probability theory and statistical inference. However, as a teacher, especially in undergraduate courses, you are faced with the challenge of how to introduce the results, in particular how to present and discuss the conditions under which the asymptotic (approximate) results hold. This paper attempts to present some relevant examples for possible use in the classroom.
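One classroom example of the kind suggested: means of heavily skewed exponential draws concentrate around the true mean 1 with spread roughly 1/sqrt(n), which a quick simulation makes visible (the sample sizes and seed here are arbitrary choices, not from the chapter):

```python
import random
import statistics

def sample_means(n, reps, seed=7):
    # Means of n exponential(1) draws, repeated reps times. By the CLT
    # these are approximately N(1, 1/n) despite the skewed parent law.
    rng = random.Random(seed)
    return [statistics.mean(rng.expovariate(1.0) for _ in range(n))
            for _ in range(reps)]

means = sample_means(n=100, reps=2000)
spread = statistics.stdev(means)   # should be close to 1/sqrt(100) = 0.1
```

Repeating this for increasing n lets students watch the histogram of `means` become more symmetric and narrower, which is exactly the condition-and-convergence discussion the abstract refers to.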

• 22.
Stockholm University, Faculty of Social Sciences, Department of Statistics.
Design-based "Optimal" Calibration Weights Under Unit Nonresponse in Survey Sampling (2018). Report (Other academic)

High nonresponse is a very common problem in sample surveys today. In statistical terms we are worried about increased bias and variance of estimators for population quantities such as totals or means. Different methods have been suggested to compensate for this phenomenon. They can roughly be divided into imputation and calibration, and it is the latter approach we focus on here. A wide spectrum of possibilities is included in the class of calibration estimators. We explore linear calibration, where we suggest using a nonresponse version of the design-based optimal regression estimator. Comparisons are made between this estimator and a GREG type estimator. Distance measures play a very important part in the construction of calibration estimators. We show that an estimator of the average response propensity (probability) can be included in the "optimal" distance measure under nonresponse, which helps reduce the bias of the resulting estimator. To illustrate empirically the theoretically derived results for the suggested estimators, a simulation study has been carried out. The population, called KYBOK, consists of clerical municipalities in Sweden, and the variables include financial as well as size measurements. The results are encouraging for the "optimal" estimator in combination with the estimated average response propensity, where the bias was highly reduced for the Poisson sampling cases in the study.

• 23.
Stockholm University, Faculty of Social Sciences, Department of Statistics.
Stockholm University, Faculty of Social Sciences, Department of Statistics.
Calibration for nonresponse treatment: In one or two steps? (2016). In: Statistical Journal of the IAOS, ISSN 1874-7655, E-ISSN 1875-9254, Vol. 32, no 3, p. 375-381. Article in journal (Refereed)

This paper explores the different ways in which auxiliary information can be put to use in calibrated weighting adjustment under survey nonresponse. Information is often present at two levels, the population level and the sample level. The many options available in executing the calibration derive from several factors: One is the order in which the two sources of information enter into calibration, a choice of a bottom-up as opposed to a top-down approach. Another is whether the calibration should be carried out sequentially in two steps, or in one single step with the combined information. A third question is whether one can simplify the procedure, at no major loss of accuracy, by transcribing individual population auxiliary data from the register to the sample units only. We make a systematic list of the possibilities arising for calibration adjustment in this setting. An empirical study concludes the paper.

• 24.
Stockholm University, Faculty of Social Sciences, Department of Statistics.
Stockholm University, Faculty of Social Sciences, Department of Statistics.
Calibration for nonresponse treatment using auxiliary information at different levels (2016). In: Proceedings of the Fifth International Conference on Establishment Surveys, 2016. Conference paper (Other academic)

This paper explores the different ways in which auxiliary information can be put to use in calibrated weighting adjustment under survey nonresponse. Information is often present at two levels, the population level and the sample level. The many options available in executing the calibration derive from several factors: One is the order in which the two sources of information enter into calibration, a choice of a bottom-up as opposed to a top-down approach. Another is whether the calibration should be carried out sequentially in two steps, or in one single step with the combined information. A third question is whether one can simplify the procedure, at no major loss of accuracy, by transcribing individual population auxiliary data from the register to the sample units only. We make a systematic list of the possibilities arising for calibration adjustment in this setting. An empirical study concludes the paper.

• 25.
Stockholm University, Faculty of Social Sciences, Department of Statistics.
Stockholm University, Faculty of Social Sciences, Department of Statistics.
On Simulation of a Fractional Ornstein-Uhlenbeck Process of the Second Kind by the Circulant Embedding Method (2018). In: Stochastic Processes and Applications: SPAS2017, Västerås and Stockholm, Sweden, October 4-6, 2017 / [ed] Sergei Silvestrov, Anatoliy Malyarenko, Milica Rančić, Springer, 2018, p. 155-164. Chapter in book (Refereed)
• 26.
Stockholm University, Faculty of Social Sciences, Department of Statistics.
Stockholm University, Faculty of Social Sciences, Department of Statistics.
Simulations-based Study of Covariance Structure for Fractional Ornstein-Uhlenbeck process of the Second Kind. Manuscript (preprint) (Other academic)
• 27.
Statistiska centralbyrån, Statistics Sweden.
Stockholm University, Faculty of Social Sciences, Department of Statistics. Statistiska centralbyrån, Statistics Sweden.
Alternativa datainsamlingsmetoder i ULF, fas 2: En jämförelse mellan två olika datainsamlingsmetoder [Alternative data collection methods in ULF, phase 2: A comparison of two data collection methods] (2010). Report (Other academic)

This report presents the results from the second and final phase of the methodological study carried out within the project Alternative data collection methods for the Survey of Living Conditions (ULF), which began in 2002.

The main purpose of the study was to compare two data collection methods: a mixed-mode (MM) approach combining face-to-face and telephone interviews without computer assistance, and computer-assisted telephone interviewing (CATI). Comparisons are reported mainly for four quality aspects: (1) measurement quality, (2) the size of the nonresponse error and its impact on estimates, (3) the response rate in the child survey (Barn-ULF), and (4) respondents' willingness to participate in the survey.

The general conclusion of the study is that the systematic error component (measurement and nonresponse error) of the overall accuracy is judged to remain unchanged in a transition to CATI. Since the transition would also free up resources for an increased sample size, the mean squared error (MSE) of the estimates would decrease when moving from the earlier MM approach to CATI as the primary data collection method.

• 28. Ball, Frank
Stockholm University, Faculty of Science, Department of Mathematics.
Reproduction numbers for epidemic models with households and other social structures II: Comparisons and implications for vaccination (2016). In: Mathematical Biosciences, ISSN 0025-5564, E-ISSN 1879-3134, Vol. 274, p. 108-139. Article in journal (Refereed)

In this paper we consider epidemic models of directly transmissible SIR (susceptible -> infective -> recovered) and SEIR (with an additional latent class) infections in fully-susceptible populations with a social structure, consisting either of households or of households and workplaces. We review most reproduction numbers defined in the literature for these models, including the basic reproduction number R_0 introduced in the companion paper, for which we provide a simpler, more elegant derivation. Extending previous work, we provide a complete overview of the inequalities among these reproduction numbers and resolve some open questions. Special focus is put on the exponential-growth-associated reproduction number R_r, which is loosely defined as the estimate of R_0 based on the observed exponential growth of an emerging epidemic obtained when the social structure is ignored. We show that for the vast majority of the models considered in the literature R_r >= R_0 when R_0 >= 1 and R_r <= R_0 when R_0 <= 1. We show that, in contrast to models without social structure, vaccination of a fraction 1 - 1/R_0 of the population, chosen uniformly at random, with a perfect vaccine is usually insufficient to prevent large epidemics. In addition, we provide significantly sharper bounds than the existing ones for bracketing the critical vaccination coverage between two analytically tractable quantities, which we illustrate by means of extensive numerical examples.
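The benchmark coverage in the abstract is the classical homogeneous-mixing threshold; the paper's point is that with household and workplace structure this figure is usually insufficient, so it should be read only as a lower reference point. As a formula:

```python
def critical_vaccination_coverage(R0):
    # Classical threshold 1 - 1/R0 under homogeneous mixing with a
    # perfect vaccine; no coverage is needed when R0 <= 1.
    return max(0.0, 1.0 - 1.0 / R0)
```

For instance, R0 = 2 gives a 50% benchmark coverage, while the structured models in the paper typically require vaccinating a larger fraction.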

• 29. Baresel, Christian
Stockholm University, Faculty of Science, Department of Physical Geography and Quaternary Geology.
Uncertainty-Accounting Environmental Policy and Management of Water Systems (2007). In: Environmental Science & Technology, Vol. 41, no 10, p. 3653-3659. Article in journal (Refereed)

Environmental policies for water quality and ecosystem management do not commonly require explicit stochastic accounts of uncertainty and risk associated with the quantification and prediction of waterborne pollutant loads and abatement effects. In this study, we formulate and investigate a possible environmental policy that does require an explicit stochastic uncertainty account. We compare both the environmental and economic resource allocation performance of such an uncertainty-accounting environmental policy with that of deterministic, risk-prone and risk-averse environmental policies under a range of different hypothetical, yet still possible, scenarios. The comparison indicates that a stochastic uncertainty-accounting policy may perform better than deterministic policies over a range of different scenarios. Even in the absence of reliable site-specific data, reported literature values appear to be useful for such a stochastic account of uncertainty.

• 30. Bauder, David
Stockholm University, Faculty of Science, Department of Mathematics.
Bayesian Inference for the Tangent Portfolio (2018). In: International Journal of Theoretical and Applied Finance, ISSN 0219-0249, Vol. 21, no 8, article id 1850054. Article in journal (Refereed)

In this paper, we consider the estimation of the weights of tangent portfolios from the Bayesian point of view, assuming normal conditional distributions of the logarithmic returns. For diffuse and conjugate priors for the mean vector and the covariance matrix, we derive stochastic representations for the posterior distributions of the weights of the tangent portfolio and their linear combinations. Separately, we provide the mean and variance of the posterior distributions, which are of key importance for portfolio selection. The analytic results are evaluated within a simulation study, where the precision of coverage intervals is assessed.
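The point estimate behind the posterior quantities studied here is the familiar tangent-portfolio rule, with weights proportional to Sigma^{-1} (mu - rf * 1). A two-asset sketch with hypothetical inputs follows; the paper works with the posterior distribution of these weights, not the plug-in values shown here.

```python
def tangent_weights_2assets(mu, sigma, rf):
    # Plug-in tangent-portfolio weights for two assets:
    # w proportional to Sigma^{-1} (mu - rf * 1), normalised to sum to 1.
    e = [mu[0] - rf, mu[1] - rf]                  # excess expected returns
    a, b, c, d = sigma[0][0], sigma[0][1], sigma[1][0], sigma[1][1]
    det = a * d - b * c
    raw = [(d * e[0] - b * e[1]) / det,           # 2x2 inverse applied by hand
           (-c * e[0] + a * e[1]) / det]
    total = raw[0] + raw[1]
    return [x / total for x in raw]

# hypothetical means, covariance matrix and risk-free rate
w = tangent_weights_2assets(mu=[0.08, 0.05],
                            sigma=[[0.04, 0.01], [0.01, 0.02]],
                            rf=0.02)
```

Because mu and Sigma are themselves estimated, these plug-in weights are random; quantifying that randomness via posterior distributions is exactly the paper's contribution.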

• 31.
Stockholm University, Faculty of Science, Department of Mathematics.
Regression methods in multidimensional prediction and estimation2007Doctoral thesis, comprehensive summary (Other academic)

In regression with near collinear explanatory variables, the least squares predictor has large variance. Ordinary least squares regression (OLSR) often leads to unrealistic regression coefficients. Several regularized regression methods have been proposed as alternatives. Well-known are principal components regression (PCR), ridge regression (RR) and continuum regression (CR). The latter two involve a continuous metaparameter, offering additional flexibility.

For a univariate response variable, CR incorporates OLSR, partial least squares regression (PLSR), and PCR as special cases, for special values of the metaparameter. CR is also closely related to RR. However, CR can in fact yield regressors that vary discontinuously with the metaparameter. Thus, the relation between CR and RR is not always one-to-one. We develop a new class of regression methods, LSRR, essentially the same as CR, but without discontinuities, and prove that any optimization principle will yield a regressor proportional to a RR, provided only that the principle implies maximizing some function of the regressor's sample correlation coefficient and its sample variance. For a multivariate response vector we demonstrate that a number of well-established regression methods are related, in that they are special cases of basically one general procedure. We try a more general method based on this procedure, with two metaparameters. In a simulation study we compare this method to ridge regression, multivariate PLSR and repeated univariate PLSR. For most types of data studied, all methods do approximately equally well. There are cases where RR and LSRR yield larger errors than the other methods, and we conclude that one-factor methods are not adequate for situations where more than one latent variable is needed to describe the data. Among those based on latent variables, none of the methods tried is superior to the others in any obvious way.
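The instability of OLS under near collinearity, and the stabilizing effect of the ridge penalty, can be sketched in a few lines (synthetic data, hypothetical penalty value; illustrative only):

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge regression coefficients: (X'X + lam*I)^{-1} X'y.
    lam = 0 gives ordinary least squares."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = x1 + 0.01 * rng.normal(size=100)     # nearly collinear with x1
X = np.column_stack([x1, x2])
y = x1 + rng.normal(scale=0.1, size=100)

b_ols = ridge(X, y, 0.0)    # unstable: coefficients blow up and offset each other
b_ridge = ridge(X, y, 1.0)  # penalized: shrunken, stable coefficients
```

The norm of the ridge solution is non-increasing in the penalty, so the regularized coefficients are always at most as large as the OLS ones.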

• 32.
Stockholm University, Faculty of Science, Department of Mathematics.
Stochastic claims reserving in non-life insurance: Bootstrap and smoothing models2011Doctoral thesis, comprehensive summary (Other academic)

In practice there is a long tradition of actuaries calculating reserve estimates according to deterministic methods without explicit reference to a stochastic model. For instance, the chain-ladder was originally a deterministic reserving method. Moreover, the actuaries often make ad hoc adjustments of the methods, for example, smoothing of the chain-ladder development factors, in order to fit the data set under analysis.

However, stochastic models are needed in order to assess the variability of the claims reserve. The standard statistical approach would be to first specify a model, then find an estimate of the outstanding claims under that model, typically by maximum likelihood, and finally use the model to assess the precision of the estimate. As a compromise between this approach and the actuary's way of working without reference to a model, the object of this research area has often been to first construct a model and a method that produce the actuary's estimate, and then use this model to assess the uncertainty of the estimate. A drawback of this approach is that the suggested models have been constructed to give a measure of the precision of the reserve estimate without the possibility of changing the estimate itself.

The starting point of this thesis is the inconsistency between the deterministic approaches used in practice and the stochastic ones suggested in the literature. On one hand, the purpose of Paper I is to develop a bootstrap technique which easily enables the actuary to use other development factor methods than the pure chain-ladder relying on as few model assumptions as possible. This bootstrap technique is then extended and applied to the separation method in Paper II. On the other hand, the purpose of Paper III is to create a stochastic framework which imitates the ad hoc deterministic smoothing of chain-ladder development factors which is frequently used in practice.
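The chain-ladder method referred to above reduces to volume-weighted development factors on a cumulative run-off triangle; a minimal sketch with a hypothetical three-year triangle (the thesis's bootstrap and smoothing extensions are not shown):

```python
import numpy as np

# Hypothetical cumulative run-off triangle: rows are accident years,
# columns are development years; np.nan marks the unobserved future.
C = np.array([
    [100.0, 150.0, 165.0],
    [110.0, 160.0, np.nan],
    [120.0, np.nan, np.nan],
])

def dev_factors(C):
    """Volume-weighted chain-ladder development factors
    f_j = sum_i C[i, j+1] / sum_i C[i, j] over fully observed pairs."""
    factors = []
    for j in range(C.shape[1] - 1):
        mask = ~np.isnan(C[:, j]) & ~np.isnan(C[:, j + 1])
        factors.append(C[mask, j + 1].sum() / C[mask, j].sum())
    return factors

f = dev_factors(C)
# Project the youngest accident year from the latest diagonal to ultimate:
ultimate_last = C[2, 0] * f[0] * f[1]
```

It is exactly these factors that actuaries smooth ad hoc in practice, which motivates the stochastic smoothing framework of Paper III.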

• 33.
Stockholm University, Faculty of Science, Department of Mathematics.
Spectral analysis of the Moore-Penrose inverse of a large dimensional sample covariance matrix2016In: Journal of Multivariate Analysis, ISSN 0047-259X, E-ISSN 1095-7243, Vol. 148, p. 160-172Article in journal (Refereed)

For a sample of $n$ independent identically distributed $p$-dimensional centered random vectors with covariance matrix $\Sigma_n$, let $\tilde{S}_n$ denote the usual sample covariance matrix (centered by the mean) and $S_n$ the non-centered sample covariance matrix (i.e., the matrix of second moment estimates), where $p > n$. In this paper, we provide the limiting spectral distribution and central limit theorem for linear spectral statistics of the Moore-Penrose inverse of $S_n$ and $\tilde{S}_n$. We consider the large dimensional asymptotics when the number of variables $p\rightarrow\infty$ and the sample size $n\rightarrow\infty$ such that $p/n\rightarrow c\in (1, +\infty)$. We present a Marchenko-Pastur law for both types of matrices, which shows that the limiting spectral distributions for both sample covariance matrices are the same. On the other hand, we demonstrate that the asymptotic distribution of linear spectral statistics of the Moore-Penrose inverse of $\tilde{S}_n$ differs in the mean from that of $S_n$.
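When $p > n$ the sample covariance matrix is singular, so the Moore-Penrose inverse takes the place of the ordinary inverse; a small numerical sketch (hypothetical dimensions, no asymptotics) verifying the rank deficiency and the defining pseudo-inverse identity $S S^{+} S = S$:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 10, 5                        # p > n: the singular case of the paper
X = rng.normal(size=(n, p))

S = X.T @ X / n                     # non-centered sample covariance (p x p)
S_pinv = np.linalg.pinv(S)          # Moore-Penrose inverse

rank = np.linalg.matrix_rank(S)     # at most n < p, so S is not invertible
```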

• 34.
Stockholm University, Faculty of Science, Department of Mathematics.
On the Simes inequality in elliptical models2017In: Annals of the Institute of Statistical Mathematics, ISSN 0020-3157, E-ISSN 1572-9052, Vol. 69, no 1, p. 215-230Article in journal (Refereed)

We provide some necessary and some sufficient conditions for the validity of the inequality of Simes in models with elliptical dependencies. Necessary conditions are presented in terms of sufficient conditions for the reverse Simes inequality. One application of our main results concerns the problem of model misspecification, in particular the case that the assumption of Gaussianity of test statistics is violated. Since our sufficient conditions require non-negativity of correlation coefficients between test statistics, we also develop two exact tests for vectors of correlation coefficients and compare their powers in computer simulations.
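For reference, the Simes global test itself is short: with ordered p-values $p_{(1)} \leq \dots \leq p_{(n)}$, the global null is rejected at level $\alpha$ if $p_{(i)} \leq i\alpha/n$ for some $i$. A minimal implementation (the paper's elliptical-dependence conditions are about when this test keeps its level, not about the rule itself):

```python
import numpy as np

def simes_test(pvalues, alpha=0.05):
    """Simes' test of the global null: reject if any ordered p-value
    p_(i) falls at or below the threshold i * alpha / n."""
    p = np.sort(np.asarray(pvalues, dtype=float))
    n = len(p)
    thresholds = alpha * np.arange(1, n + 1) / n
    return bool(np.any(p <= thresholds))

reject = simes_test([0.004, 0.03, 0.20, 0.70])  # 0.004 <= 1 * 0.05 / 4
```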

• 35.
Bowling Green State University, USA.
Robustness of the Inference Procedures for the Global Minimum Variance Portfolio Weights in a Skew Normal Model2015In: European Journal of Finance, ISSN 1351-847X, E-ISSN 1466-4364, Vol. 21, no 13-14, p. 1176-1194Article in journal (Refereed)

In this paper, we study the influence of skewness on the distributional properties of the estimated weights of optimal portfolios and on the corresponding inference procedures derived for the optimal portfolio weights assuming that the asset returns are normally distributed. It is shown that even a simple form of skewness in the asset returns can dramatically influence the performance of the test on the structure of the global minimum variance portfolio. The results obtained can be applied in the small sample case as well. Moreover, we introduce an estimation procedure for the parameters of the skew-normal distribution that is based on the modified method of moments. A goodness-of-fit test for the matrix variate closed skew-normal distribution has also been derived. In the empirical study, we apply our results to real data of several stocks included in the Dow Jones index.
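The global minimum variance portfolio studied here has the closed form $w = \Sigma^{-1}\mathbf{1} / (\mathbf{1}'\Sigma^{-1}\mathbf{1})$; a plug-in sketch with a hypothetical covariance matrix (the paper's skew-normal inference procedures are not reproduced):

```python
import numpy as np

def gmv_weights(sigma):
    """Global minimum variance portfolio: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    ones = np.ones(sigma.shape[0])
    raw = np.linalg.solve(sigma, ones)
    return raw / raw.sum()

# Hypothetical two-asset covariance: the less volatile asset gets more weight
sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
w = gmv_weights(sigma)
```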

• 36.
Stockholm University, Faculty of Science, Department of Mathematics.
Direct shrinkage estimation of large dimensional precision matrix2016In: Journal of Multivariate Analysis, ISSN 0047-259X, E-ISSN 1095-7243, Vol. 146, p. 223-236Article in journal (Refereed)

In this work we construct an optimal shrinkage estimator for the precision matrix in high dimensions. We consider the general asymptotics when the number of variables $p\rightarrow\infty$ and the sample size $n\rightarrow\infty$ so that $p/n\rightarrow c\in (0, +\infty)$. The precision matrix is estimated directly, without inverting the corresponding estimator for the covariance matrix. Recent results from random matrix theory allow us to find the asymptotic deterministic equivalents of the optimal shrinkage intensities and estimate them consistently. The resulting distribution-free estimator almost surely attains the minimum Frobenius loss. Additionally, we prove that the Frobenius norms of the inverse and of the pseudo-inverse sample covariance matrices tend almost surely to deterministic quantities and estimate them consistently. Using this result, we construct a bona fide optimal linear shrinkage estimator for the precision matrix in the case $c < 1$. Finally, a simulation study is provided where the suggested estimator is compared with estimators proposed in the literature. The optimal shrinkage estimator shows significant improvement even for non-normally distributed data.
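The linear shrinkage idea can be illustrated without the random-matrix-theory intensities: in a simulation where the true precision matrix is known, the oracle combination $\alpha S^{-1} + \beta I$ that minimizes the Frobenius loss is a least-squares fit, and it can only improve on the plain inverse sample covariance. This is a sketch of the shrinkage form only; the paper's contribution is estimating the intensities consistently without knowing the truth.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 20, 80                        # c = p/n = 0.25 < 1, so S is invertible
omega = np.eye(p)                    # true precision matrix (here: identity)
X = rng.normal(size=(n, p))

S_inv = np.linalg.inv(X.T @ X / n)   # inverse sample covariance

# Oracle linear shrinkage toward the identity: minimize
# || a * S_inv + b * I - omega ||_F over (a, b) by least squares.
A = np.column_stack([S_inv.ravel(), np.eye(p).ravel()])
coef, *_ = np.linalg.lstsq(A, omega.ravel(), rcond=None)
omega_hat = coef[0] * S_inv + coef[1] * np.eye(p)

loss_shrunk = np.linalg.norm(omega_hat - omega)  # Frobenius losses
loss_plain = np.linalg.norm(S_inv - omega)
```

Since (a, b) = (1, 0) recovers the plain estimator, the oracle loss is never worse.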

• 37.
Stockholm University, Faculty of Science, Department of Mathematics.
Dynamic Conditional Correlation Multiplicative Error Processes2016In: Journal of Empirical Finance, ISSN 0927-5398, E-ISSN 1879-1727, Vol. 36, p. 41-67Article in journal (Refereed)

We introduce a dynamic model for multivariate processes of (non-negative) high-frequency trading variables revealing time-varying conditional variances and correlations. Modeling the variables' conditional mean processes using a multiplicative error model, we map the resulting residuals into a Gaussian domain using a copula-type transformation. Based on high-frequency volatility, cumulative trading volumes, trade counts and market depth of various stocks traded at the NYSE, we show that the proposed transformation is supported by the data and allows capturing (multivariate) dynamics in higher order moments. The latter are modeled using a DCC-GARCH specification. We suggest estimating the model by composite maximum likelihood, which is sufficiently flexible to be applicable in high dimensions. Strong empirical evidence for time-varying conditional (co-)variances in trading processes supports the usefulness of the approach. Taking these higher-order dynamics explicitly into account significantly improves the goodness-of-fit and out-of-sample forecasts of the multiplicative error model.

• 38.
Stockholm University, Faculty of Science, Department of Mathematics.
Stockholm University, Faculty of Science, Department of Mathematics.
On the Product of a Singular Wishart Matrix and a Singular Gaussian Vector in High Dimension2018In: Theory of Probability and Mathematical Statistics, ISSN 0094-9000, Vol. 99Article in journal (Refereed)

In this paper we consider the product of a singular Wishart random matrix and a singular normal random vector. A very useful stochastic representation of this product is derived, using which its characteristic function and asymptotic distribution under the double asymptotic regime are established. We further document a good finite sample performance of the obtained high-dimensional asymptotic distribution via an extensive Monte Carlo study.

• 39.
Stockholm University, Faculty of Science, Department of Mathematics.
Singular inverse Wishart distribution and its application to portfolio theory2016In: Journal of Multivariate Analysis, ISSN 0047-259X, E-ISSN 1095-7243, Vol. 143, p. 314-326Article in journal (Refereed)

The inverse of the standard estimate of the covariance matrix is frequently used in portfolio theory to estimate the optimal portfolio weights. For this problem, the distribution of a linear transformation of the inverse is needed. We obtain this distribution in the case when the sample size is smaller than the dimension, the underlying covariance matrix is singular, and the vectors of returns are independent and normally distributed. For the result, the distribution of the inverse of the covariance estimate is needed; it is derived and referred to as the singular inverse Wishart distribution. We use these results to provide an explicit stochastic representation of an estimate of the mean-variance portfolio weights, as well as to derive its characteristic function and moments of higher order. The results are illustrated using actual stock returns, and a discussion of the practical relevance of the model is presented.

• 40.
Stockholm University, Faculty of Science, Department of Mathematics.
Optimal shrinkage estimator for high-dimensional mean vector2019In: Journal of Multivariate Analysis, ISSN 0047-259X, E-ISSN 1095-7243, Vol. 170, p. 63-79Article in journal (Refereed)

In this paper we derive the optimal linear shrinkage estimator for the high-dimensional mean vector using random matrix theory. The results are obtained under the assumption that both the dimension $p$ and the sample size $n$ tend to infinity in such a way that $p/n\to c\in(0,\infty)$. Under weak conditions imposed on the underlying data generating mechanism, we find the asymptotic equivalents to the optimal shrinkage intensities and estimate them consistently. The proposed nonparametric estimator for the high-dimensional mean vector has a simple structure and is proven to minimize asymptotically, with probability 1, the quadratic loss when $c\in(0,1)$. When $c\in(1,\infty)$ we modify the estimator by using a feasible estimator for the precision covariance matrix. To this end, an exhaustive simulation study and an application to real data are provided where the proposed estimator is compared with known benchmarks from the literature. It turns out that the existing estimators of the mean vector, including the new proposal, converge to the sample mean vector when the true mean vector has an unbounded Euclidean norm.
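The classical benchmark for mean-vector shrinkage is the James-Stein estimator, which shrinks the sample mean toward the origin; a positive-part sketch with known unit variance and hypothetical dimensions (the paper's RMT-based estimator and intensities are different and are not reproduced here):

```python
import numpy as np

def james_stein(xbar, n, sigma2, p):
    """Positive-part James-Stein shrinkage of the sample mean toward 0,
    assuming known observation variance sigma2."""
    shrink = 1.0 - (p - 2) * sigma2 / (n * np.dot(xbar, xbar))
    return max(shrink, 0.0) * xbar

rng = np.random.default_rng(3)
p, n = 50, 25
mu = np.zeros(p)                     # true mean at the origin
X = rng.normal(loc=mu, size=(n, p))
xbar = X.mean(axis=0)

mu_js = james_stein(xbar, n, 1.0, p)
err_js = np.linalg.norm(mu_js - mu)          # shrinkage error
err_sample = np.linalg.norm(xbar - mu)       # sample-mean error
```

With the true mean at the origin, the shrinkage factor lies in [0, 1], so the James-Stein error can never exceed the sample-mean error in this setup.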

• 41.
Stockholm University, Faculty of Science, Department of Mathematics.
The Exact Solution of Multi-period Portfolio Choice Problem with Exponential Utility2016In: Operations Research Proceedings 2014: Selected Papers of the Annual International Conference of the German Operations Research Society (GOR), RWTH Aachen University, Germany, September 2-5, 2014 / [ed] Marco Lübbecke, Arie Koster, Peter Letmathe, Reinhard Madlener, Britta Peis, Grit Walther, Springer, 2016, p. 45-51Chapter in book (Refereed)

In the current paper we derive the exact analytical solution of the multi-period portfolio choice problem for an exponential utility function. It is assumed that the asset returns depend on predictable variables and that the joint random process of the asset returns follows a vector autoregression. We prove that the optimal portfolio weights depend on the covariance matrices of the next two periods and the conditional mean vector of the next period. The case without predictable variables and the case of independent asset returns are special cases of our solution.

• 42.
Stockholm University, Faculty of Science, Department of Mathematics.
How risky is the optimal portfolio which maximizes the Sharpe ratio?2017In: AStA Advances in Statistical Analysis, ISSN 1863-8171, E-ISSN 1863-818X, Vol. 101, no 1, p. 1-28Article in journal (Refereed)

In this paper, we investigate the properties of the optimal portfolio in the sense of maximizing the Sharpe ratio (SR) and develop a procedure for the calculation of the risk of this portfolio. This is achieved by constructing an optimal portfolio which minimizes the Value-at-Risk (VaR) and at the same time coincides with the tangent (market) portfolio on the efficient frontier which is related to the SR portfolio. The resulting significance level of the minimum VaR portfolio is then used to determine the risk of both the market portfolio and the corresponding SR portfolio. However, the expression of this significance level depends on the unknown parameters which have to be estimated in practice. It leads to an estimator of the significance level whose distributional properties are investigated in detail. Based on these results, a confidence interval for the suggested risk measure of the SR portfolio is constructed and applied to real data. Both theoretical and empirical findings document that the SR portfolio is very risky since the corresponding significance level is smaller than 90 % in most of the considered cases.

• 43.
Stockholm University, Faculty of Science, Department of Mathematics.
Toward Sequential Data Assimilation for NWP Models Using Kalman Filter Tools2010Doctoral thesis, comprehensive summary (Other academic)

The aim of meteorological data assimilation is to provide an initial field for Numerical Weather Prediction (NWP) and to sequentially update the knowledge about it using available observations. Kalman filtering is a robust technique for the sequential estimation of the unobservable model state, based on the linear regression concept. Used iteratively together with Kalman smoothing, it can easily be extended to work powerfully in the non-Gaussian and/or non-linear framework. The huge dimensionality of the model state variable for high resolution NWP models (of magnitude 10^8) makes any explicit manipulation of the forecast error covariance matrix, as required for the Kalman filter and Kalman smoother recursions, impossible. For NWP models the technical implementation of Kalman filtering becomes the main challenge, which provokes the development of novel data assimilation algorithms.

This thesis is concerned with extensions of the Kalman filtering when the assumptions on linearity and Gaussianity of the state space model are violated. The research includes both theoretical studies of the properties of such extensions, within the framework of idealized small-dimensional models, and the development of the data assimilation algorithms for a full scale limited area high resolution NWP forecasting system.

This thesis shows that non-Gaussian state space models can efficiently be approximated by a Gaussian state space model with an adaptively estimated variance of the stochastic forcing. That results in a type of local smoothing, in contrast to the global smoothing provided by Gaussian state space models. With regards to NWP models, the thesis shows that the sequential update of the uncertainty about the model state estimate is essential for efficient extraction of information from observations. The Ensemble Kalman filters can be used to represent both flow- and observation-network-dependent structures of the forecast error covariance matrix, in spite of a severe rank-deficiency of the Ensemble Kalman filters. As a culmination of this research the hybrid variational data assimilation has been developed on top of the HIRLAM variational data assimilation system. It provides the possibility of utilizing, during the data assimilation process, the error-of-the-day structure of the forecast error covariance, estimated from the ensemble of perturbations, at the same time as the full rank of the variational data assimilation is preserved.
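The building block underlying all of the above is the linear Kalman filter recursion; a toy one-dimensional predict/update sketch (hypothetical random-walk model, nothing like the NWP-scale algorithms of the thesis):

```python
import numpy as np

def kalman_step(m, P, y, F, Q, H, R):
    """One predict/update cycle of the linear Kalman filter."""
    # Predict: propagate mean and covariance through the model
    m_pred = F @ m
    P_pred = F @ P @ F.T + Q
    # Update: weigh the observation against the forecast
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = (np.eye(len(m)) - K @ H) @ P_pred
    return m_new, P_new

# Toy 1-D random-walk state observed directly with noise variance 0.5
F = Q = H = np.eye(1)
R = 0.5 * np.eye(1)
m, P = np.zeros(1), np.eye(1)
for y in [np.array([1.0]), np.array([1.2]), np.array([0.9])]:
    m, P = kalman_step(m, P, y, F, Q, H, R)
```

After three observations near 1, the state estimate has moved from the prior mean 0 toward the data, and the posterior variance has contracted below the prior variance.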

• 44.
Stockholm University, Faculty of Science, Department of Mathematics.
SMHI.
Aspects of non-linearities for data assimilation by Kalman filtering in a shallow water modelManuscript (preprint) (Other academic)
• 45.
Stockholm University, Faculty of Science, Department of Mathematics.
SMHI.
The ETKF rescaling scheme in HIRLAM2011In: Tellus. Series A, Dynamic meteorology and oceanography, ISSN 0280-6495, E-ISSN 1600-0870, Vol. 63, no 3, p. 685-401Article in journal (Other academic)

The ETKF rescaling scheme has been implemented into the HIRLAM forecasting system in order to estimate the uncertainty of the model state. The main purpose is to utilize this uncertainty information for modelling of flow-dependent background error covariances within the framework of a hybrid variational ensemble data assimilation scheme. The effects of rank-deficiency in the ETKF formulation are explained and the need for variance inflation as a way to compensate for these effects is justified. A filter spin-up algorithm is proposed as a refinement of the variance inflation. The proposed spin-up algorithm will also act to prevent ensemble collapse since the ensemble will receive 'fresh blood' in the form of additional perturbation components, generated on the basis of a static background error covariance matrix. The resulting ETKF-based ensemble perturbations are compared with ensemble perturbations based on targeted singular vectors and are shown to have more realistic spectral characteristics.
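The variance inflation mentioned above amounts to rescaling the ensemble perturbations so that their spread matches a target variance; a toy sketch with a hypothetical ensemble and target (the full ETKF transform and spin-up algorithm are not shown):

```python
import numpy as np

rng = np.random.default_rng(4)
ens = rng.normal(size=(10, 5))       # 10 ensemble members, 5 state variables
pert = ens - ens.mean(axis=0)        # perturbations about the ensemble mean

target_var = 2.0                     # hypothetical target mean variance
current_var = pert.var(axis=0, ddof=1).mean()
inflation = np.sqrt(target_var / current_var)
pert_inflated = pert * inflation     # rescaled perturbations
```

The rescaling preserves the zero mean of the perturbations while setting their average variance exactly to the target.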

• 46.
Stockholm University, Faculty of Science, Department of Mathematics.
SMHI.
A hybrid variational ensemble data assimilation for the HIgh Resolution Limited Area Model (HIRLAM)Manuscript (preprint) (Other academic)
• 47.
Stockholm University, Faculty of Science, Department of Mathematics.
Stockholm University, Faculty of Science, Department of Mathematics.
Non-Gaussian state space models in decomposition of ice core time series in long and short time-scales2010In: Environmetrics, ISSN 1180-4009, E-ISSN 1099-095X, Vol. 21, no 6, p. 562-587Article in journal (Refereed)

Statistical modelling of six time series of geological ice core chemical data from Greenland is discussed. We decompose the total variation into long time-scale (trend) and short time-scale variations (fluctuations around the trend), and a pure noise component. Too heavy tails of the short-term variation make a standard time-invariant linear Gaussian model inadequate. We try non-Gaussian state space models, which can be efficiently approximated by time-dependent Gaussian models. In essence, these time-dependent Gaussian models result in a local smoothing, in contrast to the global smoothing provided by the time-invariant model. To describe the mechanism of this local smoothing, we utilise the concept of a local variance function derived from a heavy-tailed density. The time-dependent error variance expresses the uncertainty about the dynamical development of the model state, and it controls the influence of observations on the estimates of the model state components. The great advantage of the derived time-dependent Gaussian model is that the Kalman filter and the Kalman smoother can be used as efficient computational tools for performing the variation decomposition. One of the main objectives of the study is to investigate how the distributional assumption on the model error component of the short time-scale variation affects the decomposition.

• 48. Bollobas, Bela
Stockholm University, Faculty of Science, Department of Mathematics.
Bootstrap percolation on Galton-Watson trees2014In: Electronic Journal of Probability, ISSN 1083-6489, E-ISSN 1083-6489, Vol. 19, p. 1-27Article in journal (Refereed)

Bootstrap percolation is a type of cellular automaton which has been used to model various physical phenomena, such as ferromagnetism. For each natural number $r$, the $r$-neighbour bootstrap process is an update rule for vertices of a graph in one of two states: 'infected' or 'healthy'. In consecutive rounds, each healthy vertex with at least $r$ infected neighbours becomes itself infected. Percolation is said to occur if every vertex is eventually infected. Usually, the starting set of infected vertices is chosen at random, with all vertices initially infected independently with probability $p$. In that case, given a graph $G$ and infection threshold $r$, a quantity of interest is the critical probability, $p_c(G, r)$, at which percolation becomes likely to occur. In this paper, we look at infinite trees and, answering a problem posed by Balogh, Peres and Pete, we show that for any $b \geq r$ and for any $\epsilon > 0$ there exists a tree $T$ with branching number $\mathrm{br}(T) = b$ and critical probability $p_c(T, r) < \epsilon$. However, this is false if we limit ourselves to the well-studied family of Galton-Watson trees. We show that for every $r \geq 2$ there exists a constant $c_r > 0$ such that if $T$ is a Galton-Watson tree with branching number $\mathrm{br}(T) = b \geq r$ then $p_c(T, r) > (c_r/b)\, e^{-b/(r-1)}$. We also show that this bound is sharp up to a factor of $O(b)$ by giving an explicit family of Galton-Watson trees with critical probability bounded from above by $C_r e^{-b/(r-1)}$ for some constant $C_r > 0$.
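The r-neighbour bootstrap process described in the abstract is easy to simulate on a finite graph; a minimal sketch on a hypothetical 4-cycle (finite graphs only, not the infinite trees the paper analyzes):

```python
def bootstrap_percolation(adj, seed, r):
    """Run the r-neighbour bootstrap process to completion:
    a healthy vertex with >= r infected neighbours becomes infected.
    Returns the final infected set."""
    infected = set(seed)
    changed = True
    while changed:
        changed = False
        for v, nbrs in adj.items():
            if v not in infected and sum(u in infected for u in nbrs) >= r:
                infected.add(v)
                changed = True
    return infected

# 4-cycle with r = 2: two opposite seeds infect every vertex,
# but two adjacent seeds cannot spread at all.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
final = bootstrap_percolation(adj, {0, 2}, r=2)
```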

• 49. Boqvist, S
Stockholm University, Faculty of Science, Department of Mathematics.
Sources of sporadic Yersinia enterocolitica infection in children in Sweden, 2004: a case-control study2009In: Epidemiology and Infection, ISSN 0950-2688, E-ISSN 1469-4409, Vol. 137, p. 897-905Article in journal (Refereed)

Young children account for a large proportion of reported Yersinia enterocolitica infections in Sweden, with a high incidence compared with other gastrointestinal infections, such as salmonellosis and campylobacteriosis. A case-control study was conducted to investigate selected risk factors for domestic sporadic yersiniosis in children aged 0-6 years in Sweden. In total, 117 cases and 339 controls were included in the study. To minimize exclusion of observations due to missing data, a multiple non-parametric imputation technique was used. The following risk factors were identified in the multivariate analysis: eating food prepared from raw pork products (OR 3.0, 95% CI 1.8-5.1) or treated sausage (OR 1.9, 95% CI 1.1-3.3), use of a baby's dummy (OR 1.9, 95% CI 1.1-3.2) and contact with domestic animals (OR 2.0, 95% CI 1.2-3.4). We believe that the importance of Y. enterocolitica infection in children has been neglected and that results from this study can be used to develop preventive recommendations.
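The odds ratios with 95% confidence intervals reported above follow the standard 2x2-table recipe, OR = ad/bc with a Woolf log-scale interval; a sketch with hypothetical counts (not the study's actual data, which also involved imputation and multivariate adjustment):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf (log-based) 95% CI for a 2x2 table:
    a = exposed cases,    b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical exposure counts
or_, lo, hi = odds_ratio_ci(40, 77, 60, 279)
```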

• 50.
Stockholm University, Faculty of Science.
Epidemics with heterogeneous mixing: stochastic models and statistical tests1996Doctoral thesis, comprehensive summary (Other academic)