1 - 50 of 722
  • 1. Abbadini, Marco
    et al.
    Di Liberti, Ivan
    Stockholm University, Faculty of Science, Department of Mathematics.
    DUALITY FOR COALGEBRAS FOR VIETORIS AND MONADICITY. 2024. In: Journal of Symbolic Logic (JSL), ISSN 0022-4812, E-ISSN 1943-5886. Article in journal (Refereed)
    Abstract [en]

    We prove that the opposite of the category of coalgebras for the Vietoris endofunctor on the category of compact Hausdorff spaces is monadic over $\mathsf{Set}$. We deliver an analogous result for the upper, lower, and convex Vietoris endofunctors acting on the category of stably compact spaces. We provide axiomatizations of the associated (infinitary) varieties. This can be seen as a version of Jónsson-Tarski duality for modal algebras beyond the zero-dimensional setting.

  • 2.
    Abdalla, Mahmoud
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Claesson, Malin
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Stationshöjdens inverkan på resmönster i offentliga lånecykelsystem. 2018. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    The focus of this essay is to study the relationship between the usage pattern of public bike-sharing system subscribers and the location of the individual bike station in terms of height. Our hypothesis is that stations at higher elevations are more likely to be used as points of departure rather than arrival, and the opposite for stations with lower elevations.

    Based on data gathered in Chicago and the Greater Boston Area, we apply a multiple linear regression analysis with relative elevation, gender and age as regressors, and the proportion of departures of the total activity at each station as the regressand. As the estimator, Weighted Least Squares (WLS) is used. The bike stations' activity varies, which might cause skewness in the data, something that WLS can reduce. The result from the regression analysis shows that the relative elevation of the station does influence the station's usage pattern, but the parameter value for this regressor is small. The age and gender parameters are not statistically significant, so we end up with a simple linear regression with relative elevation as the only regressor.

    The conclusion is thus that the relative elevation of the bike station does have a statistically significant impact on the usage pattern in both cities, even though the effect is small. If the study had been undertaken in cities with more varied topography than Boston or Chicago, the result might have been different. We therefore consider this a field that could benefit from more extensive research in the future.
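
    As a rough, hypothetical sketch of the kind of model described above (made-up column names and file, not the authors' data or code), a weighted least squares fit could look like this in Python:

    # Minimal WLS sketch: departure share regressed on relative elevation,
    # weighting stations by their total activity. Column names are hypothetical.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("stations.csv")                 # hypothetical station-level file
    y = df["departure_share"]                        # proportion of departures of total activity
    X = sm.add_constant(df[["relative_elevation"]])
    w = df["total_activity"]                         # more active stations get more weight

    wls_fit = sm.WLS(y, X, weights=w).fit()
    print(wls_fit.summary())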

  • 3. Adolfson, Malin
    et al.
    Laseen, Stefan
    Linde, Jesper
    Villani, Mattias
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Empirical properties of closed- and open-economy DSGE models of the Euro area. 2008. In: Macroeconomic Dynamics, ISSN 1365-1005, E-ISSN 1469-8056, Vol. 12, p. 2-19. Article in journal (Refereed)
    Abstract [en]

    In this paper, we compare the empirical properties of closed- and open-economy DSGE models estimated on Euro area data. The comparison is made along several dimensions; we examine the models in terms of their marginal likelihoods, forecasting performance, variance decompositions, and their transmission mechanisms of monetary policy.

  • 4. Adolfson, Malin
    et al.
    Linde, Jesper
    Villani, Mattias
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Forecasting performance of an open economy DSGE model. 2007. In: Econometric Reviews, ISSN 0747-4938, E-ISSN 1532-4168, Vol. 26, no 2-4, p. 289-328. Article in journal (Refereed)
    Abstract [en]

    This paper analyzes the forecasting performance of an open economy dynamic stochastic general equilibrium (DSGE) model, estimated with Bayesian methods, for the Euro area during 1994Q1-2002Q4. We compare the DSGE model and a few variants of this model to various reduced form forecasting models such as vector autoregressions (VARs) and vector error correction models (VECMs), estimated both by maximum likelihood and two different Bayesian approaches, and traditional benchmark models, e.g., the random walk. The accuracy of point forecasts, interval forecasts and the predictive distribution as a whole is assessed in an out-of-sample rolling event evaluation using several univariate and multivariate measures. The results show that the open economy DSGE model compares well with more empirical models and thus that the tension between rigor and fit in older generations of DSGE models is no longer present. We also critically examine the role of Bayesian model probabilities and other frequently used low-dimensional summaries, e.g., the log determinant statistic, as measures of overall forecasting performance.

  • 5.
    Ahlberg, Daniel
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Angel, Omer
    Kolesnik, Brett
    Annihilating Branching Brownian Motion. 2024. In: International Mathematics Research Notices, ISSN 1073-7928, E-ISSN 1687-0247. Article in journal (Refereed)
    Abstract [en]

    We study an interacting system of competing particles on the real line. Two populations of positive and negative particles evolve according to branching Brownian motion. When opposing particles meet, their charges neutralize and the particles annihilate, as in an inert chemical reaction. We show that, with positive probability, the two populations coexist and that, on this event, the interface is asymptotically linear with a random slope. A variety of generalizations and open problems are discussed.

  • 6.
    Ahlberg, Daniel
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Deijfen, Maria
    Stockholm University, Faculty of Science, Department of Mathematics.
    Sfragara, Matteo
    Stockholm University, Faculty of Science, Department of Mathematics.
    From stability to chaos in last-passage percolation. 2024. In: Bulletin of the London Mathematical Society, ISSN 0024-6093, E-ISSN 1469-2120, Vol. 56, no 1, p. 411-422. Article in journal (Refereed)
    Abstract [en]

    We study the transition from stability to chaos in a dynamic last passage percolation model on $\mathbb{Z}^d$ with random weights at the vertices. Given an initial weight configuration at time 0, we perturb the model over time in such a way that the weight configuration at time t is obtained by resampling each weight independently with probability t. On the cube [0, n]^d, we study geodesics, that is, weight-maximizing up-right paths from (0,0,⋯,0) to (n,n,⋯,n), and their passage time T. Under mild conditions on the weight distribution, we prove a phase transition between stability and chaos at $t\,\mathrm{Var}(T)$. Indeed, as n grows large, for small values of t, the passage times at time 0 and time t are highly correlated, while for large values of t, the geodesics become almost disjoint.

  • 7. Ahmed, S. Ejaz
    et al.
    Fallahpour, Saber
    von Rosen, Dietrich
    von Rosen, Tatjana
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Estimation of Several Intraclass Correlation Coefficients. 2015. In: Communications in Statistics - Simulation and Computation, ISSN 0361-0918, E-ISSN 1532-4141, Vol. 44, no 9, p. 2315-2328. Article in journal (Refereed)
    Abstract [en]

    An intraclass correlation coefficient observed in several populations is estimated. The basis is a variance-stabilizing transformation. It is shown that the intraclass correlation coefficient from any elliptical distribution should be transformed in the same way. Four estimators are compared: an estimator where the components in a vector consisting of the transformed intraclass correlation coefficients are estimated separately, an estimator based on a weighted average of these components, a pretest estimator where the equality of the components is tested and the outcome of the test is used in the estimation procedure, and a James-Stein estimator which shrinks toward the mean.

  • 8.
    Akinyi Lagehäll, Amanda
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Yemane, Elelta
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Multilevel Cox Regression of Transition to Parenthood among Ethiopian Women. 2021. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    The birth of the first child is a special event for a mother, whose life can change dramatically. In Ethiopia, women's timing of entry into motherhood varies between regions. This paper therefore focuses on how birth cohort, education and residence affect the rate of entering motherhood for Ethiopian women in the different regions and in the country as a whole. The dataset is extracted from the 2016 Ethiopia Demographic and Health Survey (EDHS) and contains 15,019 women from 487 different households. For more accurate estimates, the correlation within households is taken into consideration with multilevel survival analysis. The methods used are the Cox proportional hazards model and two frailty models. The results of the paper show that women residing in rural areas have an increased rate of entering motherhood compared to those residing in urban areas, that every age group older than those born 1997 to 2001 has a higher intensity of entering parenthood, and that those with education have a decreased intensity ratio compared to women with no education. The results also show that there is a regional difference in the effect of the estimated ratios of the covariates. Performing the multilevel analysis only changes the estimated effects of the covariates in the cities and one region. It is concluded that the estimated intensity ratio of the multilevel survival analysis only differs from the standard Cox regression when the region is heterogeneous.
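
    A minimal sketch of the baseline model in Python with the lifelines package (hypothetical column names; only cluster-robust standard errors are shown here, not the frailty models used in the thesis for the household level):

    # Cox proportional hazards sketch with cluster-robust errors by household.
    # Columns are hypothetical and assumed numeric/dummy-coded; the thesis
    # additionally fits multilevel (frailty) models, which are not shown here.
    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.read_csv("edhs_women.csv")   # hypothetical extract: one row per woman

    cph = CoxPHFitter()
    cph.fit(
        df[["age_at_first_birth", "event", "rural", "educated", "birth_cohort", "household_id"]],
        duration_col="age_at_first_birth",
        event_col="event",               # 1 = entered motherhood, 0 = censored
        cluster_col="household_id",      # robust errors for within-household correlation
    )
    cph.print_summary()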

  • 9.
    Alfelt, Gustav
    Stockholm University, Faculty of Science, Department of Mathematics.
    Closed-form estimator for the matrix-variate Gamma distribution. 2020. In: Theory of Probability and Mathematical Statistics, ISSN 0094-9000, Vol. 103, p. 137-154. Article in journal (Refereed)
    Abstract [en]

    In this paper we present a novel closed-form estimator for the parameters of the matrix-variate gamma distribution. The estimator relies on the moments of a transformation of the observed matrices, and is compared to the maximum likelihood estimator (MLE) through a simulation study. The study reveals that when the underlying scale matrix parameter is ill-conditioned, or when the shape parameter is close to its lower bound, the suggested estimator outperforms the MLE in terms of sample estimation error. In addition, since the suggested estimator is closed-form, it does not require numerical optimization as the MLE does; it thus needs shorter computation time and is furthermore not subject to start value sensitivity or convergence issues. Finally, regarding the case of general parameter values, using the proposed estimator as a start value in the optimization procedure of the MLE is shown to substantially reduce computation time, in comparison to using arbitrary start values.

  • 10.
    Alfelt, Gustav
    Stockholm University, Faculty of Science, Department of Mathematics.
    Modeling Realized Covariance of Asset Returns. 2019. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    In this thesis, which consists of two papers, we consider the modeling of positive definite symmetric matrices, in particular covariance matrices of financial asset returns. The return covariance matrix describes the magnitude with which prices of financial assets tend to change over time, and how price changes between different assets are related. It is an instrumental quantity in many financial applications, and furthermore, an important component in understanding the dynamics present prior to and during times of financial turbulence, such as the 2008 financial crisis.

    In the first paper, we provide several goodness-of-fit tests applicable to models driven by a centralized Wishart process. To apply such a distributional assumption has become a popular way of modeling the stochastic properties of time-series of realized covariance matrices for asset returns. The paper includes a simulation study that aims to investigate how the tests perform under model uncertainty stemming from parameter estimation. In addition, the presented methods are used to evaluate the fit of a typical model of realized covariance adapted to real data on six stocks traded on the New York Stock Exchange.

    The second paper considers positive definite and symmetric random matrices of the exponential family. Under certain conditions for this class of distributions, we derive the Stein-Haff identity. Furthermore, we determine this identity in the case of the matrix-variate gamma distribution and apply it in order to present an estimator that outperforms the maximum likelihood estimator in terms of Stein's loss function. Finally, a small simulation study is conducted to support the theoretical results.

  • 11.
    Alfelt, Gustav
    Stockholm University, Faculty of Science, Department of Mathematics.
    Modeling the covariance matrix of financial asset returns. 2021. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The covariance matrix of asset returns, which describes the fluctuation of asset prices, plays a crucial role in understanding and predicting financial markets and economic systems. In recent years, the concept of realized covariance measures has become a popular way to accurately estimate return covariance matrices using high-frequency data. This thesis contains five research papers that study time series of realized covariance matrices, estimators for related random matrix distributions, and cases where the sample size is smaller than the number of assets considered.

    Paper I provides several goodness-of-fit tests for discrete realized covariance matrix time series models that are driven by an underlying Wishart process. The test methodology is based on an extended version of Bartlett's decomposition, allowing one to obtain independent and standard normally distributed random variables under the null hypothesis. The paper includes a simulation study that investigates the tests' performance under parameter uncertainty, as well as an empirical application of the popular conditional autoregressive Wishart model fitted to data on six stocks traded over eight and a half years.

    Paper II derives the Stein-Haff identity for exponential random matrix distributions, a class which for example contains the Wishart distribution. It furthermore applies the derived identity to the matrix-variate gamma distribution, providing an estimator that dominates the maximum likelihood estimator in terms of Stein's loss function. Finally, the theoretical results are supported by a simulation study.

    Paper III supplies a novel closed-form estimator for the parameters of the matrix-variate gamma distribution. The estimator appears to have several benefits over the typically applied maximum likelihood estimator, as revealed in a simulation study. Applying the proposed estimator as a start value for the numerical optimization procedure required to find the maximum likelihood estimate is also shown to reduce computation time drastically, when compared to applying arbitrary start values.

    Paper IV introduces a new model for discrete time series of realized covariance matrices that obtain as singular. This case occurs when the matrix dimension is larger than the number of high frequency returns available for each trading day. As the model naturally appears when a large number of assets are considered, the paper also focuses on maintaining estimation feasibility in high dimensions. The model is fitted to 20 years of high frequency data on 50 stocks, and is evaluated by out-of-sample forecast accuracy, where it outperforms the typically considered GARCH model with high statistical significance.

    Paper V is concerned with estimation of the tangency portfolio vector in the case where the number of assets is larger than the available sample size. The estimator contains the Moore-Penrose inverse of a Wishart distributed matrix, an object for which the mean and dispersion matrix are yet to be derived. Although no exact results exist, the paper extends the knowledge of statistical properties in portfolio theory by providing bounds and approximations for the moments of this estimator as well as exact results in special cases. Finally, the properties of the bounds and approximations are investigated through simulations.

  • 12.
    Alfelt, Gustav
    Stockholm University, Faculty of Science, Department of Mathematics.
    Stein-Haff Identity for the Exponential Family. 2018. In: Theory of Probability and Mathematical Statistics, ISSN 0094-9000, Vol. 99, p. 5-17. Article in journal (Refereed)
    Abstract [en]

    In this paper, the Stein-Haff identity is established for positive-definite and symmetric random matrices belonging to the exponential family. The identity is then applied to the matrix-variate gamma distribution, and an estimator that dominates the maximum likelihood estimator in terms of Stein's loss is obtained. Finally, a simulation study is conducted in order to support the theoretical results.

  • 13.
    Alfelt, Gustav
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Bodnar, Taras
    Stockholm University, Faculty of Science, Department of Mathematics.
    Javed, Farrukh
    Tyrcha, Joanna
    Stockholm University, Faculty of Science, Department of Mathematics.
    Singular Conditional Autoregressive Wishart Model for Realized Covariance Matrices. 2023. In: Journal of Business & Economic Statistics, ISSN 0735-0015, E-ISSN 1537-2707, Vol. 41, no 3, p. 833-845. Article in journal (Refereed)
    Abstract [en]

    Realized covariance matrices are often constructed under the assumption that the richness of intra-day return data is greater than the portfolio size, resulting in nonsingular matrix measures. However, when for example the portfolio size is large, assets suffer from illiquidity issues, or market microstructure noise deters sampling on very high frequencies, this relation is not guaranteed. Under these common conditions, realized covariance matrices may obtain as singular by construction. Motivated by this situation, we introduce the Singular Conditional Autoregressive Wishart (SCAW) model to capture the temporal dynamics of time series of singular realized covariance matrices, extending the rich literature on econometric Wishart time series models to the singular case. The model is furthermore developed by covariance targeting adapted to matrices and a sector-wise BEKK specification, allowing excellent scalability to large and extremely large portfolio sizes. Finally, the model is estimated on a 20-year-long time series containing 50 stocks and on a 10-year-long time series containing 300 stocks, and evaluated using out-of-sample forecast accuracy. It outperforms the benchmark models with high statistical significance, and the parsimonious specifications perform better than the baseline SCAW model while using considerably fewer parameters.

  • 14.
    Alfelt, Gustav
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Mazur, Stepan
    On the mean and variance of the estimated tangency portfolio weights for small samples. 2022. In: Modern Stochastics: Theory and Applications, ISSN 2351-6046, Vol. 9, no 4, p. 453-482. Article in journal (Refereed)
    Abstract [en]

    In this paper, a sample estimator of the tangency portfolio (TP) weights is considered. The focus is on the situation where the number of observations is smaller than the number of assets in the portfolio and the returns are i.i.d. normally distributed. Under these assumptions, the sample covariance matrix follows a singular Wishart distribution and, therefore, the regular inverse cannot be taken. In the paper, bounds and approximations for the first two moments of the estimated TP weights are derived, and exact results are obtained when the population covariance matrix is equal to the identity matrix, employing the Moore–Penrose inverse. Moreover, exact moments based on the reflexive generalized inverse are provided. The properties of the bounds are investigated in a simulation study, where they are compared to the sample moments. The difference between the moments based on the reflexive generalized inverse and the sample moments based on the Moore–Penrose inverse is also studied.
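
    For orientation, a small numerical sketch of the estimator with the Moore-Penrose inverse (synthetic data, illustrative risk-free rate and risk aversion; not code from the paper):

    # Sample tangency portfolio weights when the sample size n is smaller than
    # the number of assets p: the sample covariance matrix is singular, so the
    # Moore-Penrose inverse replaces the regular inverse. Synthetic data only.
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 40, 60                                  # fewer observations than assets
    returns = rng.normal(0.001, 0.02, size=(n, p))

    mu_hat = returns.mean(axis=0)
    S_hat = np.cov(returns, rowvar=False)          # rank at most n - 1 < p
    rf, gamma = 0.0005, 3.0                        # illustrative risk-free rate and risk aversion

    w_tp = np.linalg.pinv(S_hat) @ (mu_hat - rf) / gamma
    print(w_tp[:5])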

  • 15.
    Allévius, Benjamin
    Stockholm University, Faculty of Science, Department of Mathematics.
    Scan Statistics for Space-Time Cluster Detection. 2018. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Scan statistics are used by public health agencies to detect and localize disease outbreaks. This thesis provides an overview of scan statistics in the context of prospective disease surveillance and outbreak detection, presents a novel scan statistic to deal with the type of zero-abundant data that is often encountered in these settings, and—perhaps most importantly—implements this and other scan statistics in a freely available and open source R package. Additionally, Markov processes and time series methods are frequently used in many disease surveillance methods. The last part of this thesis presents some computationally efficient methods for density evaluation and simulation of irregularly sampled AR(1) processes, that may be useful when implementing surveillance methods based on these types of processes.

  • 16.
    Allévius, Benjamin
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Höhle, Michael
    Stockholm University, Faculty of Science, Department of Mathematics.
    An unconditional space–time scan statistic for ZIP-distributed data. 2019. In: Scandinavian Journal of Statistics, ISSN 0303-6898, E-ISSN 1467-9469, Vol. 46, no 1, p. 142-159. Article in journal (Refereed)
    Abstract [en]

    A scan statistic is proposed for the prospective monitoring of spatiotemporal count data with an excess of zeros. The method that is based on an outbreak model for the zero‐inflated Poisson distribution is shown to be superior to traditional scan statistics based on the Poisson distribution in the presence of structural zeros. The spatial accuracy and the detection timeliness of the proposed scan statistic are investigated by means of simulation, and an application on the weekly cases of Campylobacteriosis in Germany illustrates how the scan statistic could be used to detect emerging disease outbreaks. An implementation of the method is provided in the open‐source R package scanstatistics available on the Comprehensive R Archive Network.
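
    As a reminder of the distributional building block (the standard zero-inflated Poisson pmf, not the internals of the scanstatistics package), a small sketch in Python:

    # Zero-inflated Poisson pmf: a point mass at zero with probability pi is
    # mixed with a Poisson(lam) distribution. Illustrative parameter values only.
    import numpy as np
    from scipy.stats import poisson

    def zip_pmf(k, lam, pi):
        k = np.asarray(k)
        return np.where(k == 0,
                        pi + (1 - pi) * poisson.pmf(0, lam),
                        (1 - pi) * poisson.pmf(k, lam))

    print(zip_pmf([0, 1, 2, 3], lam=2.0, pi=0.3))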

  • 17.
    Andblom, Mikael
    Stockholm University, Faculty of Science, Department of Mathematics.
    Generalized Bühlmann-Straub credibility theory for correlated data. 2023. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this thesis, we first go through classical results from the field of credibility theory. One of the most well-known models in the field is the Bühlmann-Straub model. The model is relatively straightforward to apply in practice and is widely used. A major advantage of the model is its simplicity and intuitive dependency on its model parameters. From our perspective, the main drawback is the assumption of uncorrelated data. We show that the correlation can be used to cancel observational noise and therefore obtain more accurate estimators. This leads to an extended credibility formula that contains the Bühlmann-Straub model as a special case. This comes at the cost of introducing singularities, which may cause the estimator to behave unexpectedly under certain circumstances. Further research is needed to better understand how often these circumstances are met in practice and whether transforming the optimal weights could be a way forward in such cases. Finally, a simulation study based on real-world data shows that the proposed model outperforms the Bühlmann-Straub model.
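
    For context, the classical uncorrelated-data estimator that the thesis generalizes can be written, in standard (not necessarily the thesis's) notation, as:

    % Classical Buhlmann-Straub credibility premium for group i
    \hat{\mu}_i = Z_i \bar{X}_i + (1 - Z_i)\hat{\mu},
    \qquad
    Z_i = \frac{m_i}{m_i + \hat{\sigma}^2 / \hat{\tau}^2},

    where $\bar{X}_i$ is the exposure-weighted claims average of group $i$ with total exposure $m_i$, $\hat{\sigma}^2$ estimates the expected within-group variance, and $\hat{\tau}^2$ the variance between group means.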

  • 18.
    Andersson, Håkan
    Stockholm University, Faculty of Science.
    Limit theorems for some stochastic epidemic models. 1994. Doctoral thesis, comprehensive summary (Other academic)
  • 19.
    Andersson, Mikael
    Stockholm University, Faculty of Science, Department of Mathematics. Matematisk statistik.
    The asymptotic final size distribution of multitype chain-binomial epidemic processes. 1999. In: Advances in Applied Probability, ISSN 0001-8678, Vol. 31, no 1, p. 220-234. Article in journal (Refereed)
    Abstract [en]

    A multitype chain-binomial epidemic process is defined for a closed finite population by sampling a simple multidimensional counting process at certain points. The final size of the epidemic is then characterized, given the counting process, as the smallest root of a non-linear system of equations. By letting the population grow, this characterization is used, in combination with a branching process approximation and a weak convergence result for the counting process, to derive the asymptotic distribution of the final size. This is done for processes with an irreducible contact structure both when the initial infection increases at the same rate as the population and when it stays fixed.

  • 20.
    Andersson, Mikael
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics. Matematisk statistik.
    Ekdahl, Karl
    Mölstad, Sigvard
    Persson, Kristina
    Hansson, Hans Bertil
    Giesecke, Johan
    Modelling the spread of penicillin-resistant Streptococcus pneumoniae in day-care and evaluation of intervention. 2005. In: Statistics in Medicine, ISSN 0277-6715, Vol. 24, no 23, p. 3593-3607. Article in journal (Refereed)
    Abstract [en]

    In 1995, a disease control and intervention project was initiated in Malmöhus county in southern Sweden to limit the spread of penicillin-resistant pneumococci. Since most of the carriers of pneumococci are preschool children, and since most of the spread is believed to take place in day-care, a mathematical model, in the form of a stochastic process, for the spread in a day-care group was constructed. Effects of seasonal variation and size of the day-care group were particularly considered. The model was then used for comparing results from computer simulations without and with intervention. Results indicate that intervention is highly effective in day-care groups with more than ten children during the second half of the year.

  • 21.
    Andersson, Patrik
    Stockholm University, Faculty of Science, Department of Mathematics.
    Card counting in continuous time. Manuscript (preprint) (Other academic)
  • 22.
    Andersson, Patrik
    Stockholm University, Faculty of Science, Department of Mathematics.
    CARD COUNTING IN CONTINUOUS TIME. 2012. In: Journal of Applied Probability, ISSN 0021-9002, E-ISSN 1475-6072, Vol. 49, no 1, p. 184-198. Article in journal (Refereed)
    Abstract [en]

    We consider the problem of finding an optimal betting strategy for a house-banked casino card game that is played for several coups before reshuffling. The sampling without replacement makes it possible to take advantage of the changes in the expected value as the deck is depleted, making large bets when the game is advantageous. Using such a strategy, which is easy to implement, is known as card counting. We consider the case of a large number of decks, making an approximation to continuous time possible. A limit law of the return process is found and the optimal card counting strategy is derived. This continuous-time strategy is shown to be a natural analog of the discrete-time strategy where the so-called effects of removal are replaced by the infinitesimal generator of the card process.

  • 23.
    Andersson, Patrik
    Stockholm University, Faculty of Science, Department of Mathematics.
    Credit default model for a dynamically changing economy. Article in journal (Refereed)
  • 24.
    Andersson, Patrik
    Stockholm University, Faculty of Science, Department of Mathematics.
    Credit default model for a dynamically changing economy. 2008. Report (Other academic)
    Abstract [en]

    We propose a model describing an economy where companies may default due to contagion. By using standard approximation results for stochastic processes we are able to describe the features of the model. It turns out that the model reproduces the oscillations in the default rates that have been observed empirically. That is, we have an intrinsic oscillation in the economic system without applying any external macroeconomic force. These oscillations can be understood as a cleansing of the unhealthy companies during a recession, with the recession ending when sufficiently many of the unhealthy companies have left the economy. This is important both from a risk management perspective and from a policy perspective, since it shows that contagious defaults may help to explain the oscillations of business cycles. We also investigate the first-passage times of the default process, using this as a proxy for the time to a recession.

  • 25.
    Andersson, Patrik
    Stockholm University, Faculty of Science, Department of Mathematics.
    Four applications of stochastic processes: Contagious disease, credit risk, gambling and bond portfolios. 2011. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis consists of four papers on applications of stochastic processes.

    In Paper I we study an open population SIS (Susceptible - Infective - Susceptible) stochastic epidemic model from the time of introduction of the disease, through a possible outbreak and to extinction. The analysis uses coupling arguments and diffusion approximations.

    In Paper II we propose a model describing an economy where companies may default due to contagion. The features of the model are analyzed using diffusion approximations. We show that the model can reproduce oscillations in the default rates similar to what has been observed empirically.

    In Paper III we consider the problem of finding an optimal betting strategy for a house-banked casino card game that is played for several coups before reshuffling. A limit result for the return process is found and the optimal card counting strategy is derived. This continuous-time strategy is shown to be a natural generalization of the discrete-time strategy where the so-called effects of removals are replaced by the infinitesimal generator of the card process.

    In Paper IV we study interest rate models where the term structure is given by an affine relation and in particular where the driving stochastic processes are so-called generalised Ornstein-Uhlenbeck processes. We show that the return and variance of a portfolio of bonds which are continuously rolled over, also called rolling horizon bonds, can be expressed using the cumulant generating functions of the background driving Lévy processes associated with the OU processes. We also show that if the short rate, in a risk-neutral setting, is given by a linear combination of generalised OU processes, the implied term structure can be expressed in terms of the cumulant generating functions.

  • 26.
    Andersson, Patrik
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Lindenstrand, David
    Stockholm University, Faculty of Science, Department of Mathematics.
    A stochastic SIS epidemic with demography: initial stages and time to extinction. 2011. In: Journal of Mathematical Biology, ISSN 0303-6812, E-ISSN 1432-1416, Vol. 62, no 3, p. 333-348. Article in journal (Refereed)
    Abstract [en]

    We study an open population stochastic epidemic model from the time of introduction of the disease, through a possible outbreak and to extinction. The model describes an SIS (susceptible–infective–susceptible) epidemic where all individuals, including infectious ones, reproduce at a given rate. An approximate expression for the outbreak probability is derived using a coupling argument. Further, we analyse the behaviour of the model close to quasi-stationarity, and the time to disease extinction, with the aid of a diffusion approximation. In this situation the number of susceptibles and infectives behaves as an Ornstein–Uhlenbeck process, centred around the stationary point, for an exponentially distributed time before going extinct.

  • 27.
    Andersson, Patrik
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Nordvall Lagerås, Andreas
    Optimal bond portfolios with fixed time to maturity. Article in journal (Refereed)
    Abstract [en]

    We study interest rate models where the term structure is given by an affine relation and in particular where the driving stochastic processes are so-called generalised Ornstein-Uhlenbeck processes.

    For many institutional investors it is natural to consider investment in bonds where the time to maturity of the bonds in the portfolio is kept fixed over time. We show that the return and variance of such a portfolio of bonds which are continuously rolled over, also called rolling horizon bonds, can be expressed using the cumulant generating functions of the background driving Lévy processes associated with the OU processes. This allows us to calculate the efficient mean-variance portfolio. We exemplify the results by a case study on U.S. Treasury bonds.

    We also show that if the short rate, in a risk-neutral setting, is given by a linear combination of generalised OU processes, the implied term structure can be expressed in terms of the cumulant generating functions. This makes it possible to quite easily see what kind of term structures can be generated with a particular short rate dynamics.

  • 28.
    Andersson, Per Gösta
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    A Classroom Approach to Illustrate Transformation and Bootstrap Confidence Interval Techniques Using the Poisson Distribution. 2017. In: International Journal of Statistics and Probability, ISSN 1927-7032, Vol. 6, no 2, p. 42-53. Article in journal (Refereed)
    Abstract [en]

    The Poisson distribution is here used to illustrate transformation and bootstrap techniques in order to construct a confidence interval for a mean. A comparison is made between the derived intervals and the Wald and score confidence intervals. The discussion takes place in a classroom, where the teacher and the students have previously discussed and evaluated the Wald and score confidence intervals. While getting acquainted with the new techniques step by step and interactively, the students will learn about the effects of e.g. bias and asymmetry and ways of dealing with such phenomena. The primary purpose of this teacher-student communication is therefore not to find the best possible interval estimator for this particular case, but rather to provide a study displaying a teacher and her/his students interacting with each other in an efficient and rewarding way. The teacher has a strategy of encouraging the students to take initiatives. This is accomplished by providing the necessary background of the problem and some underlying theory, after which the students are confronted with questions and problem solving. From this the learning process starts. The teacher has to be flexible according to how the students react. The students are supposed to have studied mathematical statistics for at least two semesters.
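
    A compact numerical sketch of the three interval types discussed (Wald, score, and a nonparametric percentile bootstrap), using synthetic data rather than the article's classroom example:

    # Approximate 95% intervals for a Poisson mean: Wald, score and bootstrap.
    # Synthetic sample; z is the 97.5% standard normal quantile.
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.poisson(lam=3.0, size=30)
    n, xbar, z = x.size, x.mean(), 1.96

    wald = (xbar - z * np.sqrt(xbar / n), xbar + z * np.sqrt(xbar / n))

    # Score interval: solve (xbar - lam)^2 = z^2 * lam / n for lam.
    mid = xbar + z**2 / (2 * n)
    half = z * np.sqrt(xbar / n + z**2 / (4 * n**2))
    score = (mid - half, mid + half)

    # Percentile bootstrap: resample the observations with replacement.
    boot_means = rng.choice(x, size=(10_000, n), replace=True).mean(axis=1)
    boot = tuple(np.quantile(boot_means, [0.025, 0.975]))

    print(wald, score, boot)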

  • 29.
    Andersson, Per Gösta
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    A Classroom Approach to the Construction of an Approximate Confidence Interval of a Poisson Mean Using One Observation. 2015. In: American Statistician, ISSN 0003-1305, E-ISSN 1537-2731, Vol. 69, no 3, p. 160-164. Article in journal (Refereed)
    Abstract [en]

    Even elementary statistical problems may give rise to a deeper and broader discussion of issues in probability and statistics. The construction of an approximate confidence interval for a Poisson mean turns out to be such a case. The simple standard two-sided Wald confidence interval by normal approximation is discussed and compared with the score interval. The discussion is partly in the form of an imaginary dialog between a teacher and a student, where the latter is supposed to have studied mathematical statistics for at least one semester.

  • 30.
    Andersson, Per Gösta
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    A Classroom Approach to the Construction of Bayesian Credible Intervals of a Poisson Mean. 2018. Report (Other academic)
    Abstract [en]

    The Poisson distribution is here used to illustrate Bayesian inference concepts with the ultimate goal to construct credible intervals for a mean. The evaluation of the resulting intervals is in terms of potential negative effects of mismatched priors and posteriors. The discussion is in the form of an imaginary dialogue between a teacher and a student, who have met earlier, discussing and evaluating the Wald and score confidence intervals, as well as confidence intervals based on transformation and bootstrap techniques. From the perspective of the student the learning process is akin to a real research situation. By this time the student is supposed to have studied mathematical statistics for at least two semesters.

  • 31.
    Andersson, Per Gösta
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    A classroom approach to the construction of Bayesian credible intervals of a Poisson mean. 2020. In: Communications in Statistics - Theory and Methods, ISSN 0361-0926, E-ISSN 1532-415X, Vol. 49, no 22, p. 5493-5503. Article in journal (Refereed)
    Abstract [en]

    The Poisson distribution is here used to illustrate Bayesian inference concepts with the ultimate goal to construct credible intervals for a mean. The evaluation of the resulting intervals is in terms of mismatched priors and posteriors. The discussion is in the form of an imaginary dialog between a teacher and a student, who have met earlier, discussing and evaluating the Wald and score confidence intervals, as well as confidence intervals based on transformation and bootstrap techniques. From the perspective of the student the learning process is akin to a real research situation. The student is supposed to have studied mathematical statistics for at least two semesters.

  • 32.
    Andersson, Per Gösta
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Central limit theorems from a teaching perspective. 2015. In: Festschrift in Honor of Hans Nyquist on the Occasion of his 65th Birthday / [ed] Ellinor Fackle-Fornius, Stockholm: Stockholm University, 2015, p. 1-6. Chapter in book (Other academic)
    Abstract [en]

    Central limit theorems and their applications constitute highlights in probability theory and statistical inference. However, as a teacher, especially in undergraduate courses, you are faced with the challenges of how to introduce the results. These challenges especially concern ways of presenting the results and of discussing under which conditions asymptotic (approximate) results hold. This paper attempts to present some relevant examples for possible use in the classroom.

  • 33.
    Andersson, Per Gösta
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Design-based "Optimal" Calibration Weights Under Unit Nonresponse in Survey Sampling2018Report (Other academic)
    Abstract [en]

    High nonresponse is a very common problem in sample surveys today. In statistical terms we are worried about increased bias and variance of estimators for population quantities such as totals or means. Different methods have been suggested in order to compensate for this phenomenon. We can roughly divide them into imputation and calibration, and it is the latter approach we will focus on here. A wide spectrum of possibilities is included in the class of calibration estimators. We explore linear calibration, where we suggest using a nonresponse version of the design-based optimal regression estimator. Comparisons are made between this estimator and a GREG type estimator. Distance measures play a very important part in the construction of calibration estimators. We show that an estimator of the average response propensity (probability) can be included in the "optimal" distance measure under nonresponse, which helps reduce the bias of the resulting estimator. To illustrate empirically the theoretically derived results for the suggested estimators, a simulation study has been carried out. The population is called KYBOK and consists of clerical municipalities in Sweden, where the variables include financial as well as size measurements. The results are encouraging for the "optimal" estimator in combination with the estimated average response propensity, where the bias was highly reduced for the Poisson sampling cases in the study.
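
    A small generic sketch of linear calibration weighting (synthetic data and plain GREG-type weights; the response-propensity-adjusted distance measure proposed in the report is not reproduced):

    # Linear (chi-square distance) calibration: adjust design weights d so that
    # the weighted auxiliary totals hit known population totals t_x. Synthetic data.
    import numpy as np

    rng = np.random.default_rng(2)
    r = 200                                          # responding sample size
    X = np.column_stack([np.ones(r), rng.gamma(2.0, 50.0, size=r)])  # auxiliaries
    d = np.full(r, 25.0)                             # design weights
    t_x = np.array([5000.0, 500000.0])               # known population totals

    lam = np.linalg.solve(X.T @ (d[:, None] * X), t_x - X.T @ d)
    w = d * (1.0 + X @ lam)                          # calibrated weights

    print(np.allclose(X.T @ w, t_x))                 # calibration constraints are met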

  • 34.
    Andersson, Per Gösta
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    The Wald Confidence Interval for a Binomial p as an Illuminating “Bad” Example. 2023. In: American Statistician, ISSN 0003-1305, E-ISSN 1537-2731, Vol. 77, no 4, p. 443-448. Article in journal (Refereed)
    Abstract [en]

    When teaching, we usually not only demonstrate/discuss how a certain method works but also, no less important, why it works. In contrast, the Wald confidence interval for a binomial p constitutes an excellent example of a case where we might be interested in why a method does not work. It has been in use for many years and, sadly enough, it is still to be found in many textbooks in mathematical statistics/statistics. The reasons for not using this interval are plentiful and this fact gives us a good opportunity to discuss all of its deficiencies and draw conclusions which are of more general interest. We will mostly use already known results and bring them together in a manner appropriate to the teaching situation. The main purpose of this article is to show how to stimulate students to take a more critical view of simplifications and approximations. We primarily aim for master’s students who previously have been confronted with the Wilson (score) interval, but parts of the presentation may also be suitable for bachelor’s students.
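
    A brief sketch of the kind of comparison the article builds on: exact coverage of the Wald and Wilson (score) intervals computed by summing binomial probabilities, with illustrative n and p rather than the article's examples:

    # Exact coverage probabilities of the Wald and Wilson intervals for a
    # binomial proportion, obtained by summing binomial probabilities over k.
    import numpy as np
    from scipy.stats import binom

    def coverage(n, p, z=1.96):
        k = np.arange(n + 1)
        phat = k / n
        se = np.sqrt(phat * (1 - phat) / n)
        wald_covers = (phat - z * se <= p) & (p <= phat + z * se)
        centre = (phat + z**2 / (2 * n)) / (1 + z**2 / n)
        half = z * np.sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
        wilson_covers = (centre - half <= p) & (p <= centre + half)
        pmf = binom.pmf(k, n, p)
        return pmf[wald_covers].sum(), pmf[wilson_covers].sum()

    # For a small p the Wald interval falls far below the nominal 95% level,
    # while the Wilson interval stays much closer to it.
    print(coverage(n=50, p=0.02))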

  • 35.
    Andersson, Per Gösta
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Särndal, Carl-Erik
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Calibration for nonresponse treatment: In one or two steps? 2016. In: Statistical Journal of the IAOS, ISSN 1874-7655, E-ISSN 1875-9254, Vol. 32, no 3, p. 375-381. Article in journal (Refereed)
    Abstract [en]

    This paper explores the different ways in which auxiliary information can be put to use in calibrated weighting adjustment under survey nonresponse. Information is often present at two levels, the population level and the sample level. The many options available in executing the calibration derive from several factors: One is the order in which the two sources of information enter into calibration, a choice of a bottom-up as opposed to a top-down approach. Another is whether the calibration should be carried out sequentially in two steps, or in one single step with the combined information. A third question is whether one can simplify the procedure, at no major loss of accuracy, by transcribing individual population auxiliary data from the register to the sample units only. We make a systematic list of the possibilities arising for calibration adjustment in this setting. An empirical study concludes the paper.

  • 36.
    Andersson, Per Gösta
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Särndal, Carl-Erik
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Calibration for nonresponse treatment using auxiliary information at different levels. 2016. In: Proceedings of the Fifth International Conference on Establishment Surveys, 2016. Conference paper (Other academic)
    Abstract [en]

    This paper explores the different ways in which auxiliary information can be put to use in calibrated weighting adjustment under survey nonresponse. Information is often present at two levels, the population level and the sample level. The many options available in executing the calibration derive from several factors: One is the order in which the two sources of information enter into calibration, a choice of a bottom-up as opposed to a top-down approach. Another is whether the calibration should be carried out sequentially in two steps, or in one single step with the combined information. A third question is whether one can simplify the procedure, at no major loss of accuracy, by transcribing individual population auxiliary data from the register to the sample units only. We make a systematic list of the possibilities arising for calibration adjustment in this setting. An empirical study concludes the paper.

  • 37.
    Andreev, Andriy
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Morlanes, Jose Igor
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    On Simulation of a Fractional Ornstein-Uhlenbeck Process of the Second Kind by the Circulant Embedding Method. 2018. In: Stochastic Processes and Applications: SPAS2017, Västerås and Stockholm, Sweden, October 4-6, 2017 / [ed] Sergei Silvestrov, Anatoliy Malyarenko, Milica Rančić, Springer, 2018, p. 155-164. Chapter in book (Refereed)
  • 38.
    Andreev, Andriy
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Morlanes, José Igor
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Simulations-based Study of Covariance Structure for Fractional Ornstein-Uhlenbeck process of the Second Kind. Manuscript (preprint) (Other academic)
  • 39.
    Antonilli, Stefanie
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Embaie, Lydia
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Charlson and Rx-Risk Comorbidity Indices – A Correlation Analysis. 2020. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    The objective of this study was to investigate the utilization of the diagnose-based Charlson Comorbidity Index (CCI) and the medication-based Rx-Risk Comorbidity Index on Swedish administrative data. Data was collected over a ten-year period from the National Patient Register and the National Prescribed Medication Register on 3609 respondents from the national public health survey 2018, aged 16-84 and registered in Stockholm County. The overall aim was to identify comorbid conditions in the study population; and to examine if the identified comorbidities differ between indices, based on subject characteristics such as age and gender. Moreover, the specific aim was to quantify correlation between the indices, as well as within indices over look-back periods of up to ten years.

    Among the study population, 13 % were identified with at least one comorbid condition through CCI, and 87 % had medications indicative of at least one condition covered by Rx-Risk. Both the original Charlson weights and the updated weights by Quan were used to compute the comorbidity scores for CCI. Results showed that where CCI and Quan scored low, Rx-Risk tended to pick up more conditions. The Spearman rank correlation between CCI and Quan scores was relatively high, with a coefficient of 0.82 (p-value < 0.05) over look-back periods of 2, 5 and 10 years. Moreover, the correlation between CCI and Rx-Risk was fairly low over all look-back periods, with a correlation coefficient of 0.34 (p-value < 0.05) at most. The within-index correlations showed that CCI identified much of the comorbidity between the one- and two-year look-back periods, whilst Rx-Risk identified much of the comorbidity within the one-year look-back period.

    The overall implication of the presented results is that using the Charlson index and Rx-Risk is likely to capture comorbid conditions in different health care settings, and the expected correlation between the two indices is therefore of modest level. Which index is preferable when assessing comorbidity should thus be determined by the research question of interest.

  • 40. Armelius, Hanna
    et al.
    Solberger, Martin
    Spånberg, Erik
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Österholm, Pär
    The evolution of the natural rate of interest: evidence from the Scandinavian countries. 2024. In: Empirical Economics, ISSN 0377-7332, E-ISSN 1435-8921, Vol. 66, no 4, p. 1633-1659. Article in journal (Refereed)
    Abstract [en]

    In this paper, the natural rate of interest in Denmark, Norway and Sweden is estimated. This is done by augmenting the Laubach and Williams (Rev Econ Stat 85:1063–1070, 2003) framework with a dynamic factor model linked to economic indicators––a modelling choice which allows us to better identify business cycle fluctuations. We estimate the model using Bayesian methods on data ranging from 1990Q1 to 2022Q4. The results indicate that the natural rate has declined substantially and in all countries is at a low level at the end of the sample. 

  • 41.
    Axelson, Martin
    et al.
    Statistiska centralbyrån, Statistics Sweden.
    Carlson, Michael
    Stockholm University, Faculty of Social Sciences, Department of Statistics. Statistiska centralbyrån, Statistics Sweden.
    Mirza, Hassan
    Statistiska centralbyrån, Statistics Sweden.
    Andersson, Karin
    Statistiska centralbyrån, Statistics Sweden.
    Alternativa datainsamlingsmetoder i ULF, fas 2: En jämförelse mellan två olika datainsamlingsmetoder. 2010. Report (Other academic)
    Abstract [sv]

    This report presents the results from the second and concluding phase of the methodological study carried out within the project Alternative data collection methods for the Survey of Living Conditions (ULF), which started in 2002.

    The main purpose of the study was to compare two different data collection methods: a mixed-mode (MM) approach combining face-to-face and telephone interviews without computer assistance, and computer-assisted telephone interviewing (CATI). Comparisons are reported mainly with respect to four quality aspects: (1) measurement quality, (2) the size of the nonresponse error and its effect on estimates, (3) the response rate in Barn-ULF, and (4) respondents' willingness to participate in the survey.

    The general conclusion of the study is that the systematic error component (measurement and nonresponse error) in the reliability component is judged to remain unchanged in a transition to CATI. Combined with the fact that the transition would free up resources for an increased sample size, this means that the mean squared error (MSE) of the estimates would decrease when moving from the earlier MM approach to CATI as the primary data collection method.

  • 42.
    Ay, Belit
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Efrem, Nabiel
    Benford’s law applied to sale prices on the Swedish housing market. 2021. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Benford’s law is based on the observation that certain digits occur more often than others in a set of numbers. This has led researchers to apply the law in different areas, including identifying digit patterns and manipulated data. To our knowledge, this has not yet been tested on the Swedish housing market. The purpose of this thesis is to examine whether the sale prices of 171 643 tenant-owned apartments in Stockholm, Gothenburg and Malmö follow Benford’s law. Numerous researchers have used the law for testing various types of data, but based solely on the first-digit distribution. This study additionally tests the second digit and the first two digits of our data. The tests used to evaluate our data's conformity to Benford’s law include the Kolmogorov-Smirnov test and the mean absolute deviation (MAD) test. We found that the second digit of sale prices followed Benford’s law, while the first digit and the first two digits did not. The results show that Benford’s law is a good method for identifying certain digit patterns, but further research is needed before concluding that sale prices do not follow Benford’s law, as certain limitations in our data were identified.
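
    A compact, generic sketch of the first-digit comparison described above, using synthetic prices and the MAD criterion (the thesis also examines the second and first-two digits and uses the Kolmogorov-Smirnov test):

    # Compare an observed first-digit distribution with Benford's law using the
    # mean absolute deviation (MAD). Synthetic log-normal "sale prices" only.
    import numpy as np

    rng = np.random.default_rng(3)
    prices = rng.lognormal(mean=14.5, sigma=0.5, size=100_000).astype(np.int64)

    first_digit = np.array([int(str(p)[0]) for p in prices])
    observed = np.bincount(first_digit, minlength=10)[1:] / first_digit.size
    benford = np.log10(1 + 1 / np.arange(1, 10))     # P(first digit = d) = log10(1 + 1/d)

    mad = np.mean(np.abs(observed - benford))
    print(np.round(observed, 4), mad)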

  • 43. Ball, Frank
    et al.
    Pellis, Lorenzo
    Trapman, Pieter
    Stockholm University, Faculty of Science, Department of Mathematics.
    Reproduction numbers for epidemic models with households and other social structures II: Comparisons and implications for vaccination. 2016. In: Mathematical Biosciences, ISSN 0025-5564, E-ISSN 1879-3134, Vol. 274, p. 108-139. Article in journal (Refereed)
    Abstract [en]

    In this paper we consider epidemic models of directly transmissible SIR (susceptible → infective → recovered) and SEIR (with an additional latent class) infections in fully-susceptible populations with a social structure, consisting either of households or of households and workplaces. We review most reproduction numbers defined in the literature for these models, including the basic reproduction number R_0 introduced in the companion paper of this, for which we provide a simpler, more elegant derivation. Extending previous work, we provide a complete overview of the inequalities among these reproduction numbers and resolve some open questions. Special focus is put on the exponential-growth-associated reproduction number R_r, which is loosely defined as the estimate of R_0 based on the observed exponential growth of an emerging epidemic obtained when the social structure is ignored. We show that for the vast majority of the models considered in the literature R_r ≥ R_0 when R_0 ≥ 1 and R_r ≤ R_0 when R_0 ≤ 1. We show that, in contrast to models without social structure, vaccination of a fraction 1 − 1/R_0 of the population, chosen uniformly at random, with a perfect vaccine is usually insufficient to prevent large epidemics. In addition, we provide significantly sharper bounds than the existing ones for bracketing the critical vaccination coverage between two analytically tractable quantities, which we illustrate by means of extensive numerical examples.
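
    For reference, the homogeneous-mixing threshold the abstract contrasts with is the standard textbook result (not a formula from the paper):

    % Critical vaccination coverage under homogeneous mixing, perfect vaccine
    v_c = 1 - \frac{1}{R_0}, \qquad \text{e.g. } R_0 = 2 \;\Rightarrow\; v_c = 0.5 .

    The paper's point is that with household or workplace structure, vaccinating this fraction of the population uniformly at random is usually not enough to prevent large epidemics.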

  • 44. Baresel, Christian
    et al.
    Destouni, Georgia
    Stockholm University, Faculty of Science, Department of Physical Geography and Quaternary Geology.
    Uncertainty-Accounting Environmental Policy and Management of Water Systems2007In: Environmental Science & Technology, Vol. 41, no 10, p. 3653-3659Article in journal (Refereed)
    Abstract [en]

    Environmental policies for water quality and ecosystem management do not commonly require explicit stochastic accounts of uncertainty and risk associated with the quantification and prediction of waterborne pollutant loads and abatement effects. In this study, we formulate and investigate a possible environmental policy that does require an explicit stochastic uncertainty account. We compare both the environmental and economic resource allocation performance of such an uncertainty-accounting environmental policy with that of deterministic, risk-prone and risk-averse environmental policies under a range of different hypothetical, yet still possible, scenarios. The comparison indicates that a stochastic uncertainty-accounting policy may perform better than deterministic policies over a range of different scenarios. Even in the absence of reliable site-specific data, reported literature values appear to be useful for such a stochastic account of uncertainty.
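
    As a purely illustrative sketch of what a stochastic uncertainty account can mean in practice (the load model, parameter values and compliance target below are hypothetical and not taken from the paper), one can propagate uncertainty in pollutant loads and abatement effects by Monte Carlo simulation and ask with what probability a target is met, instead of checking a single deterministic estimate:

import numpy as np

rng = np.random.default_rng(0)
n_sim = 100_000

# Hypothetical annual pollutant load (tonnes) before abatement, with
# literature-style uncertainty, and an uncertain abatement efficiency.
baseline_load = rng.lognormal(mean=np.log(100), sigma=0.3, size=n_sim)
abatement_eff = rng.beta(a=8, b=4, size=n_sim)           # mean about 0.67
residual_load = baseline_load * (1 - abatement_eff)

target = 40.0  # hypothetical load target (tonnes/year)

# Deterministic check: plug in mean values only and get a yes/no answer.
deterministic_ok = baseline_load.mean() * (1 - abatement_eff.mean()) <= target

# Uncertainty-accounting check: probability that the target is actually met.
p_compliance = (residual_load <= target).mean()

print(deterministic_ok, round(p_compliance, 3))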

  • 45. Basri, Layla
    et al.
    Bouggar, Driss
    El Fatini, Mohamed
    El Khalifi, Mohamed
    Stockholm University, Faculty of Science, Department of Mathematics.
    Laaribi, Aziz
    Extinction and persistence in a stochastic Nicholson’s model of blowfly population with delay and Lévy noise2023In: Mathematical Population Studies, ISSN 0889-8480, Vol. 30, no 4, p. 209-228Article in journal (Refereed)
    Abstract [en]

    Existence and uniqueness of a global positive solution are proved for a stochastic Nicholson's equation of a blowfly population with delay and Lévy noise. The first-order moment of the solution is bounded and the mean of its second moment is finite. A threshold quantity Tj, depending on the parameters involved in the drift, the diffusion parameter, and the magnitude and distribution of jumps, determines the long-run behaviour: the blowfly population goes extinct exponentially fast when Tj < 1 and persists when Tj > 1. The case Tj = 1 does not allow for knowing whether the population goes extinct or not.
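
    A rough simulation sketch of a stochastic delayed Nicholson-type blowfly equation with Brownian and compound-Poisson (jump) noise, discretized with a simple Euler scheme, is given below. The drift form, the noise structure and all parameter values are illustrative assumptions; they are not the exact model specification or the threshold quantity of the paper.

import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters of a Nicholson-type blowfly model (assumed values)
p, a, delta = 8.0, 1.0, 1.0       # birth rate, shape, death rate
sigma = 0.2                       # Brownian noise intensity
jump_rate, jump_scale = 0.5, 0.1  # compound-Poisson jump intensity and size
tau, dt, T = 1.0, 0.001, 50.0     # delay, step size, time horizon

n_steps = int(T / dt)
lag = int(tau / dt)
N = np.empty(n_steps + 1)
N[:lag + 1] = 2.0                 # constant positive history on [-tau, 0]

for k in range(lag, n_steps):
    delayed = N[k - lag]
    drift = p * delayed * np.exp(-a * delayed) - delta * N[k]
    diffusion = sigma * N[k] * np.sqrt(dt) * rng.standard_normal()
    # Compound-Poisson jumps: with probability ~ jump_rate*dt, a multiplicative shock.
    jump = N[k] * jump_scale * rng.standard_normal() if rng.random() < jump_rate * dt else 0.0
    N[k + 1] = max(N[k] + drift * dt + diffusion + jump, 0.0)

print(N[-1])  # population size at the end of the horizon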

  • 46. Bauder, David
    et al.
    Bodnar, Taras
    Stockholm University, Faculty of Science, Department of Mathematics.
    Mazur, Stepan
    Okhrin, Yarema
    BAYESIAN INFERENCE FOR THE TANGENT PORTFOLIO2018In: International Journal of Theoretical and Applied Finance, ISSN 0219-0249, Vol. 21, no 8, article id 1850054Article in journal (Refereed)
    Abstract [en]

    In this paper, we consider the estimation of the weights of tangent portfolios from the Bayesian point of view, assuming normal conditional distributions of the logarithmic returns. For diffuse and conjugate priors for the mean vector and the covariance matrix, we derive stochastic representations for the posterior distributions of the tangent portfolio weights and their linear combinations. Separately, we provide the mean and variance of the posterior distributions, which are of key importance for portfolio selection. The analytic results are evaluated within a simulation study, where the precision of coverage intervals is assessed.
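
    As a small illustration of the kind of posterior quantities discussed above, the sketch below draws tangent-portfolio weights from the posterior under a diffuse (Jeffreys) prior with normally distributed returns, using Monte Carlo rather than the paper's analytic stochastic representations. The data, risk-free rate and sample sizes are invented for the example.

import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)

# Hypothetical log-return data: n observations on k assets.
n, k = 250, 4
returns = rng.multivariate_normal(mean=[0.0005, 0.0004, 0.0006, 0.0003],
                                  cov=0.0001 * np.eye(k), size=n)
rf = 0.0001                       # assumed risk-free rate per period
xbar = returns.mean(axis=0)
S = (returns - xbar).T @ (returns - xbar)

def draw_tangent_weights():
    # Diffuse prior: Sigma | data ~ IW(n-1, S), mu | Sigma, data ~ N(xbar, Sigma/n)
    Sigma = invwishart.rvs(df=n - 1, scale=S, random_state=rng)
    mu = rng.multivariate_normal(xbar, Sigma / n)
    excess = mu - rf
    w = np.linalg.solve(Sigma, excess)
    return w / w.sum()            # normalize so the weights sum to one

draws = np.array([draw_tangent_weights() for _ in range(2000)])
print(draws.mean(axis=0))         # posterior means of the weights
print(draws.std(axis=0))          # posterior standard deviations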

  • 47.
    Bergström, Fanny
    et al.
    Stockholm University, Faculty of Science, Department of Mathematics.
    Günther, Felix
    Stockholm University, Faculty of Science, Department of Mathematics.
    Höhle, Michael
    Stockholm University, Faculty of Science, Department of Mathematics.
    Britton, Tom
    Stockholm University, Faculty of Science, Department of Mathematics.
    Bayesian nowcasting with leading indicators applied to COVID-19 fatalities in Sweden2022In: PloS Computational Biology, ISSN 1553-734X, E-ISSN 1553-7358, Vol. 18, no 12, article id e1010767Article in journal (Refereed)
    Abstract [en]

    The real-time analysis of infectious disease surveillance data is essential in obtaining situational awareness about the current dynamics of a major public health event such as the COVID-19 pandemic. This analysis of, e.g., time series of reported cases or fatalities is complicated by reporting delays that lead to under-reporting of the complete number of events for the most recent time points. This can lead to misconceptions by the interpreter, for instance the media or the public, as was the case with the time series of reported fatalities during the COVID-19 pandemic in Sweden. Nowcasting methods provide real-time estimates of the complete number of events using the incomplete time series of currently reported events and information about the reporting delays from the past. In this paper we propose a novel Bayesian nowcasting approach applied to COVID-19-related fatalities in Sweden. We incorporate additional information in the form of time series of the number of reported cases and ICU admissions as leading signals. We demonstrate with a retrospective evaluation that the inclusion of ICU admissions as a leading signal improved the nowcasting performance of case fatalities for COVID-19 in Sweden compared to existing methods.
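
    To make the reporting-delay problem concrete, here is a deliberately simple multiplication-factor ("reporting triangle") nowcast. It is not the Bayesian model with leading indicators proposed in the paper; the delay distribution and counts are made up for illustration.

import numpy as np

# Hypothetical empirical delay distribution, estimated from past complete data:
# probability that an event occurring on day t is reported within d days.
cum_reporting_prob = np.array([0.30, 0.55, 0.75, 0.90, 1.00])  # d = 0..4

# Counts reported so far for the 5 most recent event days
# (the most recent day has had the least time to accumulate reports).
reported_so_far = np.array([58, 49, 37, 22, 9])
days_since_event = np.array([4, 3, 2, 1, 0])

# Multiplication-factor nowcast: scale up by the estimated fraction reported.
fraction_reported = cum_reporting_prob[days_since_event]
nowcast = reported_so_far / fraction_reported

print(np.round(nowcast, 1))  # estimated complete counts per event day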

  • 48.
    Bjermo, Jonas
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Optimal Test Design for Estimation of Mean Ability GrowthManuscript (preprint) (Other academic)
    Abstract [en]

    The design of an achievement test is of importance for many reasons. This paper focuses on the mean ability growth of a population from one school grade to another. By test design, we mean how the test items are allocated with respect to their difficulties. The objective is to estimate the mean ability growth as efficiently as possible. We use the asymptotic expression for the mean ability growth in terms of the test information. With that expression as the criterion for optimization, we use particle swarm optimization to find the optimal design. The optimization function depends on the examinees' abilities, and therefore on the value of the unknown mean ability growth. Hence, we also use an optimum-in-average design. The conclusion is that we should allocate the common items in the middle of the difficulty span, with the items specific to the two separate tests on different sides. When we decrease the difference in mean ability between the groups, the ranges of the common and test items coincide more.
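
    The design criterion referred to above builds on the test information function. For Rasch-type items, a small sketch of how the information at a given ability depends on where the item difficulties are placed (the item model and numbers are illustrative, not the paper's exact setup) is:

import numpy as np

def rasch_test_information(theta, difficulties):
    """Test information at ability theta for Rasch items: sum of p*(1-p)."""
    p = 1.0 / (1.0 + np.exp(-(theta - np.asarray(difficulties))))
    return np.sum(p * (1.0 - p))

# Illustrative comparison: item difficulties clustered mid-range vs. spread out,
# evaluated at an ability typical of one of the groups.
mid_items    = np.linspace(-0.5, 0.5, 20)
spread_items = np.linspace(-2.5, 2.5, 20)
theta = -0.3
print(rasch_test_information(theta, mid_items))     # higher information
print(rasch_test_information(theta, spread_items))  # lower information

# The asymptotic variance of an ability (or growth) estimator behaves like
# 1 / information, so designs that maximize information give smaller variance.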

  • 49.
    Bjermo, Jonas
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Test Design for Mean Ability Growth and Optimal Item Calibration for Achievement Tests2021Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In this thesis, we examine two topics in the area of educational measurement. The first topic studies how to best design two achievement tests with common items such that a population mean-ability growth is measured as precisely as possible. The second examines how to calibrate newly developed test items optimally. These topics are two optimal design problems in achievement testing. Paper I consists of a simulation study where different item difficulty allocations are compared regarding the precision of mean ability growth, controlling for estimation method and item difficulty span. In Paper II, we take a more theoretical approach to how to allocate the item difficulties. We use particle swarm optimization on a multi-objective weighted sum to determine an exact design of the two tests with common items. The outcome relies on asymptotic results of the test information function. The general conclusion of both papers is that we should allocate the common items in the middle of the difficulty span, with the items specific to the two separate tests on different sides. When we decrease the difference in mean ability between the groups, the ranges of the common and test items coincide more.

    In the second part, we examine how to apply an existing optimal calibration method and algorithm using data from the Swedish Scholastic Aptitude Test (SweSAT). We further develop it to take uncertainty in the examinees' ability estimates into account. Paper III compares the optimal calibration method with random allocation of items to examinees in a simulation study using different measures. In most cases, the optimal design method estimates the calibration items more efficiently. We can also identify for which kinds of items the method works worse.

    The method applied in Paper III assumes that the estimated abilities are the true ones. In Paper IV, we further develop the method to handle uncertainty in the ability estimates, which are based on an operational test. We examine the asymptotic result and compare it to the case of known abilities. The optimal design using estimates approaches the optimal design assuming true abilities as the information from the operational test increases.

    Download full text (pdf)
    Test Design for Mean Ability Growth and Optimal Item Calibration for Achievement Tests
    Download (jpg)
    presentationsbild
  • 50.
    Bjermo, Jonas
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Fackle Fornius, Ellinor
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Miller, Frank
    Stockholm University, Faculty of Social Sciences, Department of Statistics.
    Optimal Item Calibration in the Context of the Swedish Scholastic Aptitude TestIn: Article in journal (Other academic)
    Abstract [en]

    Large-scale achievement tests require item banks with items for use in future tests. Before an item is included in the bank, its characteristics need to be estimated. The process of estimating the item characteristics is called item calibration. For the quality of future achievement tests, it is important to perform this calibration well, and it is desirable to estimate the item characteristics as efficiently as possible. Methods of optimal design have been developed to allocate calibration items to examinees with the most suitable abilities. Theoretical evidence shows advantages of using ability-dependent allocation of calibration items. However, it is not clear whether these theoretical results also hold in a real testing situation. In this paper, we investigate the performance of an optimal ability-dependent allocation in the context of the Swedish Scholastic Aptitude Test (SweSAT) and quantify the gain from using the optimal allocation. On average over all items, we see an improved precision of calibration. While this average improvement is moderate, we are able to identify for which kinds of items the method works well. This enables targeting specific item types for optimal calibration. We also discuss possible improvements of the method.
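
    The intuition behind ability-dependent allocation is that an examinee is most informative for calibrating an item when his or her ability lies in a particular range relative to the item's parameters. A small sketch with a two-parameter logistic (2PL) item, where the item parameters and ability grid are invented for illustration, is:

import numpy as np

def item_information_2pl(theta, a, b):
    """Fisher information contributed by a 2PL item with discrimination a and
    difficulty b, used here only as a rough proxy for how informative an
    examinee at ability theta is for calibrating that item."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

abilities = np.linspace(-3, 3, 7)
a, b = 1.2, 0.8              # hypothetical calibration item
info = item_information_2pl(abilities, a, b)
print(np.round(info, 3))
print(abilities[np.argmax(info)])  # examinees with ability near b are most informative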
